Dataset schema (field name: type, observed value range):

- id: string (length 9 to 10)
- submitter: string (length 2 to 52)
- authors: string (length 4 to 6.51k)
- title: string (length 4 to 246)
- comments: string (length 1 to 523)
- journal-ref: string (length 4 to 345)
- doi: string (length 11 to 120)
- report-no: string (length 2 to 243)
- categories: string (length 5 to 98)
- license: string (9 classes)
- abstract: string (length 33 to 3.33k)
- versions: list
- update_date: timestamp[s]
- authors_parsed: list
- prediction: string (1 class)
- probability: float64 (0.95 to 1)
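The records below follow this schema, with a classifier's `prediction` and `probability` columns appended to the standard arXiv metadata. As a minimal sketch of working with such records (the two sample dicts are abbreviated copies of rows below; the `filter_records` helper is hypothetical, not part of the dataset):

```python
# Abbreviated records using the schema fields above (most fields omitted).
records = [
    {"id": "2207.00688", "categories": "cs.CL cs.SD eess.AS",
     "prediction": "new_dataset", "probability": 0.998447},
    {"id": "2207.00796", "categories": "cs.CV eess.IV",
     "prediction": "new_dataset", "probability": 0.953717},
]

def filter_records(records, min_probability=0.99, category=None):
    """Keep records whose classifier confidence clears a threshold and,
    optionally, whose space-separated category list contains a given
    arXiv category."""
    out = []
    for r in records:
        if r["probability"] < min_probability:
            continue
        if category is not None and category not in r["categories"].split():
            continue
        out.append(r)
    return out
```

For example, `filter_records(records)` keeps only the first sample record, while lowering the threshold and asking for `cs.CV` selects the second.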
id: 2207.00688
submitter: Perez Ogayo
authors: Perez Ogayo, Graham Neubig, Alan W Black
title: Building African Voices
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.SD eess.AS
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Modern speech synthesis techniques can produce natural-sounding speech given sufficient high-quality data and compute resources. However, such data is not readily available for many languages. This paper focuses on speech synthesis for low-resourced African languages, from corpus creation to sharing and deploying the Text-to-Speech (TTS) systems. We first create a set of general-purpose instructions on building speech synthesis systems with minimum technological resources and subject-matter expertise. Next, we create new datasets and curate datasets from "found" data (existing recordings) through a participatory approach while considering accessibility, quality, and breadth. We demonstrate that we can develop synthesizers that generate intelligible speech with 25 minutes of created speech, even when recorded in suboptimal environments. Finally, we release the speech data, code, and trained voices for 12 African languages to support researchers and developers.
versions: [ { "version": "v1", "created": "Fri, 1 Jul 2022 23:28:16 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Ogayo", "Perez", "" ], [ "Neubig", "Graham", "" ], [ "Black", "Alan W", "" ] ]
prediction: new_dataset
probability: 0.998447
id: 2207.00691
submitter: Robert Wolfe
authors: Robert Wolfe and Aylin Caliskan
title: American == White in Multimodal Language-and-Image AI
comments: Accepted to AI Ethics and Society 2022
journal-ref: null
doi: null
report-no: null
categories: cs.CY cs.AI cs.CL cs.CV cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
Three state-of-the-art language-and-image AI models, CLIP, SLIP, and BLIP, are evaluated for evidence of a bias previously observed in social and experimental psychology: equating American identity with being White. Embedding association tests (EATs) using standardized images of self-identified Asian, Black, Latina/o, and White individuals from the Chicago Face Database (CFD) reveal that White individuals are more associated with collective in-group words than are Asian, Black, or Latina/o individuals. In assessments of three core aspects of American identity reported by social psychologists, single-category EATs reveal that images of White individuals are more associated with patriotism and with being born in America, but that, consistent with prior findings in psychology, White individuals are associated with being less likely to treat people of all races and backgrounds equally. Three downstream machine learning tasks demonstrate biases associating American with White. In a visual question answering task using BLIP, 97% of White individuals are identified as American, compared to only 3% of Asian individuals. When asked in what state the depicted individual lives, the model responds China 53% of the time for Asian individuals, but always with an American state for White individuals. In an image captioning task, BLIP remarks upon the race of Asian individuals as much as 36% of the time, but never remarks upon race for White individuals. Finally, provided with an initialization image from the CFD and the text "an American person," a synthetic image generator (VQGAN) using the text-based guidance of CLIP lightens the skin tone of individuals of all races (by 35% for Black individuals, based on pixel brightness). The results indicate that biases equating American identity with being White are learned by language-and-image AI, and propagate to downstream applications of such models.
versions: [ { "version": "v1", "created": "Fri, 1 Jul 2022 23:45:56 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Wolfe", "Robert", "" ], [ "Caliskan", "Aylin", "" ] ]
prediction: new_dataset
probability: 0.995197
id: 2207.00750
submitter: Ru He
authors: Chao Yang, Ru He, Fangquan Lin, Suoyuan Song, Jingqiao Zhang, Cheng Yang
title: GUIM -- General User and Item Embedding with Mixture of Representation in E-commerce
comments: 10 pages, 3 figures
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.IR
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Our goal is to build a general representation (embedding) for each user and each product item across Alibaba's businesses, including Taobao and Tmall, which are among the world's biggest e-commerce websites. The representation of users and items plays a critical role in various downstream applications, including recommendation systems, search, marketing, and demand forecasting. Inspired by the BERT model in the natural language processing (NLP) domain, we propose a GUIM (General User Item embedding with Mixture of representation) model to achieve this goal with massive, structured, multi-modal data, including the interactions among hundreds of millions of users and items. We utilize mixture of representation (MoR) as a novel representation form to model the diverse interests of each user. In addition, we use InfoNCE from contrastive learning to avoid the intractable computational costs caused by the enormous size of the item (token) vocabulary. Finally, we propose a set of representative downstream tasks to serve as a standard benchmark for evaluating the quality of the learned user and/or item embeddings, analogous to the GLUE benchmark in the NLP domain. Our experimental results on these downstream tasks clearly show the comparative value of the embeddings learned by our GUIM model.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 06:27:54 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Yang", "Chao", "" ], [ "He", "Ru", "" ], [ "Lin", "Fangquan", "" ], [ "Song", "Suoyuan", "" ], [ "Zhang", "Jingqiao", "" ], [ "Yang", "Cheng", "" ] ]
prediction: new_dataset
probability: 0.969775
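GUIM trains its embeddings with the InfoNCE objective from contrastive learning to avoid a softmax over the full item vocabulary. The abstract does not give the paper's exact formulation, so the following is only a generic in-batch InfoNCE sketch in NumPy (the `temperature` value and the in-batch-negatives setup are assumptions, not taken from the paper):

```python
import numpy as np

def info_nce_loss(user_emb, item_emb, temperature=0.1):
    """In-batch InfoNCE: the positive item for user i is item i;
    every other item in the batch serves as a negative."""
    # Normalize rows so dot products become cosine similarities.
    u = user_emb / np.linalg.norm(user_emb, axis=1, keepdims=True)
    v = item_emb / np.linalg.norm(item_emb, axis=1, keepdims=True)
    logits = (u @ v.T) / temperature               # (batch, batch) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Positive pairs sit on the diagonal; loss is -log p(positive), averaged.
    return float(-np.mean(np.diag(log_probs)))
```

When user and item embeddings are perfectly aligned, the loss is near zero; with mismatched embeddings it grows toward log(batch size).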
id: 2207.00758
submitter: Akari Asai
authors: Akari Asai, Shayne Longpre, Jungo Kasai, Chia-Hsuan Lee, Rui Zhang, Junjie Hu, Ikuya Yamada, Jonathan H. Clark, Eunsol Choi
title: MIA 2022 Shared Task: Evaluating Cross-lingual Open-Retrieval Question Answering for 16 Diverse Languages
comments: NAACL Workshop on Multilingual Information Access
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by/4.0/
abstract:
We present the results of the Workshop on Multilingual Information Access (MIA) 2022 Shared Task, evaluating cross-lingual open-retrieval question answering (QA) systems in 16 typologically diverse languages. In this task, we adapted two large-scale cross-lingual open-retrieval QA datasets in 14 typologically diverse languages, and newly annotated open-retrieval QA data in 2 underrepresented languages: Tagalog and Tamil. Four teams submitted their systems. The best system leveraging iteratively mined diverse negative examples and larger pretrained models achieves 32.2 F1, outperforming our baseline by 4.5 points. The second best system uses entity-aware contextualized representations for document retrieval, and achieves significant improvements in Tamil (20.8 F1), whereas most of the other systems yield nearly zero scores.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 06:54:10 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Asai", "Akari", "" ], [ "Longpre", "Shayne", "" ], [ "Kasai", "Jungo", "" ], [ "Lee", "Chia-Hsuan", "" ], [ "Zhang", "Rui", "" ], [ "Hu", "Junjie", "" ], [ "Yamada", "Ikuya", "" ], [ "Clark", "Jonathan H.", "" ], [ "Choi", "Eunsol", "" ] ]
prediction: new_dataset
probability: 0.996485
id: 2207.00785
submitter: Ebrahim Chekol Jibril
authors: Ebrahim Chekol Jibril and A. Cüneyd Tantuğ
title: ANEC: An Amharic Named Entity Corpus and Transformer Based Recognizer
comments: 22 pages including references and indexes, 10 figures and 6 tables
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
Named Entity Recognition is an information extraction task that serves as a preprocessing step for other natural language processing tasks, such as machine translation, information retrieval, and question answering. Named entity recognition enables the identification of proper names as well as temporal and numeric expressions in open-domain text. For Semitic languages such as Arabic, Amharic, and Hebrew, the named entity recognition task is more challenging due to the heavily inflected structure of these languages. In this paper, we present an Amharic named entity recognition system based on a bidirectional long short-term memory network with a conditional random fields layer. We annotate a new Amharic named entity recognition dataset (8,070 sentences containing 182,691 tokens) and apply the Synthetic Minority Over-sampling Technique to our dataset to mitigate the imbalanced classification problem. Our named entity recognition system achieves an F_1 score of 93%, a new state-of-the-art result for Amharic named entity recognition.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 09:50:37 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Jibril", "Ebrahim Chekol", "" ], [ "Tantuğ", "A. Cüneyd", "" ] ]
prediction: new_dataset
probability: 0.999495
id: 2207.00794
submitter: Tian-Zhu Xiang
authors: Yujia Sun, Shuo Wang, Chenglizhao Chen, Tian-Zhu Xiang
title: Boundary-Guided Camouflaged Object Detection
comments: Accepted by IJCAI2022
journal-ref: IJCAI2022
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Camouflaged object detection (COD), segmenting objects that blend seamlessly into their surroundings, is a valuable yet challenging task. Existing deep-learning methods often struggle to accurately identify the camouflaged object with a complete and fine structure. To this end, we propose a novel boundary-guided network (BGNet) for camouflaged object detection. Our method exploits valuable extra object-related edge semantics to guide representation learning for COD, which forces the model to generate features that highlight object structure, thereby promoting camouflaged object detection with accurate boundary localization. Extensive experiments on three challenging benchmark datasets demonstrate that our BGNet significantly outperforms 18 existing state-of-the-art methods under four widely used evaluation metrics. Our code is publicly available at: https://github.com/thograce/BGNet.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 10:48:35 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Sun", "Yujia", "" ], [ "Wang", "Shuo", "" ], [ "Chen", "Chenglizhao", "" ], [ "Xiang", "Tian-Zhu", "" ] ]
prediction: new_dataset
probability: 0.993206
id: 2207.00796
submitter: Yuping Ye
authors: Yuping Ye, Siyuan Chen and Zhan Song
title: Benchmarks for Industrial Inspection Based on Structured Light
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV eess.IV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Robustness and accuracy are two critical metrics for industrial inspection. In this paper, we propose benchmarks for evaluating the performance of structured light methods. Our evaluation metric was derived from a large number of inspection tasks in factories and consists of four detailed criteria: flatness, length, height, and sphericity. With this metric, we can quickly judge whether a structured light method or device is suitable for a specified inspection task. In the final experimental section, a structured light device built for Type-C pin needle inspection is evaluated with our metrics.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 11:09:05 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Ye", "Yuping", "" ], [ "Chen", "Siyuan", "" ], [ "Song", "Zhan", "" ] ]
prediction: new_dataset
probability: 0.953717
id: 2207.00801
submitter: Keita Ishizuka
authors: Keita Ishizuka
title: Construction of quaternary Hermitian LCD codes
comments: 15 pages
journal-ref: null
doi: null
report-no: null
categories: cs.IT cs.DM math.CO math.IT
license: http://creativecommons.org/licenses/by/4.0/
abstract:
We introduce a general construction of many Hermitian LCD $[n, k]$ codes from a given Hermitian LCD $[n, k]$ code. Furthermore, we present some results on punctured codes and shortened codes of quaternary Hermitian LCD codes. As an application, we improve some of the previously known lower bounds on the largest minimum weights of quaternary Hermitian LCD codes of length $12 \le n \le 30$.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 11:30:53 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Ishizuka", "Keita", "" ] ]
prediction: new_dataset
probability: 0.999612
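For reference, the abstract above does not restate the standard definition it relies on: a quaternary linear $[n, k]$ code $C$ is Hermitian LCD (linear complementary dual) when its Hermitian hull is trivial,

```latex
C \cap C^{\perp_H} = \{\mathbf{0}\},
\qquad
C^{\perp_H} = \bigl\{\, x \in \mathbb{F}_4^{\,n} \;\bigm|\; \langle x, c \rangle_H = 0 \ \text{for all } c \in C \,\bigr\},
\qquad
\langle x, y \rangle_H = \sum_{i=1}^{n} x_i \, y_i^{\,2},
```

where $y \mapsto y^2$ is the conjugation (Frobenius) map on $\mathbb{F}_4$.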
id: 2207.00804
submitter: Xingyu Wu
authors: Xingyu Wu and Jinyang Li
title: An AIoT-enabled Autonomous Dementia Monitoring System
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG cs.AI cs.NA math.NA
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
An autonomous Artificial Internet of Things (AIoT) system for monitoring elderly dementia patients in a smart home is presented. The system implements two main functions based on activity inference from sensor data: real-time abnormal activity monitoring and trend prediction of disease-related activities. Specifically, the CASAS dataset is employed to train a Random Forest (RF) model for activity inference. Another RF model, trained on the output of the activity-inference model, is then used for abnormal activity monitoring. RF is chosen for these tasks because of its balanced trade-offs among accuracy, time efficiency, flexibility, and interpretability. Moreover, a Long Short-Term Memory (LSTM) network is utilised to forecast the disease-related activity trend of a patient. The two RF classifiers designed for activity inference and abnormal activity detection achieve accuracies greater than 99 percent and 94 percent, respectively. Furthermore, using sleep duration as an example, the LSTM model produces accurate and clear predictions of future trends.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 11:36:16 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Wu", "Xingyu", "" ], [ "Li", "Jinyang", "" ] ]
prediction: new_dataset
probability: 0.993888
id: 2207.00806
submitter: Ágoston Sipos
authors: Ágoston Sipos
title: Corner-based implicit patches
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.GR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Multi-sided surfaces are often defined by side interpolants (also called ribbons), i.e. the surface has to connect to the ribbons with a prescribed degree of smoothness. The I-patch is such a family of implicit surfaces capable of interpolating an arbitrary number of ribbons and can be used in design and approximation. While in the case of parametric surfaces describing ribbons is a well-discussed problem, defining implicit ribbons is a different task. This paper will introduce corner I-patches, a new representation that describes implicit surfaces based on corner interpolants. Those may be defined with much simpler surfaces, while the shape of the patch will depend on a handful of scalar parameters. Continuity between patches will be enforced via constraints on these parameters. Corner I-patches have several favorable properties that can be exploited for example in volume rendering or approximation.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 11:44:35 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Sipos", "Ágoston", "" ] ]
prediction: new_dataset
probability: 0.999285
id: 2207.00837
submitter: Jingyao Wang
authors: Jingyao Wang, Naigong Yu
title: UTD-Yolov5: A Real-time Underwater Targets Detection Method based on Attention Improved YOLOv5
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.RO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
As the treasure house of nature, the ocean contains abundant resources. But coral reefs, which are crucial to the sustainable development of marine life, are facing a severe crisis because of COTS and other organisms. Protection through manual labor is limited and inefficient, and the unpredictable marine environment also makes manual operations risky. The use of robots for underwater operations has therefore become a trend. However, underwater image acquisition suffers from weak light, low resolution, and many interferences, and existing target detection algorithms are not effective in this setting. Based on this, we propose an underwater target detection algorithm based on attention-improved YOLOv5, called UTD-Yolov5. It can quickly and efficiently detect COTS, which in turn provides a prerequisite for complex underwater operations. We adjusted the original network architecture of YOLOv5 in multiple stages, including: replacing the original backbone with a two-stage cascaded CSP (CSP2); introducing the visual channel attention mechanism module SE; and designing a random anchor box similarity calculation method. These operations enable UTD-Yolov5 to detect more flexibly and capture features more accurately. To make the network more efficient, we also propose optimization methods such as WBF and an iterative refinement mechanism. This paper conducts extensive experiments based on the CSIRO dataset [1]. The results show that the average accuracy of our UTD-Yolov5 reaches 78.54%, a great improvement over the baseline.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 14:09:08 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Wang", "Jingyao", "" ], [ "Yu", "Naigong", "" ] ]
prediction: new_dataset
probability: 0.983767
id: 2207.00843
submitter: EPTCS
authors: Joris Ceulemans (KU Leuven), Andreas Nuyts (KU Leuven), Dominique Devriese (KU Leuven)
title: Sikkel: Multimode Simple Type Theory as an Agda Library
comments: In Proceedings MSFP 2022, arXiv:2206.09534
journal-ref: EPTCS 360, 2022, pp. 93-112
doi: 10.4204/EPTCS.360.5
report-no: null
categories: cs.PL cs.LO
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Many variants of type theory extend a basic theory with additional primitives or properties like univalence, guarded recursion or parametricity, to enable constructions or proofs that would be harder or impossible to do in the original theory. However, implementing such extended type theories (either from scratch or by modifying an existing implementation) is a big hurdle for their wider adoption. In this paper we present Sikkel, a library in the dependently typed programming language Agda that allows users to program in extended type theories. It uses a deeply embedded language that can be easily extended with additional type and term constructors, thus supporting a wide variety of type theories. Moreover, Sikkel has a type checker that is sound by construction in the sense that all well-typed programs are automatically translated to their semantics in a shallow embedding based on presheaf models. Additionally, our model supports combining different base categories by using modalities to transport definitions between them. This enables in particular a general approach for extracting definitions to the meta-level, so that we can use the extended type theories to define regular Agda functions and prove properties of them. In this paper, we demonstrate Sikkel with theories for guarded recursion and parametricity, but other extensions can easily be plugged in. For now, Sikkel supports only simple type theories, but its model already anticipates the future addition of dependent types and a universe.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 14:24:04 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Ceulemans", "Joris", "", "KU Leuven" ], [ "Nuyts", "Andreas", "", "KU Leuven" ], [ "Devriese", "Dominique", "", "KU Leuven" ] ]
prediction: new_dataset
probability: 0.998358
id: 2207.00851
submitter: EPTCS
authors: Dylan McDermott, Tarmo Uustalu
title: What Makes a Strong Monad?
comments: In Proceedings MSFP 2022, arXiv:2206.09534
journal-ref: EPTCS 360, 2022, pp. 113-133
doi: 10.4204/EPTCS.360.6
report-no: null
categories: cs.LO cs.PL math.CT
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Strong monads are important for several applications, in particular, in the denotational semantics of effectful languages, where strength is needed to sequence computations that have free variables. Strength is non-trivial: it can be difficult to determine whether a monad has any strength at all, and monads can be strong in multiple ways. We therefore review some of the most important known facts about strength and prove some new ones. In particular, we present a number of equivalent characterizations of strong functor and strong monad, and give some conditions that guarantee existence or uniqueness of strengths. We look at strength from three different perspectives: actions of a monoidal category V, enrichment over V, and powering over V. We are primarily motivated by semantics of effects, but the results are also useful in other contexts.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 14:36:10 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "McDermott", "Dylan", "" ], [ "Uustalu", "Tarmo", "" ] ]
prediction: new_dataset
probability: 0.998
id: 2207.00896
submitter: Klen Čopič Pucihar
authors: Maheshya Weerasinghe, Verena Biener, Jens Grubert, Aaron J Quigley, Alice Toniolo, Klen Čopič Pucihar and Matjaž Kljun
title: VocabulARy: Learning Vocabulary in AR Supported by Keyword Visualisations
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.HC cs.GR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Learning vocabulary in a primary or secondary language is enhanced when we encounter words in context. This context can be afforded by the place or activity we are engaged with. Existing learning environments include formal learning, mnemonics, flashcards, use of a dictionary or thesaurus, all leading to practice with new words in context. In this work, we propose an enhancement to the language learning process by providing the user with words and learning tools in context, with VocabulARy. VocabulARy visually annotates objects in AR, in the user's surroundings, with the corresponding English (first language) and Japanese (second language) words to enhance the language learning process. In addition to the written and audio description of each word, we also present the user with a keyword and its visualisation to enhance memory retention. We evaluate our prototype by comparing it to an alternate AR system that does not show an additional visualisation of the keyword, and, also, we compare it to two non-AR systems on a tablet, one with and one without visualising the keyword. Our results indicate that AR outperforms the tablet system regarding immediate recall, mental effort and task completion time. Additionally, the visualisation approach scored significantly higher than showing only the written keyword with respect to immediate and delayed recall and learning efficiency, mental effort and task-completion time.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 18:35:22 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Weerasinghe", "Maheshya", "" ], [ "Biener", "Verena", "" ], [ "Grubert", "Jens", "" ], [ "Quigley", "Aaron J", "" ], [ "Toniolo", "Alice", "" ], [ "Pucihar", "Klen Čopič", "" ], [ "Kljun", "Matjaž", "" ] ]
prediction: new_dataset
probability: 0.996669
id: 2207.00913
submitter: Yuhao Nie
authors: Yuhao Nie, Xiatong Li, Andea Scott, Yuchi Sun, Vignesh Venugopal, Adam Brandt
title: SKIPP'D: a SKy Images and Photovoltaic Power Generation Dataset for Short-term Solar Forecasting
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Large-scale integration of photovoltaics (PV) into electricity grids is challenged by the intermittent nature of solar power. Sky-image-based solar forecasting using deep learning has been recognized as a promising approach to predicting the short-term fluctuations. However, there are few publicly available standardized benchmark datasets for image-based solar forecasting, which limits the comparison of different forecasting models and the exploration of forecasting methods. To fill these gaps, we introduce SKIPP'D -- a SKy Images and Photovoltaic Power Generation Dataset. The dataset contains three years (2017-2019) of quality-controlled down-sampled sky images and PV power generation data that are ready to use for short-term solar forecasting using deep learning. In addition, to support flexibility in research, we provide the high-resolution, high-frequency sky images and PV power generation data as well as the concurrent sky video footage. We also include a code base containing data processing scripts and baseline model implementations for researchers to reproduce our previous work and accelerate their research in solar forecasting.
versions: [ { "version": "v1", "created": "Sat, 2 Jul 2022 21:52:50 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Nie", "Yuhao", "" ], [ "Li", "Xiatong", "" ], [ "Scott", "Andea", "" ], [ "Sun", "Yuchi", "" ], [ "Venugopal", "Vignesh", "" ], [ "Brandt", "Adam", "" ] ]
prediction: new_dataset
probability: 0.99888
id: 2207.00928
submitter: Jing Li
authors: Qidan Zhu, Jing Li, Fei Yuan, Quan Gan
title: Continuous Sign Language Recognition via Temporal Super-Resolution Network
comments: 13 pages, 11 figures
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Deep-learning-based spatial-temporal hierarchical models for continuous sign language recognition (CSLR) require a large amount of computation, which limits their real-time application. To address this problem, this paper proposes a temporal super-resolution network (TSRNet), which reconstructs the data into a dense feature sequence to reduce the overall model computation while keeping the loss in final recognition accuracy to a minimum. The CSLR model with TSRNet consists of three parts: frame-level feature extraction, time-series feature extraction, and the TSRNet, which sits between the other two parts and mainly comprises two branches: a detail descriptor and a rough descriptor. The sparse frame-level features are fused through the features obtained by the two designed branches to form the reconstructed dense frame-level feature sequence, and connectionist temporal classification (CTC) loss is used for training and optimization after the time-series feature extraction part. To better recover semantic-level information, the overall model is trained with the self-generating adversarial training method proposed in this paper to reduce the model error rate; this method regards the TSRNet as the generator, and the frame-level and temporal processing parts as the discriminator. In addition, to unify the evaluation criteria of model accuracy loss under different benchmarks, this paper proposes the word error rate deviation (WERD), defined as the error rate between the estimated word error rate (WER) obtained from the reconstructed frame-level feature sequence and the reference WER obtained from the complete original frame-level feature sequence. Experiments on two large-scale sign language datasets demonstrate the effectiveness of the proposed model.
versions: [ { "version": "v1", "created": "Sun, 3 Jul 2022 00:55:45 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Zhu", "Qidan", "" ], [ "Li", "Jing", "" ], [ "Yuan", "Fei", "" ], [ "Gan", "Quan", "" ] ]
prediction: new_dataset
probability: 0.995005
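The WERD metric in the TSRNet abstract compares a WER measured on the reconstructed feature sequence against a reference WER from the complete original sequence. The abstract gives no formula, so the sketch below assumes the simplest reading, an absolute deviation between the two rates; `wer` is the standard edit-distance-based word error rate:

```python
def edit_distance(ref, hyp):
    # Classic Levenshtein distance over word lists
    # (unit cost for substitution, insertion, and deletion).
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)]

def wer(ref_words, hyp_words):
    # Word error rate: edit distance normalized by reference length.
    return edit_distance(ref_words, hyp_words) / max(len(ref_words), 1)

def werd(wer_estimated, wer_reference):
    # Hypothetical reading of WERD: deviation of the estimated WER
    # (reconstructed sequence) from the reference WER (original sequence).
    return abs(wer_estimated - wer_reference)
```

For example, one substitution in a four-word hypothesis yields a WER of 0.25, and two WERs of 0.30 and 0.25 give a WERD of 0.05 under this reading.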
id: 2207.00942
submitter: Nathaniel Hanson
authors: Nathaniel Hanson, Tarik Kelestemur, Deniz Erdogmus, Taskin Padir
title: Pregrasp Object Material Classification by a Novel Gripper Design with Integrated Spectroscopy
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.RO
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Robots benefit from being able to classify objects they interact with or manipulate based on their material properties. This capability ensures fine manipulation of complex objects through proper grasp pose and force selection. Prior work has focused on haptic or visual processing to determine material type at grasp time. In this work, we introduce a novel parallel robot gripper design and a method for collecting spectral readings and visual images from within the gripper finger. We train a nonlinear Support Vector Machine (SVM) that can classify the material of the object about to be grasped through recursive estimation, with increasing confidence as the distance from the fingertips to the object decreases. In order to validate the hardware design and classification method, we collect samples from 16 real and fake fruit varieties (composed of polystyrene/plastic), resulting in a dataset containing spectral curves, scene images, and high-resolution texture images as the objects are grasped, lifted, and released. Our modeling method demonstrates an accuracy of 96.4% in classifying objects in a 32-class decision problem. This represents a performance improvement of 29.4% over state-of-the-art computer vision algorithms at distinguishing between visually similar materials. In contrast to prior work, our recursive estimation model accounts for increasing spectral signal strength and allows for decisions to be made as the gripper approaches an object. We conclude that spectroscopy is a promising sensing modality for enabling robots to not only classify grasped objects but also understand their underlying material composition.
versions: [ { "version": "v1", "created": "Sun, 3 Jul 2022 03:14:45 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Hanson", "Nathaniel", "" ], [ "Kelestemur", "Tarik", "" ], [ "Erdogmus", "Deniz", "" ], [ "Padir", "Taskin", "" ] ]
prediction: new_dataset
probability: 0.996429
id: 2207.00960
submitter: Dhruv Makwana
authors: Subhrajit Nag, Dhruv Makwana, Sai Chandra Teja R, Sparsh Mittal, C Krishna Mohan
title: WaferSegClassNet -- A Light-weight Network for Classification and Segmentation of Semiconductor Wafer Defects
comments: 11 pages, 2 figures, 7 tables, Published in Computers in Industry
journal-ref: Volume 142, 2022, 103720, ISSN 0166-3615
doi: 10.1016/j.compind.2022.103720
report-no: null
categories: cs.CV cs.LG eess.IV
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract:
As the integration density and design intricacy of semiconductor wafers increase, the magnitude and complexity of defects in them are also on the rise. Since the manual inspection of wafer defects is costly, an automated artificial intelligence (AI) based computer-vision approach is highly desired. The previous works on defect analysis have several limitations, such as low accuracy and the need for separate models for classification and segmentation. For analyzing mixed-type defects, some previous works require separately training one model for each defect type, which is non-scalable. In this paper, we present WaferSegClassNet (WSCN), a novel network based on encoder-decoder architecture. WSCN performs simultaneous classification and segmentation of both single and mixed-type wafer defects. WSCN uses a "shared encoder" for classification and segmentation, which allows training WSCN end-to-end. We use N-pair contrastive loss to first pretrain the encoder, and then use BCE-Dice loss for segmentation and categorical cross-entropy loss for classification. The use of N-pair contrastive loss helps achieve a better embedding representation in the latent dimension of wafer maps. WSCN has a model size of only 0.51 MB and performs only 0.2M FLOPs. Thus, it is much lighter than other state-of-the-art models. Also, it requires only 150 epochs for convergence, compared to 4,000 epochs needed by a previous work. We evaluate our model on the MixedWM38 dataset, which has 38,015 images. WSCN achieves an average classification accuracy of 98.2% and a dice coefficient of 0.9999. We are the first to show segmentation results on the MixedWM38 dataset. The source code can be obtained from https://github.com/ckmvigil/WaferSegClassNet.
versions: [ { "version": "v1", "created": "Sun, 3 Jul 2022 05:46:19 GMT" } ]
update_date: 2022-07-05T00:00:00
authors_parsed: [ [ "Nag", "Subhrajit", "" ], [ "Makwana", "Dhruv", "" ], [ "R", "Sai Chandra Teja", "" ], [ "Mittal", "Sparsh", "" ], [ "Mohan", "C Krishna", "" ] ]
prediction: new_dataset
probability: 0.966847
id: 2207.00964
submitter: Jiajun Chai
authors: Jiajun Chai, Yuanheng Zhu, Dongbin Zhao
title: NVIF: Neighboring Variational Information Flow for Large-Scale Cooperative Multi-Agent Scenarios
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.MA
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Communication-based multi-agent reinforcement learning (MARL) provides information exchange between agents, which promotes cooperation. However, existing methods do not perform well in large-scale multi-agent systems. In this paper, we adopt neighboring communication and propose Neighboring Variational Information Flow (NVIF) to provide efficient communication between agents. It employs a variational auto-encoder to compress the shared information into a latent state. This communication protocol does not depend on a specific task, so it can be pre-trained to stabilize MARL training. Besides, we combine NVIF with Proximal Policy Optimization (NVIF-PPO) and Deep Q Network (NVIF-DQN), and present a theoretical analysis illustrating that NVIF-PPO can promote cooperation. We evaluate NVIF-PPO and NVIF-DQN on MAgent, a widely used large-scale multi-agent environment, with two tasks of different map sizes. Experiments show that our method outperforms the compared methods and can learn effective and scalable cooperation strategies in large-scale multi-agent systems.
[ { "version": "v1", "created": "Sun, 3 Jul 2022 06:15:16 GMT" } ]
2022-07-05T00:00:00
[ [ "Chai", "Jiajun", "" ], [ "Zhu", "Yuanheng", "" ], [ "Zhao", "Dongbin", "" ] ]
new_dataset
0.979413
2207.00973
Tian-Zhu Xiang
Lin Li, Jingyi Liu, Shuo Wang, Xunkun Wang, Tian-Zhu Xiang
Trichomonas Vaginalis Segmentation in Microscope Images
Accepted by MICCAI2022
MICCAI2022
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Trichomoniasis is a common infectious disease with high incidence caused by the parasite Trichomonas vaginalis, which, if left untreated, increases the risk of HIV infection in humans. Automated detection of Trichomonas vaginalis from microscopic images can provide vital information for the diagnosis of trichomoniasis. However, accurate Trichomonas vaginalis segmentation (TVS) is a challenging task due to the high appearance similarity between the Trichomonas and other cells (e.g., leukocytes), the large appearance variation caused by their motility, and, most importantly, the lack of large-scale annotated data for deep model training. To address these challenges, we carefully collected the first large-scale Microscopic Image dataset of Trichomonas Vaginalis, named TVMI3K, which consists of 3,158 images covering Trichomonas of various appearances in diverse backgrounds, with high-quality annotations including object-level mask labels, object boundaries, and challenging attributes. Besides, we propose a simple yet effective baseline, termed TVNet, to automatically segment Trichomonas from microscopic images, including high-resolution fusion and foreground-background attention modules. Extensive experiments demonstrate that our model achieves superior segmentation performance and outperforms various cutting-edge object detection models both quantitatively and qualitatively, making it a promising framework to promote future research in TVS tasks. The dataset and results will be publicly available at: https://github.com/CellRecog/cellRecog.
[ { "version": "v1", "created": "Sun, 3 Jul 2022 07:29:05 GMT" } ]
2022-07-05T00:00:00
[ [ "Li", "Lin", "" ], [ "Liu", "Jingyi", "" ], [ "Wang", "Shuo", "" ], [ "Wang", "Xunkun", "" ], [ "Xiang", "Tian-Zhu", "" ] ]
new_dataset
0.997808
2207.01058
Weiming Zhuang
Weiming Zhuang, Chongjie Ye, Ying Xu, Pengzhi Mao, Shuai Zhang
Chat-to-Design: AI Assisted Personalized Fashion Design
null
null
null
null
cs.AI cs.CV cs.HC cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this demo, we present Chat-to-Design, a new multimodal interaction system for personalized fashion design. Compared to classic systems that recommend apparel based on keywords, Chat-to-Design enables users to design clothes in two steps: 1) coarse-grained selection via conversation and 2) fine-grained editing via an interactive interface. It encompasses three sub-systems to deliver an immersive user experience: a conversation system empowered by natural language understanding that accepts users' requests and manages dialogs; a multimodal fashion retrieval system empowered by a large-scale pretrained language-image network to retrieve requested apparel; and a fashion design system empowered by emerging generative techniques to edit attributes of retrieved clothes.
[ { "version": "v1", "created": "Sun, 3 Jul 2022 14:54:39 GMT" } ]
2022-07-05T00:00:00
[ [ "Zhuang", "Weiming", "" ], [ "Ye", "Chongjie", "" ], [ "Xu", "Ying", "" ], [ "Mao", "Pengzhi", "" ], [ "Zhang", "Shuai", "" ] ]
new_dataset
0.96846
2207.01092
Alexander Sch\"afer
Alexander Sch\"afer, Gerd Reis, Didier Stricker
The Gesture Authoring Space: Authoring Customised Hand Gestures for Grasping Virtual Objects in Immersive Virtual Environments
null
null
10.1145/3543758.3543766
null
cs.HC cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Natural user interfaces are on the rise. Manufacturers of Augmented, Virtual, and Mixed Reality head-mounted displays are increasingly integrating new sensors into their consumer-grade products, allowing gesture recognition without additional hardware. This offers new possibilities for bare-handed interaction within virtual environments. This work proposes a hand gesture authoring tool for object-specific grab gestures, allowing virtual objects to be grabbed as in the real world. The presented solution uses template matching for gesture recognition and requires no technical knowledge to design and create custom-tailored hand gestures. In a user study, the proposed approach is compared with the pinch gesture and the controller for grasping virtual objects. The different grasping techniques are compared in terms of accuracy, task completion time, usability, and naturalness. The study showed that gestures created with the proposed approach are perceived by users as a more natural input modality than the others.
[ { "version": "v1", "created": "Sun, 3 Jul 2022 18:33:33 GMT" } ]
2022-07-05T00:00:00
[ [ "Schäfer", "Alexander", "" ], [ "Reis", "Gerd", "" ], [ "Stricker", "Didier", "" ] ]
new_dataset
0.995989
2207.01124
Mohayeminul Islam
Mohayeminul Islam and Ajay Kumar Jha and Sarah Nadi
PyMigBench and PyMigTax: A Benchmark and Taxonomy for Python Library Migration
40 pages, 21 figures, submitted to Empirical Software Engineering
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Developers heavily rely on Application Programming Interfaces (APIs) from libraries to build their projects. However, libraries might become obsolete, or new libraries with better APIs might become available. In such cases, developers need to replace the used libraries with alternative libraries, a process referred to as library migration. When done manually, library migration can be tedious, time-consuming, and error-prone. Most of the current research on automated library migration techniques focuses on Java libraries, and even more so on version migrations of the same library. Despite the increasing popularity of Python, limited research work has investigated migration between Python libraries. In this paper, we investigate the nature of Python library migrations in open-source systems. We analyze the code changes that happen during library migration and build PyMigBench, a manually verified migration benchmark. PyMigBench contains 436 migration-related code changes from 74 commits in 57 client repositories, and includes migrations between 34 unique pairs of libraries. Additionally, we manually analyze the migration-related code changes and create a taxonomy of migrations, PyMigTax, that categorizes migrations across various dimensions. Our contributions provide the necessary foundations for developing automated Python library migration tools and techniques.
[ { "version": "v1", "created": "Sun, 3 Jul 2022 21:00:08 GMT" } ]
2022-07-05T00:00:00
[ [ "Islam", "Mohayeminul", "" ], [ "Jha", "Ajay Kumar", "" ], [ "Nadi", "Sarah", "" ] ]
new_dataset
0.998576
2207.01204
Eugene Ang
Eugene P.W. Ang, Shan Lin, Rahul Ahuja, Nemath Ahmed, Alex C. Kot
Adversarial Pairwise Reverse Attention for Camera Performance Imbalance in Person Re-identification: New Dataset and Metrics
Accepted into the IEEE International Conference on Image Processing (ICIP) 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Existing evaluation metrics for Person Re-Identification (Person ReID) models focus on system-wide performance. However, our studies reveal weaknesses due to the uneven data distributions among cameras and different camera properties that expose the ReID system to exploitation. In this work, we raise the long-ignored ReID problem of camera performance imbalance and collect a real-world privacy-aware dataset from 38 cameras to assist the study of the imbalance issue. We propose new metrics to quantify camera performance imbalance and further propose the Adversarial Pairwise Reverse Attention (APRA) Module to guide the model to learn camera-invariant features with a novel pairwise attention inversion mechanism.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 05:16:16 GMT" } ]
2022-07-05T00:00:00
[ [ "Ang", "Eugene P. W.", "" ], [ "Lin", "Shan", "" ], [ "Ahuja", "Rahul", "" ], [ "Ahmed", "Nemath", "" ], [ "Kot", "Alex C.", "" ] ]
new_dataset
0.972848
2207.01216
Cheng Zou
Cheng Zou, Furong Xu, Meng Wang, Wen Li, Yuan Cheng
Solutions for Fine-grained and Long-tailed Snake Species Recognition in SnakeCLEF 2022
Top solutions for FGVC9, accepted to CLEF2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automatic snake species recognition is important because it has vast potential to help lower deaths and disabilities caused by snakebites. We introduce our solution in SnakeCLEF 2022 for fine-grained snake species recognition under a heavily long-tailed class distribution. First, a network architecture is designed to extract and fuse features from multiple modalities, i.e. photograph from visual modality and geographic locality information from language modality. Then, logit-adjustment-based methods are studied to relieve the impact caused by the severe class imbalance. Next, a combination of supervised and self-supervised learning methods is proposed to make full use of the dataset, including both labeled training data and unlabeled testing data. Finally, post-processing strategies, such as multi-scale and multi-crop test-time augmentation, location filtering and model ensemble, are employed for better performance. With an ensemble of several different models, a private score of 82.65%, ranking 3rd, is achieved on the final leaderboard.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 05:55:58 GMT" } ]
2022-07-05T00:00:00
[ [ "Zou", "Cheng", "" ], [ "Xu", "Furong", "" ], [ "Wang", "Meng", "" ], [ "Li", "Wen", "" ], [ "Cheng", "Yuan", "" ] ]
new_dataset
0.998825
2207.01220
Oshri Naparstek
Oshri Naparstek, Ophir Azulai, Daniel Rotman, Yevgeny Burshtein, Peter Staar, Udi Barzelay
BusiNet -- a Light and Fast Text Detection Network for Business Documents
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For digitizing or indexing physical documents, Optical Character Recognition (OCR), the process of extracting textual information from scanned documents, is a vital technology. When a document is visually damaged or contains non-textual elements, existing technologies can yield poor results, as erroneous detection results can greatly affect the quality of OCR. In this paper we present a detection network dubbed BusiNet aimed at OCR of business documents. Business documents often include sensitive information and as such they cannot be uploaded to a cloud service for OCR. BusiNet was designed to be fast and light so it can run locally, preventing privacy issues. Furthermore, BusiNet is built to handle scanned-document corruption and noise using a specialized synthetic dataset. The model is made robust to unseen noise by employing adversarial training strategies. We perform an evaluation on publicly available datasets demonstrating the usefulness and broad applicability of our model.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 06:08:49 GMT" } ]
2022-07-05T00:00:00
[ [ "Naparstek", "Oshri", "" ], [ "Azulai", "Ophir", "" ], [ "Rotman", "Daniel", "" ], [ "Burshtein", "Yevgeny", "" ], [ "Staar", "Peter", "" ], [ "Barzelay", "Udi", "" ] ]
new_dataset
0.999763
2207.01227
Shahid Alam
Shahid Alam
Cybersecurity: Past, Present and Future
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
The digital transformation has created a new digital space known as cyberspace. This new cyberspace has improved the workings of businesses, organizations, governments, society as a whole, and the day-to-day life of an individual. With these improvements come new challenges, and one of the main challenges is security. The security of the new cyberspace is called cybersecurity. Cyberspace has created new technologies and environments such as cloud computing, smart devices, IoTs, and several others. To keep pace with these advancements in cyber technologies, there is a need to expand research and develop new cybersecurity methods and tools to secure these domains and environments. This book is an effort to introduce the reader to the field of cybersecurity, highlight current issues and challenges, and provide future directions to mitigate or resolve them. The main specializations of cybersecurity covered in this book are software security, hardware security, the evolution of malware, biometrics, cyber intelligence, and cyber forensics. We must learn from the past, evolve our present and improve the future. Based on this objective, the book covers the past, present, and future of these main specializations of cybersecurity. The book also examines the upcoming areas of research in cyber intelligence, such as hybrid augmented and explainable artificial intelligence (AI). Human and AI collaboration can significantly increase the performance of a cybersecurity system. Interpreting and explaining machine learning models, i.e., explainable AI, is an emerging field of study and has a lot of potential to improve the role of AI in cybersecurity.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 06:47:50 GMT" } ]
2022-07-05T00:00:00
[ [ "Alam", "Shahid", "" ] ]
new_dataset
0.984365
2207.01239
Zhongxiang Chang
Zhongxiang Chang and Yuning Chen and Zhongbao Zhou
Satellite downlink scheduling under breakpoint resume mode
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
A novel problem called the satellite downlink scheduling problem (SDSP) under breakpoint resume mode (SDSP-BRM) is studied in our paper. Compared to the traditional SDSP, where imaging data has to be completely downloaded at one time, SDSP-BRM allows imaging data to be broken into a number of pieces which can be downloaded in different playback windows. By analyzing the characteristics of SDSP-BRM, we first propose a mixed integer programming model for its formulation and then prove the NP-hardness of SDSP-BRM. To solve the problem, we design a simple and effective heuristic algorithm (SEHA) where a number of problem-tailored move operators are proposed for local searching. Numerical results on a set of well-designed scenarios demonstrate the efficiency of the proposed algorithm in comparison to the general-purpose CPLEX solver. We conduct additional experiments to shed light on the impact of the segmental strategy on the overall performance of the proposed SEHA.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 07:30:51 GMT" } ]
2022-07-05T00:00:00
[ [ "Chang", "Zhongxiang", "" ], [ "Chen", "Yuning", "" ], [ "Zhou", "Zhongbao", "" ] ]
new_dataset
0.987488
2207.01255
Guochen Yu
Yuansheng Guan, Guochen Yu, Andong Li, Chengshi Zheng, Jie Wang
TMGAN-PLC: Audio Packet Loss Concealment using Temporal Memory Generative Adversarial Network
accepted by INTERSPEECH 2022
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Real-time communications in packet-switched networks have become widely used in daily communication, but they inevitably suffer from network delays and data losses under constrained real-time conditions. To solve these problems, audio packet loss concealment (PLC) algorithms have been developed to mitigate voice transmission failures by reconstructing the lost information. Limited by transmission latency and device memory, it remains intractable for PLC to accomplish high-quality voice reconstruction using a relatively small packet buffer. In this paper, we propose a temporal memory generative adversarial network for audio PLC, dubbed TMGAN-PLC, which is composed of a novel nested-UNet generator and time-domain/frequency-domain discriminators. Specifically, a combination of the nested-UNet and temporal feature-wise linear modulation is elaborately devised in the generator to finely adjust the intra-frame information and establish inter-frame temporal dependencies. To compensate for the missing speech content caused by longer loss bursts, we employ multi-stage gated vector quantizers to capture the correct content and reconstruct near-real, smooth audio. Extensive experiments on the PLC Challenge dataset demonstrate that the proposed method yields promising performance in terms of speech quality, intelligibility, and PLCMOS.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 08:27:19 GMT" } ]
2022-07-05T00:00:00
[ [ "Guan", "Yuansheng", "" ], [ "Yu", "Guochen", "" ], [ "Li", "Andong", "" ], [ "Zheng", "Chengshi", "" ], [ "Wang", "Jie", "" ] ]
new_dataset
0.991411
2207.01256
Ran Yu
Ran Yu, Limock, Stefan Dietze
Still Haven't Found What You're Looking For -- Detecting the Intent of Web Search Missions from User Interaction Features
null
null
null
null
cs.IR cs.HC
http://creativecommons.org/licenses/by/4.0/
Web search is among the most frequent online activities. Whereas traditional information retrieval techniques focus on the information need behind a user query, previous work has shown that user behaviour and interaction can provide important signals for understanding the underlying intent of a search mission. An established taxonomy distinguishes between transactional, navigational and informational search missions, where in particular the latter involve a learning goal, i.e. the intent to acquire knowledge about a particular topic. We introduce a supervised approach for classifying online search missions into either of these categories by utilising a range of features obtained from the user interactions during an online search mission. Applying our model to a dataset of real-world query logs, we show that search missions can be categorised with an average F1 score of 63% and accuracy of 69%, while performance on informational and navigational missions is particularly promising (F1>75%). This suggests the potential to utilise such supervised classification during online search to better facilitate retrieval and ranking as well as to improve affiliated services, such as targeted online ads.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 08:30:18 GMT" } ]
2022-07-05T00:00:00
[ [ "Yu", "Ran", "" ], [ "Limock", "", "" ], [ "Dietze", "Stefan", "" ] ]
new_dataset
0.974846
2207.01296
Ester Gonzalez-Sosa
Ester Gonzalez-Sosa, Andrija Gajic, Diego Gonzalez-Morin, Guillermo Robledo, Pablo Perez and Alvaro Villegas
Real Time Egocentric Segmentation for Video-self Avatar in Mixed Reality
9 pages, 9 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this work we present our real-time egocentric body segmentation algorithm. Our algorithm achieves a frame rate of 66 fps for an input resolution of 640x480, thanks to our shallow network inspired by Thundernet's architecture. Besides, we put a strong emphasis on the variability of the training data. More concretely, we describe the creation process of our Egocentric Bodies (EgoBodies) dataset, composed of almost 10,000 images from three datasets, created both from synthetic methods and real capturing. We conduct experiments to understand the contribution of the individual datasets; compare the Thundernet model trained with EgoBodies with simpler and more complex previous approaches and discuss their corresponding performance in a real-life setup in terms of segmentation quality and inference times. The described trained semantic segmentation algorithm is already integrated into an end-to-end system for Mixed Reality (MR), making it possible for users to see their own body while being immersed in a MR scene.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 10:00:16 GMT" } ]
2022-07-05T00:00:00
[ [ "Gonzalez-Sosa", "Ester", "" ], [ "Gajic", "Andrija", "" ], [ "Gonzalez-Morin", "Diego", "" ], [ "Robledo", "Guillermo", "" ], [ "Perez", "Pablo", "" ], [ "Villegas", "Alvaro", "" ] ]
new_dataset
0.992401
2207.01404
Ling Gao
Ling Gao and Yuxuan Liang and Jiaqi Yang and Shaoxun Wu and Chenyu Wang and Jiaben Chen and Laurent Kneip
VECtor: A Versatile Event-Centric Benchmark for Multi-Sensor SLAM
null
IEEE Robotics and Automation Letters, 2022
10.1109/LRA.2022.3186770
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event cameras have recently gained in popularity as they hold strong potential to complement regular cameras in situations of high dynamics or challenging illumination. An important problem that may benefit from the addition of an event camera is given by Simultaneous Localization And Mapping (SLAM). However, in order to ensure progress on event-inclusive multi-sensor SLAM, novel benchmark sequences are needed. Our contribution is the first complete set of benchmark datasets captured with a multi-sensor setup containing an event-based stereo camera, a regular stereo camera, multiple depth sensors, and an inertial measurement unit. The setup is fully hardware-synchronized and underwent accurate extrinsic calibration. All sequences come with ground truth data captured by highly accurate external reference devices such as a motion capture system. Individual sequences include both small and large-scale environments, and cover the specific challenges targeted by dynamic vision sensors.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 13:37:26 GMT" } ]
2022-07-05T00:00:00
[ [ "Gao", "Ling", "" ], [ "Liang", "Yuxuan", "" ], [ "Yang", "Jiaqi", "" ], [ "Wu", "Shaoxun", "" ], [ "Wang", "Chenyu", "" ], [ "Chen", "Jiaben", "" ], [ "Kneip", "Laurent", "" ] ]
new_dataset
0.99955
2207.01406
Bjorn Lindqvist Mr.
Bj\"orn Lindqvist, Sina Sharif Mansouri, Jakub Halu\v{s}ka, and George Nikolakopoulos
Reactive Navigation of an Unmanned Aerial Vehicle with Perception-based Obstacle Avoidance Constraints
16 pages, 28 figures
IEEE Transactions on Control Systems Technology (2021) Early Access
10.1109/TCST.2021.3124820
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this article we propose a reactive constrained navigation scheme, with embedded obstacle avoidance, for an Unmanned Aerial Vehicle (UAV), for enabling navigation in obstacle-dense environments. The proposed navigation architecture is based on Nonlinear Model Predictive Control (NMPC), and utilizes an on-board 2D LiDAR to detect obstacles and translate online the key geometric information of the environment into parametric constraints for the NMPC that constrain the available position-space for the UAV. This article also focuses on the real-world implementation and experimental validation of the proposed reactive navigation scheme, and it is applied in multiple challenging laboratory experiments, where we also conduct comparisons with relevant methods of reactive obstacle avoidance. The solver utilized in the proposed approach is the Optimization Engine (OpEn) and the Proximal Averaged Newton for Optimal Control (PANOC) algorithm, where a penalty method is applied to properly consider obstacles and input constraints during the navigation task. The proposed novel scheme allows for fast solutions, while using limited on-board computational power, which is a required feature for the overall closed-loop performance of a UAV, and is applied in multiple real-time scenarios. The combination of built-in obstacle avoidance and real-time applicability makes the proposed reactive constrained navigation scheme an elegant framework for UAVs that is able to perform fast nonlinear control, local path-planning and obstacle avoidance, all embedded in the control layer.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 13:38:07 GMT" } ]
2022-07-05T00:00:00
[ [ "Lindqvist", "Björn", "" ], [ "Mansouri", "Sina Sharif", "" ], [ "Haluška", "Jakub", "" ], [ "Nikolakopoulos", "George", "" ] ]
new_dataset
0.953303
2207.01434
Yue Qin
Yue Qin and Xiaojing Liao
Cybersecurity Entity Alignment via Masked Graph Attention Networks
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cybersecurity vulnerability information is often recorded by multiple channels, including government vulnerability repositories, individual-maintained vulnerability-gathering platforms, or vulnerability-disclosure email lists and forums. Integrating vulnerability information from different channels enables comprehensive threat assessment and quick deployment to various security mechanisms. Efforts to automatically gather such information, however, are impeded by the limitations of today's entity alignment techniques. In our study, we annotate the first cybersecurity-domain entity alignment dataset and reveal the unique characteristics of security entities. Based on these observations, we propose the first cybersecurity entity alignment model, CEAM, which equips GNN-based entity alignment with two mechanisms: asymmetric masked aggregation and partitioned attention. Experimental results on cybersecurity-domain entity alignment datasets demonstrate that CEAM significantly outperforms state-of-the-art entity alignment methods.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 14:19:32 GMT" } ]
2022-07-05T00:00:00
[ [ "Qin", "Yue", "" ], [ "Liao", "Xiaojing", "" ] ]
new_dataset
0.997493
2207.01452
Jun Cen
Jun Cen, Peng Yun, Shiwei Zhang, Junhao Cai, Di Luan, Michael Yu Wang, Ming Liu, Mingqian Tang
Open-world Semantic Segmentation for LIDAR Point Clouds
Accepted by ECCV 2022. arXiv admin note: text overlap with arXiv:2011.10033, arXiv:2109.05441 by other authors
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Current methods for LIDAR semantic segmentation are not robust enough for real-world applications, e.g., autonomous driving, since they are closed-set and static. The closed-set assumption makes the network only able to output labels of trained classes, even for objects never seen before, while a static network cannot update its knowledge base according to what it has seen. Therefore, in this work, we propose the open-world semantic segmentation task for LIDAR point clouds, which aims to 1) identify both old and novel classes using open-set semantic segmentation, and 2) gradually incorporate novel objects into the existing knowledge base using incremental learning without forgetting old classes. For this purpose, we propose a REdundAncy cLassifier (REAL) framework to provide a general architecture for both the open-set semantic segmentation and incremental learning problems. The experimental results show that REAL can simultaneously achieve state-of-the-art performance in the open-set semantic segmentation task on the SemanticKITTI and nuScenes datasets, and alleviate the catastrophic forgetting problem by a large margin during incremental learning.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 14:40:35 GMT" } ]
2022-07-05T00:00:00
[ [ "Cen", "Jun", "" ], [ "Yun", "Peng", "" ], [ "Zhang", "Shiwei", "" ], [ "Cai", "Junhao", "" ], [ "Luan", "Di", "" ], [ "Wang", "Michael Yu", "" ], [ "Liu", "Ming", "" ], [ "Tang", "Mingqian", "" ] ]
new_dataset
0.982246
2207.01483
Alexander Wang
Alexander Wang, Jerry Sun, Kaitlyn Chen, Kevin Zhou, Edward Li Gu, Chenxin Fang
"COVID-19 was a FIFA conspiracy #curropt": An Investigation into the Viral Spread of COVID-19 Misinformation
Winner of the 2021 ProjectX undergraduate research competition hosted by the University of Toronto under the category of Epidemiology. Accepted by the University of Toronto AI Conference 2022. 8 pages, 4 figures
null
null
null
cs.CY cs.CL cs.SI
http://creativecommons.org/licenses/by/4.0/
The outbreak of the infectious and fatal disease COVID-19 has revealed that pandemics assail public health in two waves: first, from the contagion itself and second, from plagues of suspicion and stigma. Now, we have in our hands and on our phones an outbreak of moral controversy. Modern dependence on social media has not only facilitated access to the locations of vaccine clinics and testing sites but also, and more frequently, to the convoluted explanations of how "COVID-19 was a FIFA conspiracy" [1]. The MIT Media Lab finds that false news "diffuses significantly farther, faster, deeper, and more broadly than truth, in all categories of information, and by an order of magnitude" [2]. The question is, how does the spread of misinformation interact with a physical epidemic disease? In this paper, we estimate the extent to which misinformation has influenced the course of the COVID-19 pandemic using natural language processing models and provide a strategy to combat social media posts that are likely to cause widespread harm.
[ { "version": "v1", "created": "Sun, 12 Jun 2022 19:41:01 GMT" } ]
2022-07-05T00:00:00
[ [ "Wang", "Alexander", "" ], [ "Sun", "Jerry", "" ], [ "Chen", "Kaitlyn", "" ], [ "Zhou", "Kevin", "" ], [ "Gu", "Edward Li", "" ], [ "Fang", "Chenxin", "" ] ]
new_dataset
0.99814
2207.01492
Ruchita Bhadre
Ruchita Bhadre, Prathamesh Yeole, Tejas Ranka, Rohini Mudhalwadkar
SmartMask- Developing an automated self-care system
Presented at SIG Healthcare Indagation IEEE
null
null
null
cs.CY cs.LG cs.RO eess.SP
http://creativecommons.org/licenses/by-nc-nd/4.0/
COVID-19 has changed our world and has filled people with fear and anxiety. Everyone has a fear of coming in contact with people carrying the Coronavirus. In spite of full lockdowns being lifted, there is still a pressing need to maintain social distancing in the short to medium term to control the spread of coronavirus. A lack of self-discipline, or simply pulling down the mask to get some fresh air, might pose a threat when one comes near a person showing COVID symptoms. Abiding by WHO guidelines to avoid touching the mask while wearing it, we propose a wearable device for no-contact pulling up of the mask on the face and, additionally, to implement social distancing with sensors mounted on the device. The SmartMask will detect if we are in the vicinity of any other person and will pull itself up. Its sensors detect the closeness of objects around the wearer, prompting a proper action or pulling the mask up automatically. Along with the automated mask, we will incorporate a temperature sensor to check an individual's vitals at all times and give an alert to the peers around them. This will ensure social distancing and help in avoiding the spread of the virus.
[ { "version": "v1", "created": "Wed, 15 Jun 2022 03:17:01 GMT" } ]
2022-07-05T00:00:00
[ [ "Bhadre", "Ruchita", "" ], [ "Yeole", "Prathamesh", "" ], [ "Ranka", "Tejas", "" ], [ "Mudhalwadkar", "Rohini", "" ] ]
new_dataset
0.99552
2207.01505
Bohan Jiang
Bohan Jiang, Paras Sheth, Baoxin Li, Huan Liu
CoVaxNet: An Online-Offline Data Repository for COVID-19 Vaccine Hesitancy Research
10 pages
null
null
null
cs.CY cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
Despite the astonishing success of COVID-19 vaccines against the virus, a substantial proportion of the population is still hesitant to be vaccinated, undermining governmental efforts to control the virus. To address this problem, we need to understand the different factors giving rise to such a behavior, including social media discourses, news media propaganda, government responses, demographic and socioeconomic statuses, and COVID-19 statistics, etc. However, existing datasets fail to cover all these aspects, making it difficult to form a complete picture when making inferences about the problem of vaccine hesitancy. In this paper, we construct a multi-source, multi-modal, and multi-feature online-offline data repository, CoVaxNet. We provide descriptive analyses and insights to illustrate critical patterns in CoVaxNet. Moreover, we propose a novel approach for connecting online and offline data so as to facilitate the inference tasks that exploit complementary information sources.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 05:58:35 GMT" } ]
2022-07-05T00:00:00
[ [ "Jiang", "Bohan", "" ], [ "Sheth", "Paras", "" ], [ "Li", "Baoxin", "" ], [ "Liu", "Huan", "" ] ]
new_dataset
0.9993
2207.01600
Jin Wan
Jin Wan and Hui Yin and Zhenyao Wu and Xinyi Wu and Zhihao Liu and Song Wang
CRFormer: A Cross-Region Transformer for Shadow Removal
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aiming to restore the original intensity of shadow regions in an image and make them compatible with the remaining non-shadow regions without a trace, shadow removal is a very challenging problem that benefits many downstream image/video-related tasks. Recently, transformers have shown their strong capability in various applications by capturing global pixel interactions and this capability is highly desirable in shadow removal. However, applying transformers to promote shadow removal is non-trivial for the following two reasons: 1) The patchify operation is not suitable for shadow removal due to irregular shadow shapes; 2) shadow removal only needs one-way interaction from the non-shadow region to the shadow region instead of the common two-way interactions among all pixels in the image. In this paper, we propose a novel cross-region transformer, namely CRFormer, for shadow removal which differs from existing transformers by only considering the pixel interactions from the non-shadow region to the shadow region without splitting images into patches. This is achieved by a carefully designed region-aware cross-attention operation that can aggregate the recovered shadow region features conditioned on the non-shadow region features. Extensive experiments on ISTD, AISTD, SRD, and Video Shadow Removal datasets demonstrate the superiority of our method compared to other state-of-the-art methods.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 17:33:02 GMT" } ]
2022-07-05T00:00:00
[ [ "Wan", "Jin", "" ], [ "Yin", "Hui", "" ], [ "Wu", "Zhenyao", "" ], [ "Wu", "Xinyi", "" ], [ "Liu", "Zhihao", "" ], [ "Wang", "Song", "" ] ]
new_dataset
0.998712
2207.01605
Roland Kromes
Ilya Grishkov, Roland Kromes, Thanassis Giannetsos and Kaitai Liang
ID-based self-encryption via Hyperledger Fabric based smart contract
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
This paper offers a prototype of a Hyperledger Fabric-IPFS based network architecture, including a smart-contract-based encryption scheme that is meant to improve the security of user data being uploaded to the distributed ledger. A new extension to the self-encryption scheme was deployed by integrating the data owner's identity into the encryption process. Such integration makes it possible to permanently preserve ownership of the original file and link it to the person/entity who originally uploaded it. Moreover, self-encryption provides strong security guarantees: decryption of a file is computationally infeasible under the condition that the encrypted file and the key are safely stored.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 17:37:03 GMT" } ]
2022-07-05T00:00:00
[ [ "Grishkov", "Ilya", "" ], [ "Kromes", "Roland", "" ], [ "Giannetsos", "Thanassis", "" ], [ "Liang", "Kaitai", "" ] ]
new_dataset
0.998806
2104.10340
Wangzhi Li
Mobin Zhao, Wangzhi Li, Yongjie Fu, Kangrui Ruan, Xuan Di
CVLight: Decentralized Learning for Adaptive Traffic Signal Control with Connected Vehicles
29 pages, 14 figures
Transportation Research Part C: Emerging Technologies, 141 (2022): 103728
null
null
cs.LG cs.AI cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
This paper develops a decentralized reinforcement learning (RL) scheme for multi-intersection adaptive traffic signal control (TSC), called "CVLight", that leverages data collected from connected vehicles (CVs). The state and reward design facilitates coordination among agents and considers travel delays collected by CVs. A novel algorithm, Asymmetric Advantage Actor-critic (Asym-A2C), is proposed where both CV and non-CV information is used to train the critic network, while only CV information is used to execute optimal signal timing. Comprehensive experiments show the superiority of CVLight over state-of-the-art algorithms under a 2-by-2 synthetic road network with various traffic demand patterns and penetration rates. The learned policy is then visualized to further demonstrate the advantage of Asym-A2C. A pre-train technique is applied to improve the scalability of CVLight, which significantly shortens the training time and shows the advantage in performance under a 5-by-5 road network. A case study is performed on a 2-by-2 road network located in State College, Pennsylvania, USA, to further demonstrate the effectiveness of the proposed algorithm under real-world scenarios. Compared to other baseline models, the trained CVLight agent can efficiently control multiple intersections solely based on CV data and achieve the best performance, especially under low CV penetration rates.
[ { "version": "v1", "created": "Wed, 21 Apr 2021 03:38:11 GMT" }, { "version": "v2", "created": "Fri, 10 Dec 2021 22:21:45 GMT" }, { "version": "v3", "created": "Fri, 1 Jul 2022 03:28:09 GMT" } ]
2022-07-04T00:00:00
[ [ "Zhao", "Mobin", "" ], [ "Li", "Wangzhi", "" ], [ "Fu", "Yongjie", "" ], [ "Ruan", "Kangrui", "" ], [ "Di", "Xuan", "" ] ]
new_dataset
0.976769
2105.09847
Micha\"el Fonder
Micha\"el Fonder and Damien Ernst and Marc Van Droogenbroeck
M4Depth: Monocular depth estimation for autonomous vehicles in unseen environments
Main paper: 9 pages, Appendix: 4 pages, References: 2 pages. Code available on GitHub: https://github.com/michael-fonder/M4Depth
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating the distance to objects is crucial for autonomous vehicles when using depth sensors is not possible. In this case, the distance has to be estimated from on-board mounted RGB cameras, which is a complex task especially in environments such as natural outdoor landscapes. In this paper, we present a new method named M4Depth for depth estimation. First, we establish a bijective relationship between depth and the visual disparity of two consecutive frames and show how to exploit it to perform motion-invariant pixel-wise depth estimation. Then, we detail M4Depth which is based on a pyramidal convolutional neural network architecture where each level refines an input disparity map estimate by using two customized cost volumes. We use these cost volumes to leverage the visual spatio-temporal constraints imposed by motion and to make the network robust for varied scenes. We benchmarked our approach both in test and generalization modes on public datasets featuring synthetic camera trajectories recorded in a wide variety of outdoor scenes. Results show that our network outperforms the state of the art on these datasets, while also performing well on a standard depth estimation benchmark. The code of our method is publicly available at https://github.com/michael-fonder/M4Depth.
[ { "version": "v1", "created": "Thu, 20 May 2021 15:46:02 GMT" }, { "version": "v2", "created": "Fri, 21 May 2021 09:13:23 GMT" }, { "version": "v3", "created": "Fri, 1 Jul 2022 10:08:30 GMT" } ]
2022-07-04T00:00:00
[ [ "Fonder", "Michaël", "" ], [ "Ernst", "Damien", "" ], [ "Van Droogenbroeck", "Marc", "" ] ]
new_dataset
0.988732
2106.14405
Andrew Szot
Andrew Szot, Alex Clegg, Eric Undersander, Erik Wijmans, Yili Zhao, John Turner, Noah Maestre, Mustafa Mukadam, Devendra Chaplot, Oleksandr Maksymets, Aaron Gokaslan, Vladimir Vondrus, Sameer Dharur, Franziska Meier, Wojciech Galuba, Angel Chang, Zsolt Kira, Vladlen Koltun, Jitendra Malik, Manolis Savva, Dhruv Batra
Habitat 2.0: Training Home Assistants to Rearrange their Habitat
null
null
null
null
cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Habitat 2.0 (H2.0), a simulation platform for training virtual robots in interactive 3D environments and complex physics-enabled scenarios. We make comprehensive contributions to all levels of the embodied AI stack - data, simulation, and benchmark tasks. Specifically, we present: (i) ReplicaCAD: an artist-authored, annotated, reconfigurable 3D dataset of apartments (matching real spaces) with articulated objects (e.g. cabinets and drawers that can open/close); (ii) H2.0: a high-performance physics-enabled 3D simulator with speeds exceeding 25,000 simulation steps per second (850x real-time) on an 8-GPU node, representing 100x speed-ups over prior work; and, (iii) Home Assistant Benchmark (HAB): a suite of common tasks for assistive robots (tidy the house, prepare groceries, set the table) that test a range of mobile manipulation capabilities. These large-scale engineering contributions allow us to systematically compare deep reinforcement learning (RL) at scale and classical sense-plan-act (SPA) pipelines in long-horizon structured tasks, with an emphasis on generalization to new objects, receptacles, and layouts. We find that (1) flat RL policies struggle on HAB compared to hierarchical ones; (2) a hierarchy with independent skills suffers from 'hand-off problems', and (3) SPA pipelines are more brittle than RL policies.
[ { "version": "v1", "created": "Mon, 28 Jun 2021 05:42:15 GMT" }, { "version": "v2", "created": "Fri, 1 Jul 2022 05:29:15 GMT" } ]
2022-07-04T00:00:00
[ [ "Szot", "Andrew", "" ], [ "Clegg", "Alex", "" ], [ "Undersander", "Eric", "" ], [ "Wijmans", "Erik", "" ], [ "Zhao", "Yili", "" ], [ "Turner", "John", "" ], [ "Maestre", "Noah", "" ], [ "Mukadam", "Mustafa", "" ], [ "Chaplot", "Devendra", "" ], [ "Maksymets", "Oleksandr", "" ], [ "Gokaslan", "Aaron", "" ], [ "Vondrus", "Vladimir", "" ], [ "Dharur", "Sameer", "" ], [ "Meier", "Franziska", "" ], [ "Galuba", "Wojciech", "" ], [ "Chang", "Angel", "" ], [ "Kira", "Zsolt", "" ], [ "Koltun", "Vladlen", "" ], [ "Malik", "Jitendra", "" ], [ "Savva", "Manolis", "" ], [ "Batra", "Dhruv", "" ] ]
new_dataset
0.99969
2110.01073
Ori Shapira
Ori Shapira, Ramakanth Pasunuru, Ido Dagan, Yael Amsterdamer
Multi-Document Keyphrase Extraction: Dataset, Baselines and Review
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Keyphrase extraction has been extensively researched within the single-document setting, with an abundance of methods, datasets and applications. In contrast, multi-document keyphrase extraction has been infrequently studied, despite its utility for describing sets of documents, and its use in summarization. Moreover, no prior dataset exists for multi-document keyphrase extraction, hindering the progress of the task. Recent advances in multi-text processing make the task an even more appealing challenge to pursue. To stimulate this pursuit, we present here the first dataset for the task, MK-DUC-01, which can serve as a new benchmark, and test multiple keyphrase extraction baselines on our data. In addition, we provide a brief, yet comprehensive, literature review of the task.
[ { "version": "v1", "created": "Sun, 3 Oct 2021 19:10:28 GMT" }, { "version": "v2", "created": "Fri, 1 Jul 2022 13:32:21 GMT" } ]
2022-07-04T00:00:00
[ [ "Shapira", "Ori", "" ], [ "Pasunuru", "Ramakanth", "" ], [ "Dagan", "Ido", "" ], [ "Amsterdamer", "Yael", "" ] ]
new_dataset
0.995841
2111.11730
Hitesh Tewari Dr
Matthew Chun, Stefan Weber and Hitesh Tewari
A Lightweight Encryption Scheme for IoT Devices in the Fog
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
The Internet of Things (IoT) is the collection of everyday smart devices which connect to the Cloud, often through Fog nodes, to transmit and receive information. These everyday devices are distinct from traditional computers because they typically have notable constraints on their RAM, flash memory, and computational power. Due to these constraints, we believe that many of the proposed encryption schemes are too heavyweight to be employed in the IoT. In this paper we present a lightweight, flexible encryption scheme that relies on the one-way information loss property of a secure hash function. Our scheme imposes minimal computational and storage requirements, and imposes no non-negligible burdens on the encrypting device, except for the hash itself. We find that the encryption algorithm is particularly lightweight, and holds up strongly in terms of its speed and memory efficiency.
[ { "version": "v1", "created": "Tue, 23 Nov 2021 08:56:10 GMT" }, { "version": "v2", "created": "Mon, 29 Nov 2021 11:22:52 GMT" }, { "version": "v3", "created": "Wed, 1 Dec 2021 23:09:15 GMT" }, { "version": "v4", "created": "Fri, 1 Jul 2022 08:26:19 GMT" } ]
2022-07-04T00:00:00
[ [ "Chun", "Matthew", "" ], [ "Weber", "Stefan", "" ], [ "Tewari", "Hitesh", "" ] ]
new_dataset
0.996091
2112.01097
Hitesh Tewari Dr
Philip Bradish, Sarang Chaudhari, Michael Clear and Hitesh Tewari
CoviChain: A Blockchain Based COVID-19 Vaccination Passport
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Vaccination passports are being issued by governments around the world in order to open up their travel and hospitality sectors. Civil liberty campaigners on the other hand argue that such mandatory instruments encroach upon our fundamental right to anonymity, freedom of movement, and are a backdoor to issuing "identity documents" to citizens by their governments. In this paper we present a privacy-preserving framework that uses two-factor authentication to create a unique identifier that can be used to locate a person's vaccination record on a blockchain, but does not store any personal information about them. Our main contribution is the employment of a locality sensitive hashing algorithm over an iris extraction technique, that can be used to authenticate users and anonymously locate vaccination records on the blockchain, without leaking any personally identifiable information to the blockchain. Our proposed system allows for the safe reopening of society, while maintaining the privacy of citizens.
[ { "version": "v1", "created": "Thu, 2 Dec 2021 10:17:23 GMT" }, { "version": "v2", "created": "Sun, 5 Dec 2021 12:23:04 GMT" }, { "version": "v3", "created": "Tue, 7 Dec 2021 10:18:40 GMT" }, { "version": "v4", "created": "Wed, 16 Feb 2022 09:21:33 GMT" }, { "version": "v5", "created": "Fri, 1 Jul 2022 08:54:30 GMT" } ]
2022-07-04T00:00:00
[ [ "Bradish", "Philip", "" ], [ "Chaudhari", "Sarang", "" ], [ "Clear", "Michael", "" ], [ "Tewari", "Hitesh", "" ] ]
new_dataset
0.999675
2202.00443
Pierre Lison
Ildik\'o Pil\'an, Pierre Lison, Lilja {\O}vrelid, Anthi Papadopoulou, David S\'anchez and Montserrat Batet
The Text Anonymization Benchmark (TAB): A Dedicated Corpus and Evaluation Framework for Text Anonymization
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We present a novel benchmark and associated evaluation metrics for assessing the performance of text anonymization methods. Text anonymization, defined as the task of editing a text document to prevent the disclosure of personal information, currently suffers from a shortage of privacy-oriented annotated text resources, making it difficult to properly evaluate the level of privacy protection offered by various anonymization methods. This paper presents TAB (Text Anonymization Benchmark), a new, open-source annotated corpus developed to address this shortage. The corpus comprises 1,268 English-language court cases from the European Court of Human Rights (ECHR) enriched with comprehensive annotations about the personal information appearing in each document, including their semantic category, identifier type, confidential attributes, and co-reference relations. Compared to previous work, the TAB corpus is designed to go beyond traditional de-identification (which is limited to the detection of predefined semantic categories), and explicitly marks which text spans ought to be masked in order to conceal the identity of the person to be protected. Along with presenting the corpus and its annotation layers, we also propose a set of evaluation metrics that are specifically tailored towards measuring the performance of text anonymization, both in terms of privacy protection and utility preservation. We illustrate the use of the benchmark and the proposed metrics by assessing the empirical performance of several baseline text anonymization models. The full corpus along with its privacy-oriented annotation guidelines, evaluation scripts and baseline models are available on: https://github.com/NorskRegnesentral/text-anonymisation-benchmark
[ { "version": "v1", "created": "Tue, 25 Jan 2022 14:34:42 GMT" }, { "version": "v2", "created": "Fri, 1 Jul 2022 10:27:00 GMT" } ]
2022-07-04T00:00:00
[ [ "Pilán", "Ildikó", "" ], [ "Lison", "Pierre", "" ], [ "Øvrelid", "Lilja", "" ], [ "Papadopoulou", "Anthi", "" ], [ "Sánchez", "David", "" ], [ "Batet", "Montserrat", "" ] ]
new_dataset
0.995162
2202.01340
Anthony Ortiz
Anthony Ortiz, Dhaval Negandhi, Sagar R Mysorekar, Joseph Kiesecker, Shivaprakash K Nagaraju, Caleb Robinson, Priyal Bhatia, Aditi Khurana, Jane Wang, Felipe Oviedo, Juan Lavista Ferres
An Artificial Intelligence Dataset for Solar Energy Locations in India
Accepted for publication in Nature Scientific Data
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Rapid development of renewable energy sources, particularly solar photovoltaics (PV), is critical to mitigate climate change. As a result, India has set ambitious goals to install 500 gigawatts of solar energy capacity by 2030. Given the large footprint projected to meet renewable energy targets, the potential for land use conflicts over environmental values is high. To expedite the development of solar energy, land use planners will need access to up-to-date and accurate geospatial information on PV infrastructure. In this work, we developed a spatially explicit machine learning model to map utility-scale solar projects across India using freely available satellite imagery, with a mean accuracy of 92%. Our model predictions were validated by human experts to obtain a dataset of 1363 solar PV farms. Using this dataset, we measured the solar footprint across India and quantified the degree of land cover modification associated with the development of PV infrastructure. Our analysis indicates that over 74% of solar development in India was built on land cover types that have natural ecosystem preservation or agricultural value.
[ { "version": "v1", "created": "Mon, 31 Jan 2022 23:53:19 GMT" }, { "version": "v2", "created": "Fri, 1 Jul 2022 00:11:54 GMT" } ]
2022-07-04T00:00:00
[ [ "Ortiz", "Anthony", "" ], [ "Negandhi", "Dhaval", "" ], [ "Mysorekar", "Sagar R", "" ], [ "Kiesecker", "Joseph", "" ], [ "Nagaraju", "Shivaprakash K", "" ], [ "Robinson", "Caleb", "" ], [ "Bhatia", "Priyal", "" ], [ "Khurana", "Aditi", "" ], [ "Wang", "Jane", "" ], [ "Oviedo", "Felipe", "" ], [ "Ferres", "Juan Lavista", "" ] ]
new_dataset
0.999756
2203.06147
Dovydas Joksas
Dovydas Joksas, AbdulAziz AlMutairi, Oscar Lee, Murat Cubukcu, Antonio Lombardo, Hidekazu Kurebayashi, Anthony J. Kenyon, Adnan Mehonic
Memristive, Spintronic, and 2D-Materials-Based Devices to Improve and Complement Computing Hardware
28 pages, 7 figures
Adv. Intell. Syst. 2022, 2200068
10.1002/aisy.202200068
null
cs.ET
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a data-driven economy, virtually all industries benefit from advances in information technology -- powerful computing systems are critically important for rapid technological progress. However, this progress might be at risk of slowing down if we do not address the discrepancy between our current computing power demands and what the existing technologies can offer. Key limitations to improving energy efficiency are the excessive growth of data transfer costs associated with the von Neumann architecture and the fundamental limits of complementary metal-oxide-semiconductor (CMOS) technologies, such as transistors. In this perspective article, we discuss three technologies that will likely play an essential role in future computing systems: memristive electronics, spintronics, and electronics based on 2D materials. We present how these may transform conventional digital computers and contribute to the adoption of new paradigms, like neuromorphic computing.
[ { "version": "v1", "created": "Fri, 11 Mar 2022 18:18:25 GMT" }, { "version": "v2", "created": "Mon, 23 May 2022 15:07:24 GMT" }, { "version": "v3", "created": "Fri, 1 Jul 2022 17:28:32 GMT" } ]
2022-07-04T00:00:00
[ [ "Joksas", "Dovydas", "" ], [ "AlMutairi", "AbdulAziz", "" ], [ "Lee", "Oscar", "" ], [ "Cubukcu", "Murat", "" ], [ "Lombardo", "Antonio", "" ], [ "Kurebayashi", "Hidekazu", "" ], [ "Kenyon", "Anthony J.", "" ], [ "Mehonic", "Adnan", "" ] ]
new_dataset
0.999545
2203.14883
Hongkuan Zhou
Hongkuan Zhou, Da Zheng, Israt Nisa, Vasileios Ioannidis, Xiang Song, George Karypis
TGL: A General Framework for Temporal GNN Training on Billion-Scale Graphs
VLDB'22
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many real-world graphs contain time domain information. Temporal Graph Neural Networks capture temporal information as well as structural and contextual information in the generated dynamic node embeddings. Researchers have shown that these embeddings achieve state-of-the-art performance in many different tasks. In this work, we propose TGL, a unified framework for large-scale offline Temporal Graph Neural Network training where users can compose various Temporal Graph Neural Networks with simple configuration files. TGL comprises five main components: a temporal sampler, a mailbox, a node memory module, a memory updater, and a message passing engine. We design a Temporal-CSR data structure and a parallel sampler to efficiently sample temporal neighbors to form training mini-batches. We propose a novel random chunk scheduling technique that mitigates the problem of obsolete node memory when training with a large batch size. To address the limitation that current TGNNs are only evaluated on small-scale datasets, we introduce two large-scale real-world datasets with 0.2 and 1.3 billion temporal edges. We evaluate the performance of TGL on four small-scale datasets with a single GPU and on the two large datasets with multiple GPUs for both link prediction and node classification tasks. We compare TGL with the open-sourced code of five methods and show that TGL achieves similar or better accuracy with an average of 13x speedup. Our temporal parallel sampler achieves an average of 173x speedup on a multi-core CPU compared with the baselines. On a 4-GPU machine, TGL can train one epoch of more than one billion temporal edges within 1-10 hours. To the best of our knowledge, this is the first work that proposes a general framework for large-scale Temporal Graph Neural Network training on multiple GPUs.
[ { "version": "v1", "created": "Mon, 28 Mar 2022 16:41:18 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 18:35:43 GMT" } ]
2022-07-04T00:00:00
[ [ "Zhou", "Hongkuan", "" ], [ "Zheng", "Da", "" ], [ "Nisa", "Israt", "" ], [ "Ioannidis", "Vasileios", "" ], [ "Song", "Xiang", "" ], [ "Karypis", "George", "" ] ]
new_dataset
0.998555
2205.02048
Nicholas Popovic
Nicholas Popovic, Michael F\"arber
Few-Shot Document-Level Relation Extraction
Published at NAACL 2022
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present FREDo, a few-shot document-level relation extraction (FSDLRE) benchmark. As opposed to existing benchmarks which are built on sentence-level relation extraction corpora, we argue that document-level corpora provide more realism, particularly regarding none-of-the-above (NOTA) distributions. Therefore, we propose a set of FSDLRE tasks and construct a benchmark based on two existing supervised learning data sets, DocRED and sciERC. We adapt the state-of-the-art sentence-level method MNAV to the document-level and develop it further for improved domain adaptation. We find FSDLRE to be a challenging setting with interesting new characteristics such as the ability to sample NOTA instances from the support set. The data, code, and trained models are available online (https://github.com/nicpopovic/FREDo).
[ { "version": "v1", "created": "Wed, 4 May 2022 13:16:19 GMT" }, { "version": "v2", "created": "Fri, 1 Jul 2022 15:38:00 GMT" } ]
2022-07-04T00:00:00
[ [ "Popovic", "Nicholas", "" ], [ "Färber", "Michael", "" ] ]
new_dataset
0.997914
2205.06779
Qiuhui Chen
Qiuhui Chen, Yi Hong
Scribble2D5: Weakly-Supervised Volumetric Image Segmentation via Scribble Annotations
Accepted by MICCAI 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, weakly-supervised image segmentation using weak annotations like scribbles has gained great attention, since such annotations are much easier to obtain compared to time-consuming and labor-intensive labeling at the pixel/voxel level. However, because scribbles lack structure information of the region of interest (ROI), existing scribble-based methods suffer from poor boundary localization. Furthermore, most current methods are designed for 2D image segmentation and do not fully leverage volumetric information if directly applied to image slices. In this paper, we propose a scribble-based volumetric image segmentation method, Scribble2D5, which tackles 3D anisotropic image segmentation and improves boundary prediction. To achieve this, we augment a 2.5D attention UNet with a proposed label propagation module to extend semantic information from scribbles, and a combination of static and active boundary prediction to learn the ROI's boundary and regularize its shape. Extensive experiments on three public datasets demonstrate Scribble2D5 significantly outperforms current scribble-based methods and approaches the performance of fully-supervised ones. Our code is available online.
[ { "version": "v1", "created": "Fri, 13 May 2022 17:04:10 GMT" }, { "version": "v2", "created": "Fri, 1 Jul 2022 04:54:54 GMT" } ]
2022-07-04T00:00:00
[ [ "Chen", "Qiuhui", "" ], [ "Hong", "Yi", "" ] ]
new_dataset
0.992973
2206.07934
Chen Zhang
Chen Zhang, Honglin Sun, Chen Chen, Yandong Guo
BANet: Motion Forecasting with Boundary Aware Network
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a motion forecasting model called BANet, which stands for Boundary-Aware Network, and is a variant of LaneGCN. We believe that it is not enough to use only the lane centerline as input to obtain the embedding features of the vector map nodes. The lane centerline can only provide the topology of the lanes, while other elements of the vector map also contain rich information. For example, the lane boundary can provide traffic rule constraint information, such as whether it is possible to change lanes, which is very important. Therefore, we achieved better performance by encoding more vector map elements in the motion forecasting model. We report our results on the 2022 Argoverse2 Motion Forecasting challenge and rank 1st on the test leaderboard.
[ { "version": "v1", "created": "Thu, 16 Jun 2022 05:56:24 GMT" }, { "version": "v2", "created": "Tue, 21 Jun 2022 03:15:36 GMT" }, { "version": "v3", "created": "Fri, 1 Jul 2022 03:19:40 GMT" } ]
2022-07-04T00:00:00
[ [ "Zhang", "Chen", "" ], [ "Sun", "Honglin", "" ], [ "Chen", "Chen", "" ], [ "Guo", "Yandong", "" ] ]
new_dataset
0.989686
2206.15147
Asier Guti\'errez-Fandi\~no
Asier Guti\'errez-Fandi\~no, David P\'erez-Fern\'andez, Jordi Armengol-Estap\'e, David Griol, Zoraida Callejas
esCorpius: A Massive Spanish Crawling Corpus
esCorpius is available on https://huggingface.co/datasets/LHF/escorpius
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
In recent years, transformer-based models have led to significant advances in language modelling for natural language processing. However, they require a vast amount of data to be (pre-)trained, and there is a lack of corpora in languages other than English. Recently, several initiatives have presented multilingual datasets obtained from automatic web crawling. However, the results in Spanish present important shortcomings, as they are either too small in comparison with other languages, or present a low quality derived from sub-optimal cleaning and deduplication. In this paper, we introduce esCorpius, a Spanish crawling corpus obtained from nearly 1 Pb of Common Crawl data. It is the most extensive corpus in Spanish with this level of quality in the extraction, purification and deduplication of web textual content. Our data curation process involves a novel, highly parallel cleaning pipeline and encompasses a series of deduplication mechanisms that together ensure the integrity of both document and paragraph boundaries. Additionally, we maintain both the source web page URL and the WARC shard origin URL in order to comply with EU regulations. esCorpius has been released under a CC BY-NC-ND 4.0 license and is available on HuggingFace.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 09:29:18 GMT" }, { "version": "v2", "created": "Fri, 1 Jul 2022 08:22:32 GMT" } ]
2022-07-04T00:00:00
[ [ "Gutiérrez-Fandiño", "Asier", "" ], [ "Pérez-Fernández", "David", "" ], [ "Armengol-Estapé", "Jordi", "" ], [ "Griol", "David", "" ], [ "Callejas", "Zoraida", "" ] ]
new_dataset
0.999131
2206.15211
Ricardo Grando
Junior Costa de Jesus, Victor Augusto Kich, Alisson Henrique Kolling, Ricardo Bedin Grando, Rodrigo da Silva Guerra, Paulo Lilles Jorge Drews Jr
Depth-CUPRL: Depth-Imaged Contrastive Unsupervised Prioritized Representations in Reinforcement Learning for Mapless Navigation of Unmanned Aerial Vehicles
Accepted to the IEEE International Conference on Intelligent Robots and Systems (IROS) 2022
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
Reinforcement Learning (RL) has presented an impressive performance in video games through raw pixel imaging and continuous control tasks. However, RL performs poorly with high-dimensional observations such as raw pixel images. It is generally accepted that physical state-based RL policies, such as those using laser sensor measurements, are more sample-efficient than learning from pixels. This work presents a new approach that extracts information from a depth map estimation to teach an RL agent to perform mapless navigation of an Unmanned Aerial Vehicle (UAV). We propose Depth-Imaged Contrastive Unsupervised Prioritized Representations in Reinforcement Learning (Depth-CUPRL), which estimates the depth of images with a prioritized replay memory. We use a combination of RL and Contrastive Learning to deal with the problem of RL based on images. From the analysis of the results with Unmanned Aerial Vehicles (UAVs), it is possible to conclude that our Depth-CUPRL approach is effective for decision-making and outperforms state-of-the-art pixel-based approaches in mapless navigation capability.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 11:54:01 GMT" }, { "version": "v2", "created": "Fri, 1 Jul 2022 01:27:15 GMT" } ]
2022-07-04T00:00:00
[ [ "de Jesus", "Junior Costa", "" ], [ "Kich", "Victor Augusto", "" ], [ "Kolling", "Alisson Henrique", "" ], [ "Grando", "Ricardo Bedin", "" ], [ "Guerra", "Rodrigo da Silva", "" ], [ "Drews", "Paulo Lilles Jorge", "Jr" ] ]
new_dataset
0.992788
2207.00035
Simon Pietro Romano
Maurizio D'Arienzo and Simon Pietro Romano
GOSPF: An energy efficient implementation of the OSPF routing protocol
18 pages
Journal of Network and Computer Applications, Volume 75, 2016, Pages 110-127, ISSN 1084-8045
10.1016/j.jnca.2016.07.011
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Energy saving is currently one of the most challenging issues for the Internet research community. Indeed, the exponential growth of applications and services induces a remarkable increase in power consumption and hence calls for novel solutions which are capable of preserving the energy of the infrastructures, while at the same time maintaining the required Quality of Service guarantees. In this paper we introduce a new mechanism for saving energy through intelligent switch-off of network links. The mechanism has been implemented as an extension to the Open Shortest Path First routing protocol. We first show through simulations that our solution is capable of dramatically reducing energy consumption when compared to the standard OSPF implementation. We then illustrate a real-world implementation of the proposed protocol within the Quagga routing software suite.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 18:01:34 GMT" } ]
2022-07-04T00:00:00
[ [ "D'Arienzo", "Maurizio", "" ], [ "Romano", "Simon Pietro", "" ] ]
new_dataset
0.992254
2207.00038
Sanaz Taheri Boshrooyeh
Oskar Thor\'en, Sanaz Taheri-Boshrooyeh, Hanno Cornelius
Waku: A Family of Modular P2P Protocols For Secure & Censorship-Resistant Communication
IEEE ICDCSW 2022
null
null
null
cs.CR cs.DC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Waku is a family of modular protocols that enable secure, censorship-resistant, and anonymous peer-to-peer communication. Waku protocols provide capabilities that make them suitable to run in resource-restricted environments, e.g., mobile devices and web browsers. Such capabilities include (i) retrieving historical messages for mostly-offline devices, (ii) adaptive nodes, allowing heterogeneous nodes to contribute to the network, (iii) preserving bandwidth usage for resource-restricted devices, (iv) minimizing connectivity requirements for devices with a limited connection, and (v) enabling efficient, private, economic spam protection for heterogeneous nodes. Waku's modular design and resource-efficient protocols make it superior to its predecessor, Whisper. In this paper, we give an overview of the Waku protocol stack, its architecture, and protocol interactions, along with a sample demo scenario on configuring and running a Waku node using nwaku, a Waku client written in Nim.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 18:13:10 GMT" } ]
2022-07-04T00:00:00
[ [ "Thorén", "Oskar", "" ], [ "Taheri-Boshrooyeh", "Sanaz", "" ], [ "Cornelius", "Hanno", "" ] ]
new_dataset
0.99859
2207.00106
Mark Endo
Mark Endo, Kathleen L. Poston, Edith V. Sullivan, Li Fei-Fei, Kilian M. Pohl, Ehsan Adeli
GaitForeMer: Self-Supervised Pre-Training of Transformers via Human Motion Forecasting for Few-Shot Gait Impairment Severity Estimation
Accepted as a conference paper at MICCAI (Medical Image Computing and Computer Assisted Intervention) 2022
null
null
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
Parkinson's disease (PD) is a neurological disorder that has a variety of observable motor-related symptoms such as slow movement, tremor, muscular rigidity, and impaired posture. PD is typically diagnosed by evaluating the severity of motor impairments according to scoring systems such as the Movement Disorder Society Unified Parkinson's Disease Rating Scale (MDS-UPDRS). Automated severity prediction using video recordings of individuals provides a promising route for non-intrusive monitoring of motor impairments. However, the limited size of PD gait data hinders model ability and clinical potential. Because of this clinical data scarcity and inspired by the recent advances in self-supervised large-scale language models like GPT-3, we use human motion forecasting as an effective self-supervised pre-training task for the estimation of motor impairment severity. We introduce GaitForeMer, Gait Forecasting and impairment estimation transforMer, which is first pre-trained on public datasets to forecast gait movements and then applied to clinical data to predict MDS-UPDRS gait impairment severity. Our method outperforms previous approaches that rely solely on clinical data by a large margin, achieving an F1 score of 0.76, precision of 0.79, and recall of 0.75. Using GaitForeMer, we show how public human movement data repositories can assist clinical use cases through learning universal motion representations. The code is available at https://github.com/markendo/GaitForeMer .
[ { "version": "v1", "created": "Thu, 30 Jun 2022 21:29:47 GMT" } ]
2022-07-04T00:00:00
[ [ "Endo", "Mark", "" ], [ "Poston", "Kathleen L.", "" ], [ "Sullivan", "Edith V.", "" ], [ "Fei-Fei", "Li", "" ], [ "Pohl", "Kilian M.", "" ], [ "Adeli", "Ehsan", "" ] ]
new_dataset
0.998892
2207.00116
Sanaz Taheri Boshrooyeh
Sanaz Taheri-Boshrooyeh, Oskar Thor\'en, Barry Whitehat, Wei Jie Koh, Onur Kilic, Kobi Gurkan
Privacy-Preserving Spam-Protected Gossip-Based Routing
IEEE ICDCS 2022
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
WAKU-RLN-RELAY is an anonymous peer-to-peer gossip-based routing protocol that features privacy-preserving spam protection with cryptographically guaranteed economic incentives. While being an anonymous routing protocol in which routed messages are not attributable to their origin, it allows global identification and removal of spammers. It addresses the performance and privacy issues of its counterparts, including proof-of-work and reputation-based schemes. Its light computational overhead makes it suitable for resource-limited environments. The spam protection works by limiting the messaging rate of each network participant, where rate violation results in financial punishment. We deploy the novel construct of a rate-limiting nullifier to enforce the message rate limit. We provide a proof-of-concept implementation of WAKU-RLN-RELAY to prove the efficiency and feasibility of our solution.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 22:21:51 GMT" } ]
2022-07-04T00:00:00
[ [ "Taheri-Boshrooyeh", "Sanaz", "" ], [ "Thorén", "Oskar", "" ], [ "Whitehat", "Barry", "" ], [ "Koh", "Wei Jie", "" ], [ "Kilic", "Onur", "" ], [ "Gurkan", "Kobi", "" ] ]
new_dataset
0.998897
2207.00117
Sanaz Taheri Boshrooyeh
Sanaz Taheri-Boshrooyeh, Oskar Thor\'en, Barry Whitehat, Wei Jie Koh, Onur Kilic, Kobi Gurkan
WAKU-RLN-RELAY: Privacy-Preserving Peer-to-Peer Economic Spam Protection
IEEE ICDCSW 2022
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we propose WAKU-RLN-RELAY, a spam-protected gossip-based routing protocol that can run in heterogeneous networks. It features a privacy-preserving peer-to-peer (p2p) economic spam protection mechanism. WAKU-RLN-RELAY addresses the performance and privacy issues of state-of-the-art p2p spam prevention techniques, including the peer scoring utilized by libp2p and the proof-of-work used by, e.g., Whisper, the p2p messaging layer of Ethereum. In WAKU-RLN-RELAY, spam protection works by limiting the messaging rate of each network participant. Rate violation is disincentivized since it results in financial punishment, where the punishment is cryptographically guaranteed. Peers who identify spammers are also rewarded. To enforce the rate limit, we adopt the suggested framework of Semaphore and its extended version; however, we modify that framework to properly address the unique requirements of a network of p2p resource-restricted users. The current work dives into the end-to-end integration of Semaphore into WAKU-RLN-RELAY, the modifications required to make it suitable for resource-limited users, and the open problems and future research directions. We also provide a proof-of-concept open-source implementation of WAKU-RLN-RELAY and its specifications, together with a rough performance evaluation.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 22:22:24 GMT" } ]
2022-07-04T00:00:00
[ [ "Taheri-Boshrooyeh", "Sanaz", "" ], [ "Thorén", "Oskar", "" ], [ "Whitehat", "Barry", "" ], [ "Koh", "Wei Jie", "" ], [ "Kilic", "Onur", "" ], [ "Gurkan", "Kobi", "" ] ]
new_dataset
0.998677
2207.00119
Ana Ozaki
Tiziano Dalmonte, Andrea Mazzullo, and Ana Ozaki
Reasoning in Non-normal Modal Description Logics
ARQNL 2022
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Non-normal modal logics, interpreted on neighbourhood models which generalise the usual relational semantics, have found application in several areas, such as epistemic, deontic, and coalitional reasoning. We present here preliminary results on reasoning in a family of modal description logics obtained by combining ALC with non-normal modal operators. First, we provide a framework of terminating, correct, and complete tableau algorithms to check satisfiability of formulas in such logics with the semantics based on varying domains. We then investigate the satisfiability problems in fragments of these languages obtained by restricting the application of modal operators to formulas only, and interpreted on models with constant domains, providing tight complexity results.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 22:30:30 GMT" } ]
2022-07-04T00:00:00
[ [ "Dalmonte", "Tiziano", "" ], [ "Mazzullo", "Andrea", "" ], [ "Ozaki", "Ana", "" ] ]
new_dataset
0.967082
2207.00147
Sunyi Zheng
Sunyi Zheng, Jingxiong Li, Zhongyi Shui, Chenglu Zhu, Yunlong Zhang, Pingyi Chen, Lin Yang
ChrSNet: Chromosome Straightening using Self-attention Guided Networks
Accepted to MICCAI 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Karyotyping is an important procedure to assess the possible existence of chromosomal abnormalities. However, because of their non-rigid nature, chromosomes are usually heavily curved in microscopic images, and such deformed shapes hinder chromosome analysis for cytogeneticists. In this paper, we present a self-attention guided framework to erase the curvature of chromosomes. The proposed framework extracts spatial information and local textures to preserve banding patterns in a regression module. With complementary information from the bent chromosome, a refinement module is designed to further improve fine details. In addition, we propose two dedicated geometric constraints to maintain the length and restore the distortion of chromosomes. To train our framework, we create a synthetic dataset where curved chromosomes are generated from real-world straight chromosomes by grid deformation. Quantitative and qualitative experiments are conducted on synthetic and real-world data. Experimental results show that our proposed method can effectively straighten bent chromosomes while keeping banding details and length.
[ { "version": "v1", "created": "Fri, 1 Jul 2022 02:19:49 GMT" } ]
2022-07-04T00:00:00
[ [ "Zheng", "Sunyi", "" ], [ "Li", "Jingxiong", "" ], [ "Shui", "Zhongyi", "" ], [ "Zhu", "Chenglu", "" ], [ "Zhang", "Yunlong", "" ], [ "Chen", "Pingyi", "" ], [ "Yang", "Lin", "" ] ]
new_dataset
0.975826
2207.00152
Chemseddine Benkalfate
Chemseddine Benkalfate (1 and 2), Mohammed Feham (1), Achour Ouslimani (2) and Abed-Elhak Kasbari (2) ((1) STIC laboratory, Telecommunications Department, University of Tlemcen, (2) Quartz laboratory, Electrical and Electronics Engineering, ENSEA)
UWB patch antenna design and realization in the bandwidth 780 MHz to 4.22 GHz
10 Pages, 13 Figures
null
null
null
cs.NI eess.SP
http://creativecommons.org/licenses/by/4.0/
The proposed UWB antenna covers mobile communications (GSM, EDGE, UMTS (3G), LTE (4G)) and wireless networks (WiFi, WiMAX), within a theoretical bandwidth defined from 780 MHz to 4.22 GHz. The UWB antenna is designed and realized on an FR-4 substrate with an electrical permittivity of 4.4. It presents a 98.75% average analytical efficiency and an omnidirectional radiation pattern within this bandwidth. The excitation port impedance is fixed at 50 Ohm, in accordance with the impedance of the SMA connector used in the practical part. The measured results are in good agreement with those obtained using the CST and ADS software tools. The measured bandwidth, defined from 980 MHz to 4.2 GHz, presents an efficiency of 94.14%. Furthermore, the measured radiation pattern and the excitation port impedance remain consistent with the simulated ones.
[ { "version": "v1", "created": "Wed, 25 May 2022 12:44:36 GMT" } ]
2022-07-04T00:00:00
[ [ "Benkalfate", "Chemseddine", "", "1 and 2" ], [ "Feham", "Mohammed", "" ], [ "Ouslimani", "Achour", "" ], [ "Kasbari", "Abed-Elhak", "" ] ]
new_dataset
0.999343
2207.00155
Andrea Bedin
Leonardo Badia, Andrea Bedin
Blockage-Peeking Game of Mobile Strategic Nodes in Millimeter Wave Communications
8 pages, 6 figures. Published on MedComNet 2022
null
null
null
cs.NI cs.GT
http://creativecommons.org/licenses/by/4.0/
Given the importance of line-of-sight in mmWave communications, a strategic adversary can harm a transmission by obstructing the receiver, which in turn can react by trying to move around this hurdle. To expand on this point, we study one such scenario from the perspective of game theory, considering a mobile mmWave receiver and an adversary interacting strategically as players in a zero-sum game, where they want to maximize, or respectively minimize, the spectral efficiency of the communication. To do so, the adversary attempts at screening the receiver's line of sight as an obstacle, while the receiver can move around so as to avoid the blockage. We consider preset distances and the choices available to the players are to change their angular coordinates to go around each other. This is framed as a static game of complete information, for which we numerically find the Nash equilibrium in mixed strategies, drawing some interesting conclusions such as connecting it with the beamforming pattern of the transmitter.
[ { "version": "v1", "created": "Fri, 10 Jun 2022 07:41:09 GMT" } ]
2022-07-04T00:00:00
[ [ "Badia", "Leonardo", "" ], [ "Bedin", "Andrea", "" ] ]
new_dataset
0.972462
2207.00251
Gangming Zhao
Chengwei Pan, Gangming Zhao, Junjie Fang, Baolian Qi, Jiaheng Liu, Chaowei Fang, Dingwen Zhang, Jinpeng Li, and Yizhou Yu
Computer-aided Tuberculosis Diagnosis with Attribute Reasoning Assistance
Provisionally Accepted for Medical Image Computing and Computer Assisted Interventions 2022 (MICCAI 2022). arXiv admin note: text overlap with arXiv:2010.04483
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Although deep learning algorithms have been intensively developed for computer-aided tuberculosis diagnosis (CTD), they mainly depend on carefully annotated datasets, leading to much time and resource consumption. Weakly supervised learning (WSL), which leverages coarse-grained labels to accomplish fine-grained tasks, has the potential to solve this problem. In this paper, we first propose a new large-scale tuberculosis (TB) chest X-ray dataset, namely the tuberculosis chest X-ray attribute dataset (TBX-Att), and then establish an attribute-assisted weakly-supervised framework to classify and localize TB by leveraging the attribute information to overcome the insufficiency of supervision in WSL scenarios. Specifically, first, the TBX-Att dataset contains 2000 X-ray images with seven kinds of attributes for TB relational reasoning, which are annotated by experienced radiologists. It also includes the public TBX11K dataset with 11200 X-ray images to facilitate weakly supervised detection. Second, we exploit a multi-scale feature interaction model for TB area classification and detection with attribute relational reasoning. The proposed model is evaluated on the TBX-Att dataset and will serve as a solid baseline for future research. The code and data will be available at https://github.com/GangmingZhao/tb-attribute-weak-localization.
[ { "version": "v1", "created": "Fri, 1 Jul 2022 07:50:35 GMT" } ]
2022-07-04T00:00:00
[ [ "Pan", "Chengwei", "" ], [ "Zhao", "Gangming", "" ], [ "Fang", "Junjie", "" ], [ "Qi", "Baolian", "" ], [ "Liu", "Jiaheng", "" ], [ "Fang", "Chaowei", "" ], [ "Zhang", "Dingwen", "" ], [ "Li", "Jinpeng", "" ], [ "Yu", "Yizhou", "" ] ]
new_dataset
0.999396
2207.00272
Linjie Yang
Linjie Yang, Pingzhi Fan, Li Li, Zhiguo Ding, Li Hao
Grant-Free Transmission by LDPC Matrix Mapping and Integrated Cover-MPA Detector
30pages, 11 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, a novel transceiver architecture is proposed to simultaneously achieve efficient random access and reliable data transmission in massive IoT networks. At the transmitter side, each user is assigned a unique protocol sequence which is used to identify the user and also indicate the user's channel access pattern. Hence, user identification is completed by the detection of channel access patterns. In particular, the columns of a parity check matrix of a low-density parity-check (LDPC) code are employed as protocol sequences. The design guideline of this LDPC parity check matrix and the associated performance analysis are provided in this paper. At the receiver side, a two-stage iterative detection architecture is designed, which consists of a group testing component and a payload data decoding component. They collaborate in a way that the group testing component maps detected protocol sequences to a Tanner graph, on which the second component executes its message passing algorithm. In turn, zero symbols detected by the message passing algorithm of the second component indicate potential false alarms made by the first group testing component. Hence, the Tanner graph can iteratively evolve. The provided simulation results demonstrate that our transceiver design realizes a practical one-step grant-free transmission and has a compelling performance.
[ { "version": "v1", "created": "Fri, 1 Jul 2022 08:55:17 GMT" } ]
2022-07-04T00:00:00
[ [ "Yang", "Linjie", "" ], [ "Fan", "Pingzhi", "" ], [ "Li", "Li", "" ], [ "Ding", "Zhiguo", "" ], [ "Hao", "Li", "" ] ]
new_dataset
0.959488
2207.00421
Mark Stamp
Huy Nguyen and Fabio Di Troia and Genya Ishigaki and Mark Stamp
Generative Adversarial Networks and Image-Based Malware Classification
null
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
For efficient malware removal, determination of malware threat levels, and damage estimation, malware family classification plays a critical role. In this paper, we extract features from malware executable files and represent them as images using various approaches. We then focus on Generative Adversarial Networks (GAN) for multiclass classification and compare our GAN results to other popular machine learning techniques, including Support Vector Machine (SVM), XGBoost, and Restricted Boltzmann Machines (RBM). We find that the AC-GAN discriminator is generally competitive with other machine learning techniques. We also evaluate the utility of the GAN generative model for adversarial attacks on image-based malware detection. While AC-GAN generated images are visually impressive, we find that they are easily distinguished from real malware images using any of several learning techniques. This result indicates that our GAN generated images would be of little value in adversarial attacks.
[ { "version": "v1", "created": "Wed, 8 Jun 2022 20:59:47 GMT" } ]
2022-07-04T00:00:00
[ [ "Nguyen", "Huy", "" ], [ "Di Troia", "Fabio", "" ], [ "Ishigaki", "Genya", "" ], [ "Stamp", "Mark", "" ] ]
new_dataset
0.984919
2207.00423
Alberto Carrasco-Casado
Alberto Carrasco-Casado, Koichi Shiratama, Phuc V. Trinh, Dimitar Kolev, Femi Ishola, Tetsuharu Fuse, Hiroyuki Tsuji, Morio Toyoshima
NICT's versatile miniaturized lasercom terminals for moving platforms
5 pages, 6 figures, 1 table
Proceedings of the 2022 IEEE International Conference on Space Optical Systems and Applications (ICSOS)
10.1109/ICSOS53063.2022.9749711
null
cs.NI eess.SP
http://creativecommons.org/licenses/by/4.0/
With the goal of meeting the diverse requirements of many different types of platforms, ranging from small drones to big satellites, and being applied in a variety of diverse scenarios, ranging from fixed terrestrial links to moving platforms in general, and operating within a wide range of conditions and distances, the Japanese National Institute of Information and Communications Technology (NICT) is currently working towards the development of a series of versatile miniaturized free-space laser-communication terminals. By choosing the appropriate terminal configuration for any given scenario, the basic conditions of operations can be satisfied without the need of customization, and the adaptive design of the terminals can close the gap to achieve an optimum solution that meets the communication requirements. This paper presents NICT's current efforts regarding the development of this series of lasercom terminals and introduces the first prototypes developed for validation and test purposes.
[ { "version": "v1", "created": "Mon, 9 May 2022 06:57:02 GMT" } ]
2022-07-04T00:00:00
[ [ "Carrasco-Casado", "Alberto", "" ], [ "Shiratama", "Koichi", "" ], [ "Trinh", "Phuc V.", "" ], [ "Kolev", "Dimitar", "" ], [ "Ishola", "Femi", "" ], [ "Fuse", "Tetsuharu", "" ], [ "Tsuji", "Hiroyuki", "" ], [ "Toyoshima", "Morio", "" ] ]
new_dataset
0.997446
2207.00459
Jingxiao Ma
Jingxiao Ma and Sherief Reda
RUCA: RUntime Configurable Approximate Circuits with Self-Correcting Capability
8 pages, 7 figures, to be published in 30th International Workshop on Logic & Synthesis
null
null
null
cs.AR
http://creativecommons.org/licenses/by/4.0/
Approximate computing is an emerging computing paradigm that offers improved power consumption by relaxing the requirement for full accuracy. Since real-world applications may have different requirements for design accuracy, one trend of approximate computing is to design runtime quality-configurable circuits, which are able to operate under different accuracy modes with different power consumption. In this paper, we present a novel framework RUCA which aims to approximate an arbitrary input circuit in a runtime configurable fashion. By factorizing and decomposing the truth table, our approach aims to approximate and separate the input circuit into multiple configuration blocks which support different accuracy levels, including a corrector circuit to restore full accuracy. By activating different blocks, the approximate circuit is able to operate at different accuracy-power configurations. To improve the scalability of our algorithm, we also provide a design space exploration scheme with circuit partitioning to navigate the search space of possible approximations of subcircuits during design time. We thoroughly evaluate our methodology on a set of benchmarks and compare against another quality-configurable approach, showcasing the benefits and flexibility of RUCA. For 3-level designs, RUCA saves power consumption by 36.57% within 1% error and by 51.32% within 2% error on average.
[ { "version": "v1", "created": "Fri, 1 Jul 2022 14:32:42 GMT" } ]
2022-07-04T00:00:00
[ [ "Ma", "Jingxiao", "" ], [ "Reda", "Sherief", "" ] ]
new_dataset
0.991273
2207.00477
Yang Xing
Karan Kheta, Claire Delgove, Ruolin Liu, Adeola Aderogba, Marc-Olivier Pokam, Muhammed Mehmet Unal, Yang Xing, Weisi Guo
Vision-based Conflict Detection within Crowds based on High-Resolution Human Pose Estimation for Smart and Safe Airport
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Future airports are becoming more complex and congested with the increasing number of travellers, and they are likely to become hotspots where potential conflicts break out, causing serious delays to flights and several safety issues. An intelligent algorithm that renders security surveillance more effective in detecting conflicts would bring many benefits to passengers in terms of safety, finance, and travelling efficiency. This paper details the development of a machine learning model to classify conflicting behaviour in a crowd. HRNet is used to segment the images, and then two approaches are taken to classify the poses of people in the frame via multiple classifiers. Among them, the support vector machine (SVM) performed best, achieving a precision of 94.37%. Where the model falls short is against ambiguous behaviour, such as a hug, or losing track of a subject in the frame. The resulting model has potential for deployment within an airport if improvements are made to cope with the vast number of potential passengers in view, as well as training against further ambiguous behaviours which will arise in an airport setting. In turn, this will provide the capability to enhance security surveillance and improve airport safety.
[ { "version": "v1", "created": "Fri, 1 Jul 2022 14:54:12 GMT" } ]
2022-07-04T00:00:00
[ [ "Kheta", "Karan", "" ], [ "Delgove", "Claire", "" ], [ "Liu", "Ruolin", "" ], [ "Aderogba", "Adeola", "" ], [ "Pokam", "Marc-Olivier", "" ], [ "Unal", "Muhammed Mehmet", "" ], [ "Xing", "Yang", "" ], [ "Guo", "Weisi", "" ] ]
new_dataset
0.980167
2207.00499
Arij Bouazizi
Arij Bouazizi and Adrian Holzbock and Ulrich Kressel and Klaus Dietmayer and Vasileios Belagiannis
MotionMixer: MLP-based 3D Human Body Pose Forecasting
Accepted by IJCAI-ECAI'22 (Oral-Long presentation)
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present MotionMixer, an efficient 3D human body pose forecasting model based solely on multi-layer perceptrons (MLPs). MotionMixer learns the spatial-temporal 3D body pose dependencies by sequentially mixing both modalities. Given a stacked sequence of 3D body poses, a spatial-MLP extracts fine grained spatial dependencies of the body joints. The interaction of the body joints over time is then modelled by a temporal MLP. The spatial-temporal mixed features are finally aggregated and decoded to obtain the future motion. To calibrate the influence of each time step in the pose sequence, we make use of squeeze-and-excitation (SE) blocks. We evaluate our approach on Human3.6M, AMASS, and 3DPW datasets using the standard evaluation protocols. For all evaluations, we demonstrate state-of-the-art performance, while having a model with a smaller number of parameters. Our code is available at: https://github.com/MotionMLP/MotionMixer
[ { "version": "v1", "created": "Fri, 1 Jul 2022 15:36:08 GMT" } ]
2022-07-04T00:00:00
[ [ "Bouazizi", "Arij", "" ], [ "Holzbock", "Adrian", "" ], [ "Kressel", "Ulrich", "" ], [ "Dietmayer", "Klaus", "" ], [ "Belagiannis", "Vasileios", "" ] ]
new_dataset
0.999134
2207.00526
Matthew Earnshaw
Matthew Earnshaw, Pawe{\l} Soboci\'nski
Regular Monoidal Languages
Full version of a paper accepted for MFCS 2022
null
null
null
cs.FL math.CT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce regular languages of morphisms in free monoidal categories, with their associated grammars and automata. These subsume the classical theory of regular languages of words and trees, but also open up a much wider class of languages over string diagrams. We use the algebra of monoidal and cartesian restriction categories to investigate the properties of regular monoidal languages, and provide sufficient conditions for their recognizability by deterministic monoidal automata.
[ { "version": "v1", "created": "Fri, 1 Jul 2022 16:18:52 GMT" } ]
2022-07-04T00:00:00
[ [ "Earnshaw", "Matthew", "" ], [ "Sobociński", "Paweł", "" ] ]
new_dataset
0.998658
1706.06696
Mat\'ias Mattamala
Mat\'ias Mattamala, Gonzalo Olave, Clayder Gonz\'alez, Nicol\'as Hasb\'un and Javier Ruiz-del-Solar
The NAO Backpack: An Open-hardware Add-on for Fast Software Development with the NAO Robot
Accepted in the RoboCup Symposium 2017. Final version will be published at Springer
null
10.1007/978-3-030-00308-1_25
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present an open-source accessory for the NAO robot which enables testing computationally demanding algorithms on an external platform while preserving the robot's autonomy and mobility. The platform has the form of a backpack, which can be 3D printed and replicated, and holds an ODROID XU4 board to process algorithms externally, with ROS compatibility. We also provide a software bridge between the B-Human framework and ROS to access the robot's sensors in close to real time. We tested the platform in several robotics applications such as data logging, visual SLAM, and robot vision with deep learning techniques. The CAD model, hardware specifications, and software are available online for the benefit of the community: https://github.com/uchile-robotics/nao-backpack
[ { "version": "v1", "created": "Tue, 20 Jun 2017 22:53:16 GMT" } ]
2022-07-01T00:00:00
[ [ "Mattamala", "Matías", "" ], [ "Olave", "Gonzalo", "" ], [ "González", "Clayder", "" ], [ "Hasbún", "Nicolás", "" ], [ "Ruiz-del-Solar", "Javier", "" ] ]
new_dataset
0.997232
1711.02513
Jose-Luis Aragon
E. Alejandra Ortiz-Duran and Jose L. Aragon
CGAlgebra: a Mathematica package for conformal geometric algebra. v.2.0
Improved version, one figure
null
null
null
cs.MS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A tutorial of the Mathematica package CGAlgebra for conformal geometric algebra calculations is presented. Using rule-based programming, the 5-dimensional conformal geometric algebra is implemented, and the defined functions simplify the calculations of geometric, outer, and inner products, as well as many other calculations related to geometric transformations. CGAlgebra is available from https://github.com/jlaragonvera/Geometric-Algebra
[ { "version": "v1", "created": "Fri, 3 Nov 2017 23:29:00 GMT" }, { "version": "v2", "created": "Thu, 23 Aug 2018 01:28:24 GMT" }, { "version": "v3", "created": "Wed, 29 Jun 2022 22:52:58 GMT" } ]
2022-07-01T00:00:00
[ [ "Ortiz-Duran", "E. Alejandra", "" ], [ "Aragon", "Jose L.", "" ] ]
new_dataset
0.99973
2003.05691
Yiduo Wang
Milad Ramezani, Yiduo Wang, Marco Camurri, David Wisth, Matias Mattamala and Maurice Fallon
The Newer College Dataset: Handheld LiDAR, Inertial and Vision with Ground Truth
null
null
10.1109/IROS45743.2020.9340849
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a large dataset with a variety of mobile mapping sensors collected using a handheld device carried at typical walking speeds for nearly 2.2 km through New College, Oxford. The dataset includes data from two commercially available devices - a stereoscopic-inertial camera and a multi-beam 3D LiDAR, which also provides inertial measurements. Additionally, we used a tripod-mounted survey grade LiDAR scanner to capture a detailed millimeter-accurate 3D map of the test location (containing $\sim$290 million points). Using the map we inferred centimeter-accurate 6 Degree of Freedom (DoF) ground truth for the position of the device for each LiDAR scan to enable better evaluation of LiDAR and vision localisation, mapping and reconstruction systems. This ground truth is the particular novel contribution of this dataset and we believe that it will enable systematic evaluation which many similar datasets have lacked. The dataset combines both built environments, open spaces and vegetated areas so as to test localization and mapping systems such as vision-based navigation, visual and LiDAR SLAM, 3D LIDAR reconstruction and appearance-based place recognition. The dataset is available at: ori.ox.ac.uk/datasets/newer-college-dataset
[ { "version": "v1", "created": "Thu, 12 Mar 2020 10:17:16 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 14:33:50 GMT" } ]
2022-07-01T00:00:00
[ [ "Ramezani", "Milad", "" ], [ "Wang", "Yiduo", "" ], [ "Camurri", "Marco", "" ], [ "Wisth", "David", "" ], [ "Mattamala", "Matias", "" ], [ "Fallon", "Maurice", "" ] ]
new_dataset
0.999883
2004.12502
Paulo Almeida
Paulo Almeida, Manuel Marques-Pita and Joana Gon\c{c}alves-S\'a
PTPARL-D: Annotated Corpus of 44 years of Portuguese Parliament debates
null
Corpora, Volume 16 Issue 3, Page 337-348, ISSN 1749-5032 Available Online Nov 2021
10.3366/cor.2021.0226
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In a representative democracy, some decide in the name of the rest, and these elected officials are commonly gathered in public assemblies, such as parliaments, where they discuss policies, legislate, and vote on fundamental initiatives. A core aspect of such democratic processes are the plenary debates, where important public discussions take place. Many parliaments around the world are increasingly keeping the transcripts of such debates, and other parliamentary data, in digital formats accessible to the public, increasing transparency and accountability. Furthermore, some parliaments are bringing old paper transcripts to semi-structured digital formats. However, these records are often only provided as raw text or even as images, with little to no annotation, and inconsistent formats, making them difficult to analyze and study, reducing both transparency and public reach. Here, we present PTPARL-D, an annotated corpus of debates in the Portuguese Parliament, from 1976 to 2019, covering the entire period of Portuguese democracy.
[ { "version": "v1", "created": "Sun, 26 Apr 2020 23:22:41 GMT" } ]
2022-07-01T00:00:00
[ [ "Almeida", "Paulo", "" ], [ "Marques-Pita", "Manuel", "" ], [ "Gonçalves-Sá", "Joana", "" ] ]
new_dataset
0.999565
2101.07383
Kai Yao
Kai Yao, Alberto Ortiz, Francisco Bonnin-Pascual
A DCNN-based Arbitrarily-Oriented Object Detector for Quality Control and Inspection Application
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Following the success of machine vision systems for on-line automated quality control and inspection processes, an object recognition solution is presented in this work for two specific applications, i.e., the detection of quality control items in surgery toolboxes prepared for sterilizing in a hospital, and the detection of defects in vessel hulls to prevent potential structural failures. The solution has two stages. First, a feature pyramid architecture based on the Single Shot MultiBox Detector (SSD) is used to improve the detection performance, and a statistical analysis based on ground truth is employed to select the parameters of a range of default boxes. Second, a lightweight neural network is exploited to achieve oriented detection results using a regression method. The first stage of the proposed method is capable of detecting the small targets considered in the two scenarios. The second stage, despite its simplicity, detects elongated targets effectively while maintaining high running efficiency.
[ { "version": "v1", "created": "Tue, 19 Jan 2021 00:23:27 GMT" }, { "version": "v2", "created": "Mon, 28 Feb 2022 12:12:54 GMT" }, { "version": "v3", "created": "Thu, 30 Jun 2022 05:41:36 GMT" } ]
2022-07-01T00:00:00
[ [ "Yao", "Kai", "" ], [ "Ortiz", "Alberto", "" ], [ "Bonnin-Pascual", "Francisco", "" ] ]
new_dataset
0.971955
2104.09647
Alexander Spangher
Alexander Spangher and Jonathan May
NewsEdits: A Dataset of Revision Histories for News Articles (Technical Report: Data Processing)
11 pages
null
null
null
cs.CL cs.DL
http://creativecommons.org/licenses/by/4.0/
News article revision histories have the potential to give us novel insights across varied fields of linguistics and social sciences. In this work, we present, to our knowledge, the first publicly available dataset of news article revision histories, or NewsEdits. Our dataset is multilingual; it contains 1,278,804 articles with 4,609,430 versions from over 22 English- and French-language newspaper sources based in three countries. Across version pairs, we count 10.9 million added sentences, 8.9 million changed sentences, and 6.8 million removed sentences. Within the changed sentences, we derive 72 million atomic edits. NewsEdits is, to our knowledge, the largest corpus of revision histories of any domain.
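The added/changed/removed sentence counts above come from aligning sentences across version pairs. A minimal sketch of such an alignment, using Python's standard `difflib`, is below; the similarity threshold and the pairing heuristic for replaced spans are illustrative assumptions, not the NewsEdits pipeline itself.

```python
import difflib

def classify_sentence_edits(old, new, changed_threshold=0.6):
    """Align two sentence lists and label edits as added/removed/changed.

    Within a replaced span, sentences are paired off in order; a pair
    whose character-level similarity meets the threshold is counted as
    one 'changed' sentence (a hypothetical heuristic)."""
    sm = difflib.SequenceMatcher(a=old, b=new)
    added, removed, changed = [], [], []
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "delete":
            removed.extend(old[i1:i2])
        elif tag == "insert":
            added.extend(new[j1:j2])
        elif tag == "replace":
            for s_old, s_new in zip(old[i1:i2], new[j1:j2]):
                ratio = difflib.SequenceMatcher(a=s_old, b=s_new).ratio()
                if ratio >= changed_threshold:
                    changed.append((s_old, s_new))
                else:
                    removed.append(s_old)
                    added.append(s_new)
            # Unpaired leftovers on either side are plain adds/removes.
            extra = (i2 - i1) - (j2 - j1)
            if extra > 0:
                removed.extend(old[i2 - extra:i2])
            elif extra < 0:
                added.extend(new[j2 + extra:j2])
    return added, removed, changed
```

From the `(old, new)` pairs in `changed`, finer-grained atomic edits could then be derived by diffing within each sentence pair.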
[ { "version": "v1", "created": "Mon, 19 Apr 2021 21:15:30 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 16:58:41 GMT" } ]
2022-07-01T00:00:00
[ [ "Spangher", "Alexander", "" ], [ "May", "Jonathan", "" ] ]
new_dataset
0.99985
2106.15314
Gareth Simons
Gareth D. Simons
The cityseer Python package for pedestrian-scale network-based urban analysis
Revision incorporating additional figure
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
cityseer-api is a Python package consisting of computational tools for fine-grained street-network and land-use analysis, helpful in assessing the morphological precursors to vibrant neighbourhoods. It is underpinned by network-based methods developed specifically for urban analysis at the pedestrian scale. cityseer-api computes a variety of node and segment-based network centrality methods, land-use accessibility and mixed-use measures, and statistical aggregations. Accessibilities and aggregations are computed dynamically over the street-network while taking walking distance thresholds and the direction of approach into account, and can optionally incorporate spatial impedances and network decomposition to increase spatial precision. The use of Python facilitates compatibility with popular computational tools for network manipulation (NetworkX), geospatial topology (shapely), geospatial data state management (GeoPandas), and the NumPy stack of scientific packages. The provision of robust network cleaning tools aids the use of OpenStreetMap data for network analysis. Underlying loop-intensive algorithms are implemented in Numba JIT compiled code so that the methods scale efficiently to larger cities and regions. Online documentation is available from https://cityseer.benchmarkurbanism.com, and the Github repository is available at https://github.com/benchmark-urbanism/cityseer. Example notebooks are available at https://cityseer.benchmarkurbanism.com/examples/.
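The distance-thresholded, network-based aggregation described above can be illustrated with a plain-Python Dijkstra sketch. This is not cityseer's actual API (see its documentation for that); the graph layout, land-use encoding, and function names here are assumptions for illustration.

```python
import heapq

def reachable_within(graph, source, max_dist):
    """Dijkstra over an undirected weighted street graph, truncated at
    a walking-distance threshold. graph: {node: [(neighbour, metres)]}.
    Returns {node: shortest_distance} for nodes within max_dist."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        for nbr, w in graph.get(node, ()):
            nd = d + w
            if nd <= max_dist and nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

def landuse_accessibility(graph, landuses, source, max_dist):
    """Count land-use categories reachable within the distance threshold
    (a simplified stand-in for cityseer's accessibility measures)."""
    reach = reachable_within(graph, source, max_dist)
    counts = {}
    for node in reach:
        for cat in landuses.get(node, ()):
            counts[cat] = counts.get(cat, 0) + 1
    return counts
```

Truncating the search at the walking threshold is what keeps such pedestrian-scale measures tractable over large networks.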
[ { "version": "v1", "created": "Sat, 26 Jun 2021 14:51:38 GMT" }, { "version": "v2", "created": "Wed, 30 Jun 2021 20:12:07 GMT" }, { "version": "v3", "created": "Wed, 29 Jun 2022 16:54:43 GMT" }, { "version": "v4", "created": "Thu, 30 Jun 2022 17:06:50 GMT" } ]
2022-07-01T00:00:00
[ [ "Simons", "Gareth D.", "" ] ]
new_dataset
0.999783
2111.12423
Jiaming Ye
Yinxing Xue, Jiaming Ye, Wei Zhang, Jun Sun, Lei Ma, Haijun Wang, Jianjun Zhao
xFuzz: Machine Learning Guided Cross-Contract Fuzzing
IEEE Transactions on Dependable and Secure Computing (2022)
null
10.1109/TDSC.2022.3182373
null
cs.CR cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Smart contract transactions are increasingly interleaved by cross-contract calls. While many tools have been developed to identify a common set of vulnerabilities, the cross-contract vulnerability is overlooked by existing tools. Cross-contract vulnerabilities are exploitable bugs that manifest in the presence of more than two interacting contracts. Existing methods, however, are limited to analyzing at most two contracts at a time. Detecting cross-contract vulnerabilities is highly non-trivial. With multiple interacting contracts, the search space is much larger than that of a single contract. To address this problem, we present xFuzz, a machine learning guided smart contract fuzzing framework. The machine learning models are trained with novel features (e.g., word vectors and instructions) and are used to filter likely benign program paths. Compared with existing static tools, the machine learning models prove more robust, avoiding direct adoption of the manually defined rules hard-coded in specific tools. We compare xFuzz with three state-of-the-art tools on 7,391 contracts. xFuzz detects 18 exploitable cross-contract vulnerabilities, of which 15 are exposed for the first time. Furthermore, our approach is shown to be efficient in detecting non-cross-contract vulnerabilities as well -- using less than 20% of the time of other fuzzing tools, xFuzz detects twice as many vulnerabilities.
[ { "version": "v1", "created": "Wed, 24 Nov 2021 11:09:49 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 05:54:51 GMT" } ]
2022-07-01T00:00:00
[ [ "Xue", "Yinxing", "" ], [ "Ye", "Jiaming", "" ], [ "Zhang", "Wei", "" ], [ "Sun", "Jun", "" ], [ "Ma", "Lei", "" ], [ "Wang", "Haijun", "" ], [ "Zhao", "Jianjun", "" ] ]
new_dataset
0.997547
2202.11703
Shouchang Guo
Shouchang Guo, Valentin Deschaintre, Douglas Noll, Arthur Roullier
U-Attention to Textures: Hierarchical Hourglass Vision Transformer for Universal Texture Synthesis
null
null
null
null
cs.CV cs.GR eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a novel U-Attention vision Transformer for universal texture synthesis. We exploit the natural long-range dependencies enabled by the attention mechanism to allow our approach to synthesize diverse textures while preserving their structures in a single inference. We propose a hierarchical hourglass backbone that attends to the global structure and performs patch mapping at varying scales in a coarse-to-fine-to-coarse stream. Completed by skip connection and convolution designs that propagate and fuse information at different scales, our hierarchical U-Attention architecture unifies attention to features from macro structures to micro details, and progressively refines synthesis results at successive stages. Our method achieves stronger 2$\times$ synthesis than previous work on both stochastic and structured textures while generalizing to unseen textures without fine-tuning. Ablation studies demonstrate the effectiveness of each component of our architecture.
[ { "version": "v1", "created": "Wed, 23 Feb 2022 18:58:56 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 07:16:09 GMT" } ]
2022-07-01T00:00:00
[ [ "Guo", "Shouchang", "" ], [ "Deschaintre", "Valentin", "" ], [ "Noll", "Douglas", "" ], [ "Roullier", "Arthur", "" ] ]
new_dataset
0.998169
2203.04406
Alex Berke
Geoffrey Ding, Alex Berke, Karthik Gopalakrishnan, Kwassi H. Degue, Hamsa Balakrishnan, Max Z. Li
Routing with Privacy for Drone Package Delivery Systems
null
International Conference on Research in Air Transportation (ICRAT) 2022
null
null
cs.CR cs.CY cs.SI cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Unmanned aerial vehicles (UAVs), or drones, are increasingly being used to deliver goods from vendors to customers. To safely conduct these operations at scale, drones are required to broadcast position information as codified in remote identification (remote ID) regulations. However, location broadcast of package delivery drones introduces a privacy risk for customers using these delivery services: Third-party observers may leverage broadcast drone trajectories to link customers with their purchases, potentially resulting in a wide range of privacy risks. We propose a probabilistic definition of privacy risk based on the likelihood of associating a customer to a vendor given a package delivery route. Next, we quantify these risks, enabling drone operators to assess privacy risks when planning delivery routes. We then evaluate the impacts of various factors (e.g., drone capacity) on privacy and consider the trade-offs between privacy and delivery wait times. Finally, we propose heuristics for generating routes with privacy guarantees to avoid exhaustive enumeration of all possible routes and evaluate their performance on several realistic delivery scenarios.
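The probabilistic notion of privacy risk in the abstract - the likelihood of associating a customer with a vendor given an observed route - can be illustrated by enumerating vendor-to-customer assignments under a uniform prior. This is a toy model; the paper's definition also accounts for route structure and factors such as drone capacity.

```python
from itertools import permutations

def link_probability(vendors, customers, vendor, customer, consistent=None):
    """Probability, under a uniform prior over vendor->customer
    assignments, that `customer` received `vendor`'s package.
    `consistent(assignment)` can rule out assignments the observed
    route makes impossible (hypothetical hook; the paper's model
    is richer than this)."""
    total = matched = 0
    for perm in permutations(customers):
        assignment = dict(zip(vendors, perm))
        if consistent is not None and not consistent(assignment):
            continue
        total += 1
        if assignment.get(vendor) == customer:
            matched += 1
    return matched / total if total else 0.0
```

Note how side information that eliminates assignments (second test below) raises the linkage probability - the mechanism by which observed routes erode customer privacy.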
[ { "version": "v1", "created": "Fri, 4 Mar 2022 18:50:53 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 02:11:25 GMT" } ]
2022-07-01T00:00:00
[ [ "Ding", "Geoffrey", "" ], [ "Berke", "Alex", "" ], [ "Gopalakrishnan", "Karthik", "" ], [ "Degue", "Kwassi H.", "" ], [ "Balakrishnan", "Hamsa", "" ], [ "Li", "Max Z.", "" ] ]
new_dataset
0.998161
2203.07060
Joey Wilson
Joey Wilson, Jingyu Song, Yuewei Fu, Arthur Zhang, Andrew Capodieci, Paramsothy Jayakumar, Kira Barton, and Maani Ghaffari
MotionSC: Data Set and Network for Real-Time Semantic Mapping in Dynamic Environments
null
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
This work addresses a gap in semantic scene completion (SSC) data by creating a novel outdoor data set with accurate and complete dynamic scenes. Our data set is formed from randomly sampled views of the world at each time step, which supervises generalizability to complete scenes without occlusions or traces. We create SSC baselines from state-of-the-art open source networks and construct a benchmark real-time dense local semantic mapping algorithm, MotionSC, by leveraging recent 3D deep learning architectures to enhance SSC with temporal information. Our network shows that the proposed data set can quantify and supervise accurate scene completion in the presence of dynamic objects, which can lead to the development of improved dynamic mapping algorithms. All software is available at https://github.com/UMich-CURLY/3DMapping.
[ { "version": "v1", "created": "Mon, 14 Mar 2022 13:00:33 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 16:28:39 GMT" } ]
2022-07-01T00:00:00
[ [ "Wilson", "Joey", "" ], [ "Song", "Jingyu", "" ], [ "Fu", "Yuewei", "" ], [ "Zhang", "Arthur", "" ], [ "Capodieci", "Andrew", "" ], [ "Jayakumar", "Paramsothy", "" ], [ "Barton", "Kira", "" ], [ "Ghaffari", "Maani", "" ] ]
new_dataset
0.996851
2204.02090
Venkatesh Shenoy Kadandale
Venkatesh S. Kadandale, Juan F. Montesinos, Gloria Haro
VocaLiST: An Audio-Visual Synchronisation Model for Lips and Voices
Paper accepted to Interspeech 2022; Project Page: https://ipcv.github.io/VocaLiST/
null
null
null
cs.CV cs.IR cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we address the problem of lip-voice synchronisation in videos containing human face and voice. Our approach is based on determining if the lips motion and the voice in a video are synchronised or not, depending on their audio-visual correspondence score. We propose an audio-visual cross-modal transformer-based model that outperforms several baseline models in the audio-visual synchronisation task on the standard lip-reading speech benchmark dataset LRS2. While the existing methods focus mainly on lip synchronisation in speech videos, we also consider the special case of the singing voice. The singing voice is a more challenging use case for synchronisation due to sustained vowel sounds. We also investigate the relevance of lip synchronisation models trained on speech datasets in the context of singing voice. Finally, we use the frozen visual features learned by our lip synchronisation model in the singing voice separation task to outperform a baseline audio-visual model which was trained end-to-end. The demos, source code, and the pre-trained models are available on https://ipcv.github.io/VocaLiST/
[ { "version": "v1", "created": "Tue, 5 Apr 2022 10:02:39 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 11:46:24 GMT" } ]
2022-07-01T00:00:00
[ [ "Kadandale", "Venkatesh S.", "" ], [ "Montesinos", "Juan F.", "" ], [ "Haro", "Gloria", "" ] ]
new_dataset
0.985322
2204.07763
Jiangeng Chang
Jiangeng Chang, Yucheng Ruan, Cui Shaoze, John Soong Tshon Yit, Mengling Feng
UFRC: A Unified Framework for Reliable COVID-19 Detection on Crowdsourced Cough Audio
null
null
null
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a unified framework whose core components are data augmentation, an ImageNet-pretrained ResNet-50, a cost-sensitive loss, deep ensemble learning, and uncertainty estimation, for quickly and consistently detecting COVID-19 from acoustic evidence. To increase the model's capacity to identify the minority class (infected samples), data augmentation and the cost-sensitive loss are incorporated. The ImageNet-pretrained ResNet-50 has been found to be effective in the COVID-19 detection challenge. The unified framework also combines deep ensemble learning and uncertainty estimation to aggregate predictions from various base classifiers for generalisation and reliability. We ran a series of tests using the DiCOVA2021 challenge dataset to assess the efficacy of our proposed method, and the results show that our method achieves an AUC-ROC of 85.43 percent, making it a promising approach for COVID-19 detection. The unified framework also demonstrates that audio may be used to quickly diagnose different respiratory disorders.
[ { "version": "v1", "created": "Sat, 16 Apr 2022 09:24:16 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 06:19:42 GMT" } ]
2022-07-01T00:00:00
[ [ "Chang", "Jiangeng", "" ], [ "Ruan", "Yucheng", "" ], [ "Shaoze", "Cui", "" ], [ "Yit", "John Soong Tshon", "" ], [ "Feng", "Mengling", "" ] ]
new_dataset
0.989842
2205.04108
Claudio Soriente
Alessandro Sforzin, Matteo Maso, Claudio Soriente, Ghassan Karame
On the Storage Overhead of Proof-of-Work Blockchains
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Permissionless blockchains such as Bitcoin have long been criticized for their high computational and storage overhead. Unfortunately, while a number of proposals address the energy consumption of existing Proof-of-Work deployments, little attention has been given so far to remedying the storage overhead incurred by those blockchains. In fact, it seems widely accepted that full nodes supporting the blockchains have to volunteer hundreds of GBs of their storage to store and verify all transactions exchanged in the system. In this paper, we explore the solution space to effectively reduce the storage footprint of Proof-of-Work based blockchains. To do so, we analyze, by means of thorough empirical measurements, how existing full blockchain nodes utilize data from the shared ledger to validate incoming transactions/blocks. Based on this analysis, we show that it is possible for full nodes to locally reduce their storage footprint to approximately 15 GB, without any modification to the underlying protocol. We also discuss other client-side strategies to further reduce the storage footprint while incurring negligible computational overhead on the nodes.
[ { "version": "v1", "created": "Mon, 9 May 2022 08:19:35 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 08:00:45 GMT" } ]
2022-07-01T00:00:00
[ [ "Sforzin", "Alessandro", "" ], [ "Maso", "Matteo", "" ], [ "Soriente", "Claudio", "" ], [ "Karame", "Ghassan", "" ] ]
new_dataset
0.954933
2206.14286
Felix Chern
Felix Chern, Blake Hechtman, Andy Davis, Ruiqi Guo, David Majnemer, Sanjiv Kumar
TPU-KNN: K Nearest Neighbor Search at Peak FLOP/s
null
null
null
null
cs.PF cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper presents a novel nearest neighbor search algorithm achieving TPU (Google Tensor Processing Unit) peak performance, outperforming state-of-the-art GPU algorithms with a similar level of recall. The design of the proposed algorithm is motivated by an accurate accelerator performance model that takes into account both the memory and instruction bottlenecks. Our algorithm comes with an analytical guarantee of recall in expectation and does not require maintaining sophisticated index data structures or tuning, making it suitable for applications with frequent updates. Our work is available in the open-source package of Jax and Tensorflow on TPU.
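The key to matmul-friendly k-NN on accelerators is the expansion ||q - x||^2 = ||q||^2 - 2 q.x + ||x||^2, which turns distance computation into a matrix multiply plus a precomputed norm term (the per-query ||q||^2 is constant and can be dropped for ranking). A minimal pure-Python sketch of this brute-force ranking - not the paper's TPU kernel or its approximate top-k - is:

```python
import heapq

def knn(queries, database, k):
    """Brute-force k-NN ranked by ||x||^2 - 2 q.x, i.e. squared
    Euclidean distance minus the per-query constant ||q||^2."""
    db_norms = [sum(v * v for v in x) for x in database]  # precompute ||x||^2
    results = []
    for q in queries:
        scored = []
        for idx, (x, n) in enumerate(zip(database, db_norms)):
            dot = sum(a * b for a, b in zip(q, x))
            scored.append((n - 2.0 * dot, idx))
        # Exact top-k; the paper instead uses an approximate top-k
        # with a recall guarantee in expectation.
        results.append([idx for _, idx in heapq.nsmallest(k, scored)])
    return results
```

On a real accelerator the inner loops become one `queries @ database.T` matrix product, which is what lets the search run at peak FLOP/s.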
[ { "version": "v1", "created": "Tue, 28 Jun 2022 20:53:25 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 10:48:01 GMT" } ]
2022-07-01T00:00:00
[ [ "Chern", "Felix", "" ], [ "Hechtman", "Blake", "" ], [ "Davis", "Andy", "" ], [ "Guo", "Ruiqi", "" ], [ "Majnemer", "David", "" ], [ "Kumar", "Sanjiv", "" ] ]
new_dataset
0.959687
2206.14898
Fabrizio Montecchiani
Patrizio Angelini, Michael A. Bekos, Giordano Da Lozzo, Martin Gronemann, Fabrizio Montecchiani, Alessandra Tappini
Recognizing Map Graphs of Bounded Treewidth
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A map graph is a graph admitting a representation in which vertices are nations on a spherical map and edges are shared curve segments or points between nations. We present an explicit fixed-parameter tractable algorithm for recognizing map graphs parameterized by treewidth. The algorithm has time complexity that is linear in the size of the graph and, if the input is a yes-instance, it reports a certificate in the form of a so-called witness. Furthermore, this result is developed within a more general algorithmic framework that allows to test, for any $k$, if the input graph admits a $k$-map (where at most $k$ nations meet at a common point) or a hole-free~$k$-map (where each point of the sphere is covered by at least one nation). We point out that, although bounding the treewidth of the input graph also bounds the size of its largest clique, the latter alone does not seem to be a strong enough structural limitation to obtain an efficient time complexity. In fact, while the largest clique in a $k$-map graph is $\lfloor 3k/2 \rfloor$, the recognition of $k$-map graphs is still open for any fixed $k \ge 5$.
[ { "version": "v1", "created": "Wed, 29 Jun 2022 20:35:01 GMT" } ]
2022-07-01T00:00:00
[ [ "Angelini", "Patrizio", "" ], [ "Bekos", "Michael A.", "" ], [ "Da Lozzo", "Giordano", "" ], [ "Gronemann", "Martin", "" ], [ "Montecchiani", "Fabrizio", "" ], [ "Tappini", "Alessandra", "" ] ]
new_dataset
0.999491
2206.14909
Maximilian Pfister
Patrizio Angelini, Michael A. Bekos, Julia Katheder, Michael Kaufmann, Maximilian Pfister
RAC Drawings of Graphs with Low Degree
Extended version of a paper presented at MFCS 2022
null
null
null
cs.CG cs.DM cs.DS
http://creativecommons.org/licenses/by/4.0/
Motivated by cognitive experiments providing evidence that large crossing-angles do not impair the readability of a graph drawing, RAC (Right Angle Crossing) drawings were introduced to address the problem of producing readable representations of non-planar graphs by supporting the optimal case in which all crossings form 90{\deg} angles. In this work, we make progress on the problem of finding RAC drawings of graphs of low degree. In this context, a long-standing open question asks whether all degree-3 graphs admit straight-line RAC drawings. This question has been positively answered for the Hamiltonian degree-3 graphs. We improve on this result by extending it to the class of 3-edge-colorable degree-3 graphs. When each edge is allowed to have one bend, we prove that degree-4 graphs admit such RAC drawings, a result which was previously known only for degree-3 graphs. Finally, we show that 7-edge-colorable degree-7 graphs admit RAC drawings with two bends per edge. This improves over the previous result on degree-6 graphs.
[ { "version": "v1", "created": "Wed, 29 Jun 2022 20:51:44 GMT" } ]
2022-07-01T00:00:00
[ [ "Angelini", "Patrizio", "" ], [ "Bekos", "Michael A.", "" ], [ "Katheder", "Julia", "" ], [ "Kaufmann", "Michael", "" ], [ "Pfister", "Maximilian", "" ] ]
new_dataset
0.991034
2206.14913
Pawan Sahu
Pawan Kumar Sahu, Saksham Aggarwal, Taneesh Gupta, Gyanendra Das
GPTs at Factify 2022: Prompt Aided Fact-Verification
Accepted in AAAI'22: First Workshop on Multimodal Fact-Checking and Hate Speech Detection, Februrary 22 - March 1, 2022,Vancouver, BC, Canada
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
One of the most pressing societal issues is the fight against false news. False claims, as difficult as they are to expose, cause a great deal of damage. To tackle the problem, fact verification becomes crucial and has thus been a topic of interest among diverse research communities. Using only the textual form of data, we propose our solution to the problem and achieve results competitive with other approaches. We present our solution based on two approaches - a PLM (pre-trained language model) based method and a Prompt based method. The PLM-based approach uses traditional supervised learning, where the model is trained to take 'x' as input and output the prediction 'y' as P(y|x). Prompt-based learning, in contrast, reflects the idea of designing the input to fit the model, so that the original objective may be re-framed as a (masked) language modeling problem. We may further stimulate the rich knowledge provided by PLMs to better serve downstream tasks by employing extra prompts to fine-tune them. Our experiments showed that the proposed method performs better than simply fine-tuning PLMs. We achieved an F1 score of 0.6946 on the FACTIFY dataset and 7th place on the competition leaderboard.
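The contrast between the two approaches can be made concrete with a toy template: the PLM-based method feeds the claim-evidence pair directly for classification as P(y|x), while the prompt-based method rewrites the input as a cloze question so that a masked LM fills in a verbalised label word. The template and verbaliser below are illustrative assumptions, not the paper's exact prompts.

```python
def plm_input(claim, evidence):
    """Standard fine-tuning input: a classifier head scores P(y|x)
    over the label set directly."""
    return f"claim: {claim} evidence: {evidence}"

def cloze_prompt(claim, evidence, mask_token="[MASK]"):
    """Prompt-based reformulation: the label is recovered by asking a
    masked LM which word fills the blank. Template is hypothetical."""
    return f"{evidence} Question: is it {mask_token} that {claim}?"

# Illustrative verbaliser mapping task labels to label words the
# masked LM can predict in the blank.
LABEL_VERBALISER = {"SUPPORT": "true", "REFUTE": "false"}
```

With the cloze form, the pre-trained masked-LM objective itself does the scoring, which is why prompting can extract more of the knowledge already stored in the PLM.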
[ { "version": "v1", "created": "Wed, 29 Jun 2022 21:07:39 GMT" } ]
2022-07-01T00:00:00
[ [ "Sahu", "Pawan Kumar", "" ], [ "Aggarwal", "Saksham", "" ], [ "Gupta", "Taneesh", "" ], [ "Das", "Gyanendra", "" ] ]
new_dataset
0.989438
2206.14977
Hongliang Liang
Hongliang Liang, Xianglin Cheng, Jie Liu, Jin Li
Multiple Targets Directed Greybox Fuzzing
14 pages, 5 figures, 10 tables
null
null
null
cs.CR cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Directed greybox fuzzing (DGF) can quickly discover or reproduce bugs in programs by seeking to reach a program location or explore some locations in order. However, due to their static stage division and coarse-grained energy scheduling, prior DGF tools perform poorly when facing multiple target locations (targets for short). In this paper, we present multiple targets directed greybox fuzzing, which aims to reach multiple program locations in a fuzzing campaign. Specifically, we propose a novel strategy to adaptively coordinate the exploration and exploitation stages, and a novel energy scheduling strategy that considers more relations between seeds and target locations. We implement our approaches in a tool called LeoFuzz and evaluate it on crash reproduction, true positive verification, and vulnerability exposure in real-world programs. Experimental results show that LeoFuzz outperforms six state-of-the-art fuzzers, i.e., QSYM, AFLGo, Lolly, Berry, Beacon and WindRanger, in terms of effectiveness and efficiency. Moreover, LeoFuzz has detected 23 new vulnerabilities in real-world programs, and 11 of them have been assigned CVE IDs.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 02:01:26 GMT" } ]
2022-07-01T00:00:00
[ [ "Liang", "Hongliang", "" ], [ "Cheng", "Xianglin", "" ], [ "Liu", "Jie", "" ], [ "Li", "Jin", "" ] ]
new_dataset
0.979964
2206.14992
Brian Hempel
Brian Hempel and Ravi Chugh
Maniposynth: Bimodal Tangible Functional Programming
ECOOP 2022 Paper + Appendices. 34 pages, 15 figures. For video figure and artifact, see https://maniposynth.org/
null
10.4230/LIPIcs.ECOOP.2022.16
null
cs.PL cs.HC
http://creativecommons.org/licenses/by/4.0/
Traditionally, writing code is a non-graphical, abstract, and linear process. Not everyone is comfortable with this way of thinking at all times. Can programming be transformed into a graphical, concrete, non-linear activity? While nodes-and-wires and blocks-based programming environments do leverage graphical direct manipulation, users perform their manipulations on abstract syntax tree elements, which are still abstract. Is it possible to be more concrete - could users instead directly manipulate live program values to create their program? We present a system, Maniposynth, that reimagines functional programming as a non-linear workflow where program expressions are spread on a 2D canvas. The live results of those expressions are continuously displayed and available for direct manipulation. The non-linear canvas liberates users to work out-of-order, and the live values can be interacted with via drag-and-drop. Incomplete programs are gracefully handled via hole expressions, which allow Maniposynth to offer program synthesis. Throughout the workflow, the program is valid OCaml code which the user may inspect and edit in their preferred text editor at any time. With Maniposynth's direct manipulation features, we created 38 programs drawn from a functional data structures course. We additionally hired two professional OCaml developers to implement a subset of these programs. We report on these experiences and discuss to what degree Maniposynth meets its goals of providing a non-linear, concrete, graphical programming workflow.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 02:52:46 GMT" } ]
2022-07-01T00:00:00
[ [ "Hempel", "Brian", "" ], [ "Chugh", "Ravi", "" ] ]
new_dataset
0.997674
2206.15007
Zhiying Zhu
Zhiying Zhu, Weixin Liang, James Zou
GSCLIP : A Framework for Explaining Distribution Shifts in Natural Language
Accepted by ICML 2022 DataPerf
null
null
null
cs.CL cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Helping end users comprehend abstract distribution shifts can greatly facilitate AI deployment. Motivated by this, we propose a novel task, dataset explanation. Given two image data sets, dataset explanation aims to automatically point out their dataset-level distribution shifts in natural language. Current techniques for monitoring distribution shifts provide inadequate information for understanding datasets with the goal of improving data quality. Therefore, we introduce GSCLIP, a training-free framework to solve the dataset explanation task. In GSCLIP, we propose the selector as the first quantitative evaluation method to identify explanations that properly summarize dataset shifts. Furthermore, we leverage this selector to demonstrate the superiority of a generator based on language model generation. Systematic evaluation on natural data shifts verifies that GSCLIP, a combined system of a hybrid generator group and an efficient selector, is not only easy to use but also powerful for dataset explanation at scale.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 04:06:26 GMT" } ]
2022-07-01T00:00:00
[ [ "Zhu", "Zhiying", "" ], [ "Liang", "Weixin", "" ], [ "Zou", "James", "" ] ]
new_dataset
0.999737
2206.15086
Ameya Pore
Ameya Pore, Martina Finocchiaro, Diego Dall'Alba, Albert Hernansanz, Gastone Ciuti, Alberto Arezzo, Arianna Menciassi, Alicia Casals, Paolo Fiorini
Colonoscopy Navigation using End-to-End Deep Visuomotor Control: A User Study
Accepted in IROS2022
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
Flexible endoscopes for colonoscopy present several limitations due to their inherent complexity, resulting in patient discomfort and lack of intuitiveness for clinicians. Robotic devices together with autonomous control represent a viable solution to reduce the workload of endoscopists and the training time while improving the overall procedure outcome. Prior works on autonomous endoscope control use heuristic policies that limit their generalisation to the unstructured and highly deformable colon environment and require frequent human intervention. This work proposes an image-based control of the endoscope using Deep Reinforcement Learning, called Deep Visuomotor Control (DVC), to exhibit adaptive behaviour in convoluted sections of the colon tract. DVC learns a mapping between the endoscopic images and the control signal of the endoscope. A first user study of 20 expert gastrointestinal endoscopists was carried out to compare their navigation performance with DVC policies using a realistic virtual simulator. The results indicate that DVC shows equivalent performance on several assessment parameters while being safer. Moreover, a second user study with 20 novice participants was performed to demonstrate easier human supervision compared to a state-of-the-art heuristic control policy. Seamless supervision of colonoscopy procedures would enable interventionists to focus on the medical decision rather than on the control problem of the endoscope.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 07:42:21 GMT" } ]
2022-07-01T00:00:00
[ [ "Pore", "Ameya", "" ], [ "Finocchiaro", "Martina", "" ], [ "Dall'Alba", "Diego", "" ], [ "Hernansanz", "Albert", "" ], [ "Ciuti", "Gastone", "" ], [ "Arezzo", "Alberto", "" ], [ "Menciassi", "Arianna", "" ], [ "Casals", "Alicia", "" ], [ "Fiorini", "Paolo", "" ] ]
new_dataset
0.997617
2206.15091
Viktoriia Korchemna
Robert Ganian and Viktoriia Korchemna
Slim Tree-Cut Width
18 pages, 5 figures, 1 table
null
null
null
cs.CC cs.DS
http://creativecommons.org/licenses/by/4.0/
Tree-cut width is a parameter that has been introduced as an attempt to obtain an analogue of treewidth for edge cuts. Unfortunately, in spite of its desirable structural properties, it turned out that tree-cut width falls short as an edge-cut based alternative to treewidth in algorithmic aspects. This has led to the very recent introduction of a simple edge-based parameter called edge-cut width [WG 2022], which has precisely the algorithmic applications one would expect from an analogue of treewidth for edge cuts, but does not have the desired structural properties. In this paper, we study a variant of tree-cut width obtained by changing the threshold for so-called thin nodes in tree-cut decompositions from 2 to 1. We show that this "slim tree-cut width" satisfies all the requirements of an edge-cut based analogue of treewidth, both structural and algorithmic, while being less restrictive than edge-cut width. Our results also include an alternative characterization of slim tree-cut width via an easy-to-use spanning-tree decomposition akin to the one used for edge-cut width, a characterization of slim tree-cut width in terms of forbidden immersions, as well as an approximation algorithm for computing the parameter.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 07:51:08 GMT" } ]
2022-07-01T00:00:00
[ [ "Ganian", "Robert", "" ], [ "Korchemna", "Viktoriia", "" ] ]
new_dataset
0.996872
2206.15102
Tingxiang Fan
Tingxiang Fan, Bowen Shen, Hua Chen, Wei Zhang and Jia Pan
DynamicFilter: an Online Dynamic Objects Removal Framework for Highly Dynamic Environments
ICRA 2022
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
The emergence of massive numbers of dynamic objects diversifies spatial structures when robots navigate in urban environments. Therefore, the online removal of dynamic objects is critical. In this paper, we introduce a novel online removal framework for highly dynamic urban environments. The framework consists of the scan-to-map front-end and the map-to-map back-end modules. Both the front- and back-ends deeply integrate the visibility-based approach and the map-based approach. The experiments validate the framework in highly dynamic simulation scenarios and real-world datasets.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 08:07:47 GMT" } ]
2022-07-01T00:00:00
[ [ "Fan", "Tingxiang", "" ], [ "Shen", "Bowen", "" ], [ "Chen", "Hua", "" ], [ "Zhang", "Wei", "" ], [ "Pan", "Jia", "" ] ]
new_dataset
0.996673
2206.15154
Georgi Pramatarov
Georgi Pramatarov, Daniele De Martini, Matthew Gadd, Paul Newman
BoxGraph: Semantic Place Recognition and Pose Estimation from 3D LiDAR
Accepted for publication at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is about extremely robust and lightweight localisation using LiDAR point clouds based on instance segmentation and graph matching. We model 3D point clouds as fully-connected graphs of semantically identified components where each vertex corresponds to an object instance and encodes its shape. Optimal vertex association across graphs allows for full 6-Degree-of-Freedom (DoF) pose estimation and place recognition by measuring similarity. This representation is very concise, condensing the size of maps by a factor of 25 against the state-of-the-art, requiring only 3kB to represent a 1.4MB laser scan. We verify the efficacy of our system on the SemanticKITTI dataset, where we achieve a new state-of-the-art in place recognition, with an average of 88.4% recall at 100% precision where the next closest competitor follows with 64.9%. We also show accurate metric pose estimation performance - estimating 6-DoF pose with median errors of 10 cm and 0.33 deg.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 09:39:08 GMT" } ]
2022-07-01T00:00:00
[ [ "Pramatarov", "Georgi", "" ], [ "De Martini", "Daniele", "" ], [ "Gadd", "Matthew", "" ], [ "Newman", "Paul", "" ] ]
new_dataset
0.997858
2206.15170
Ardi Tampuu
Ardi Tampuu, Romet Aidla, Jan Are van Gent, Tambet Matiisen
LiDAR-as-Camera for End-to-End Driving
null
null
null
null
cs.AI cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
The core task of any autonomous driving system is to transform sensory inputs into driving commands. In end-to-end driving, this is achieved via a neural network, with one or multiple cameras as the most commonly used input and a low-level driving command, e.g. steering angle, as output. However, depth-sensing has been shown in simulation to make the end-to-end driving task easier. On a real car, combining depth and visual information can be challenging, due to the difficulty of obtaining good spatial and temporal alignment of the sensors. To alleviate alignment problems, Ouster LiDARs can output surround-view LiDAR-images with depth, intensity, and ambient radiation channels. These measurements originate from the same sensor, rendering them perfectly aligned in time and space. We demonstrate that such LiDAR-images are sufficient for the real-car road-following task and perform at least on par with camera-based models in the tested conditions, with the difference increasing when needing to generalize to new weather conditions. In the second direction of study, we reveal that the temporal smoothness of off-policy prediction sequences correlates equally well with actual on-policy driving ability as the commonly used mean absolute error.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 10:06:49 GMT" } ]
2022-07-01T00:00:00
[ [ "Tampuu", "Ardi", "" ], [ "Aidla", "Romet", "" ], [ "van Gent", "Jan Are", "" ], [ "Matiisen", "Tambet", "" ] ]
new_dataset
0.999761
2206.15219
alexander lerch
Alexander Lerch
libACA, pyACA, and ACA-Code: Audio Content Analysis in 3 Languages
Preprint submitted to "Software Impacts"
null
null
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
The three packages libACA, pyACA, and ACA-Code provide reference implementations for basic approaches and algorithms for the analysis of musical audio signals in three different languages: C++, Python, and Matlab. All three packages cover the same algorithms, such as extraction of low-level audio features, fundamental frequency estimation, as well as simple approaches to chord recognition, musical key detection, and onset detection. In addition, implementations of more generic algorithms useful in audio content analysis, such as dynamic time warping and the Viterbi algorithm, are provided. The three packages thus provide a practical cross-language and cross-platform reference to students and engineers implementing audio analysis algorithms and enable implementation-focused learning of algorithms for audio content analysis and music information retrieval.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 12:09:41 GMT" } ]
2022-07-01T00:00:00
[ [ "Lerch", "Alexander", "" ] ]
new_dataset
0.999687
2206.15276
Kyle Kastner
Kyle Kastner, Aaron Courville
R-MelNet: Reduced Mel-Spectral Modeling for Neural TTS
null
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by-sa/4.0/
This paper introduces R-MelNet, a two-part autoregressive architecture with a frontend based on the first tier of MelNet and a backend WaveRNN-style audio decoder for neural text-to-speech synthesis. Taking as input a mixed sequence of characters and phonemes, with an optional audio priming sequence, this model produces low-resolution mel-spectral features which are interpolated and used by a WaveRNN decoder to produce an audio waveform. Coupled with half precision training, R-MelNet uses under 11 gigabytes of GPU memory on a single commodity GPU (NVIDIA 2080Ti). We detail a number of critical implementation details for stable half precision training, including an approximate, numerically stable mixture of logistics attention. Using a stochastic, multi-sample per step inference scheme, the resulting model generates highly varied audio, while enabling text and audio based controls to modify output waveforms. Qualitative and quantitative evaluations of an R-MelNet system trained on a single speaker TTS dataset demonstrate the effectiveness of our approach.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 13:29:31 GMT" } ]
2022-07-01T00:00:00
[ [ "Kastner", "Kyle", "" ], [ "Courville", "Aaron", "" ] ]
new_dataset
0.963633