Dataset schema (field, dtype, observed range/values):

  id              stringlengths   9–10
  submitter       stringlengths   2–52
  authors         stringlengths   4–6.51k
  title           stringlengths   4–246
  comments        stringlengths   1–523
  journal-ref     stringlengths   4–345
  doi             stringlengths   11–120
  report-no       stringlengths   2–243
  categories      stringlengths   5–98
  license         stringclasses   9 values
  abstract        stringlengths   33–3.33k
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      stringclasses   1 value ("new_dataset")
  probability     float64         0.95–1
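Records with this schema can be loaded and filtered with the Hugging Face `datasets` library; a minimal sketch follows, assuming a hypothetical repository id (this dump does not name one):

```python
# Minimal sketch: loading arXiv-metadata records with the schema above and
# keeping high-confidence "new_dataset" predictions. The repo id is a
# placeholder, not the actual source of this dump.
from datasets import load_dataset

ds = load_dataset("user/arxiv-new-dataset-predictions", split="train")  # hypothetical id
confident = ds.filter(
    lambda r: r["prediction"] == "new_dataset" and r["probability"] >= 0.99
)
for row in confident.select(range(3)):
    print(row["id"], "-", row["title"])
```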
2209.13718
Sanjana Chintalapati
Sanjana Chintalapati, Jonathan Bragg, Lucy Lu Wang
A Dataset of Alt Texts from HCI Publications: Analyses and Uses Towards Producing More Descriptive Alt Texts of Data Visualizations in Scientific Papers
11 pages, 4 figures, 4 tables, published at ASSETS 2022
null
10.1145/3517428.3544796
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Figures in scientific publications contain important information and results, and alt text is needed for blind and low vision readers to engage with their content. We conduct a study to characterize the semantic content of alt text in HCI publications based on a framework introduced by Lundgard and Satyanarayan. Our study focuses on alt text for graphs, charts, and plots extracted from HCI and accessibility publications; we focus on these communities due to the lack of alt text in papers published outside of these disciplines. We find that the capacity of author-written alt text to fulfill blind and low vision user needs is mixed; for example, only 50% of alt texts in our sample contain information about extrema or outliers, and only 31% contain information about major trends or comparisons conveyed by the graph. We release our collected dataset of author-written alt text, and outline possible ways that it can be used to develop tools and models to assist future authors in writing better alt text. Based on our findings, we also discuss recommendations that can be acted upon by publishers and authors to encourage inclusion of more types of semantic content in alt text.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 22:06:04 GMT" } ]
2022-09-29T00:00:00
[ [ "Chintalapati", "Sanjana", "" ], [ "Bragg", "Jonathan", "" ], [ "Wang", "Lucy Lu", "" ] ]
new_dataset
0.999861
2209.13738
Vitor Jeronymo
Vitor Jeronymo, Mauricio Nascimento, Roberto Lotufo and Rodrigo Nogueira
mRobust04: A Multilingual Version of the TREC Robust 2004 Benchmark
4 pages
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Robust 2004 is an information retrieval benchmark whose large number of judgments per query makes it a reliable evaluation dataset. In this paper, we present mRobust04, a multilingual version of Robust04 translated into 8 languages using Google Translate. We also provide results of three different multilingual retrievers on this dataset. The dataset is available at https://huggingface.co/datasets/unicamp-dl/mrobust
[ { "version": "v1", "created": "Tue, 27 Sep 2022 23:14:37 GMT" } ]
2022-09-29T00:00:00
[ [ "Jeronymo", "Vitor", "" ], [ "Nascimento", "Mauricio", "" ], [ "Lotufo", "Roberto", "" ], [ "Nogueira", "Rodrigo", "" ] ]
new_dataset
0.999841
2209.13750
Andrey Kutuzov
Anna Aksenova, Ekaterina Gavrishina, Elisey Rykov, Andrey Kutuzov
RuDSI: graph-based word sense induction dataset for Russian
TextGraphs-16 workshop at the CoLING-2022 conference
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present RuDSI, a new benchmark for word sense induction (WSI) in Russian. The dataset was created using manual annotation and semi-automatic clustering of Word Usage Graphs (WUGs). Unlike prior WSI datasets for Russian, RuDSI is completely data-driven (based on texts from the Russian National Corpus), with no external word senses imposed on annotators. Depending on the parameters of graph clustering, different derivative datasets can be produced from the raw annotation. We report the performance that several baseline WSI methods obtain on RuDSI and discuss possibilities for improving these scores.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 00:08:24 GMT" } ]
2022-09-29T00:00:00
[ [ "Aksenova", "Anna", "" ], [ "Gavrishina", "Ekaterina", "" ], [ "Rykov", "Elisey", "" ], [ "Kutuzov", "Andrey", "" ] ]
new_dataset
0.999668
2209.13773
Peilin Zhou
Peilin Zhou, Zeqiang Wang, Dading Chong, Zhijiang Guo, Yining Hua, Zichang Su, Zhiyang Teng, Jiageng Wu, Jie Yang
METS-CoV: A Dataset of Medical Entity and Targeted Sentiment on COVID-19 Related Tweets
10 pages, 6 figures, 6 tables, accepted by NeurIPS 2022 Datasets and Benchmarks track
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The COVID-19 pandemic continues to bring up various topics discussed or debated on social media. In order to explore the impact of pandemics on people's lives, it is crucial to understand the public's concerns and attitudes towards pandemic-related entities (e.g., drugs, vaccines) on social media. However, models trained on existing named entity recognition (NER) or targeted sentiment analysis (TSA) datasets have limited ability to understand COVID-19-related social media texts because these datasets are not designed or annotated from a medical perspective. This paper releases METS-CoV, a dataset containing medical entities and targeted sentiments from COVID-19-related tweets. METS-CoV contains 10,000 tweets with 7 types of entities, including 4 medical entity types (Disease, Drug, Symptom, and Vaccine) and 3 general entity types (Person, Location, and Organization). To further investigate tweet users' attitudes toward specific entities, 4 types of entities (Person, Organization, Drug, and Vaccine) are selected and annotated with user sentiments, resulting in a targeted sentiment dataset with 9,101 entities (in 5,278 tweets). To the best of our knowledge, METS-CoV is the first dataset to collect medical entities and corresponding sentiments of COVID-19-related tweets. We benchmark the performance of classical machine learning models and state-of-the-art deep learning models on NER and TSA tasks with extensive experiments. Results show that the dataset has vast room for improvement for both NER and TSA tasks. METS-CoV is an important resource for developing better medical social media tools and facilitating computational social science research, especially in epidemiology. Our data, annotation guidelines, benchmark models, and source code are publicly available (https://github.com/YLab-Open/METS-CoV) to ensure reproducibility.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 01:55:14 GMT" } ]
2022-09-29T00:00:00
[ [ "Zhou", "Peilin", "" ], [ "Wang", "Zeqiang", "" ], [ "Chong", "Dading", "" ], [ "Guo", "Zhijiang", "" ], [ "Hua", "Yining", "" ], [ "Su", "Zichang", "" ], [ "Teng", "Zhiyang", "" ], [ "Wu", "Jiageng", "" ], [ "Yang", "Jie", "" ] ]
new_dataset
0.999728
2209.13801
Maoxun Yuan
Maoxun Yuan, Yinyan Wang, Xingxing Wei
Translation, Scale and Rotation: Cross-Modal Alignment Meets RGB-Infrared Vehicle Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Integrating multispectral data in object detection, especially visible and infrared images, has received great attention in recent years. Since visible (RGB) and infrared (IR) images can provide complementary information to handle light variations, the paired images are used in many fields, such as multispectral pedestrian detection, RGB-IR crowd counting and RGB-IR salient object detection. Compared with natural RGB-IR images, we find that detection in aerial RGB-IR images suffers from cross-modal weak misalignment problems, which manifest as position, size and angle deviations of the same object. In this paper, we mainly address the challenge of cross-modal weak misalignment in aerial RGB-IR images. Specifically, we first explain and analyze the cause of the weak misalignment problem. Then, we propose a Translation-Scale-Rotation Alignment (TSRA) module to address the problem by calibrating the feature maps from these two modalities. The module predicts the deviation between two modality objects through an alignment process and utilizes a Modality-Selection (MS) strategy to improve the performance of alignment. Finally, a two-stream feature alignment detector (TSFADet) based on the TSRA module is constructed for RGB-IR object detection in aerial images. With comprehensive experiments on the public DroneVehicle dataset, we verify that our method reduces the effect of cross-modal misalignment and achieves robust detection results.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 03:06:18 GMT" } ]
2022-09-29T00:00:00
[ [ "Yuan", "Maoxun", "" ], [ "Wang", "Yinyan", "" ], [ "Wei", "Xingxing", "" ] ]
new_dataset
0.998067
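The alignment idea behind the TSRA module above, warping one modality's feature map by a predicted translation, scale and rotation, can be sketched as a differentiable affine warp in PyTorch. This is an illustrative reconstruction under assumed shapes and parameter names, not the authors' implementation:

```python
# Illustrative sketch of translation-scale-rotation feature alignment
# (not the authors' TSRA code): a predicted deviation (tx, ty, scale, angle)
# warps the IR feature map so it lines up with the RGB feature map.
import torch
import torch.nn.functional as F

def align_features(feat_ir, tx, ty, scale, angle):
    """feat_ir: (N, C, H, W); tx, ty, scale, angle: per-batch scalars."""
    cos, sin = torch.cos(angle), torch.sin(angle)
    # Build (N, 2, 3) affine matrices in normalized coordinates.
    theta = torch.stack([
        torch.stack([scale * cos, -scale * sin, tx], dim=-1),
        torch.stack([scale * sin,  scale * cos, ty], dim=-1),
    ], dim=-2)
    grid = F.affine_grid(theta, feat_ir.shape, align_corners=False)
    return F.grid_sample(feat_ir, grid, align_corners=False)

feat_ir = torch.randn(2, 64, 32, 32)
tx = torch.zeros(2); ty = torch.zeros(2)
scale = torch.ones(2); angle = torch.full((2,), 0.1)
aligned = align_features(feat_ir, tx, ty, scale, angle)  # (2, 64, 32, 32)
```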
2209.13815
Yuntao Wang
Yuntao Wang, Zhou Su, Abderrahim Benslimane, Qichao Xu, Minghui Dai, and Ruidong Li
A Learning-based Honeypot Game for Collaborative Defense in UAV Networks
Accepted by IEEE Globecom2022
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The proliferation of unmanned aerial vehicles (UAVs) opens up new opportunities for on-demand service provisioning anywhere and anytime, but it also exposes UAVs to various cyber threats. Low/medium-interaction honeypots are regarded as a promising lightweight defense for actively protecting the mobile Internet of Things, especially UAV networks. While existing works primarily focus on honeypot design and attack pattern recognition, the incentive issue of motivating UAVs' participation (e.g., sharing trapped attack data in honeypots) to collaboratively resist distributed and sophisticated attacks remains under-explored. This paper proposes a novel game-based collaborative defense approach to address optimal, fair, and feasible incentive mechanism design, in the presence of network dynamics and UAVs' multi-dimensional private information (e.g., valid defense data (VDD) volume, communication delay, and UAV cost). Specifically, we first develop a honeypot game between UAVs under both partial and complete information asymmetry scenarios. We then devise a contract-theoretic method to solve the optimal VDD-reward contract design problem with partial information asymmetry, while ensuring truthfulness, fairness, and computational efficiency. Furthermore, under complete information asymmetry, we devise a reinforcement learning based distributed method to dynamically design optimal contracts for distinct types of UAVs in the fast-changing network. Experimental simulations show that the proposed scheme can motivate UAVs' collaboration in VDD sharing and enhance defensive effectiveness, compared with existing solutions.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 03:40:06 GMT" } ]
2022-09-29T00:00:00
[ [ "Wang", "Yuntao", "" ], [ "Su", "Zhou", "" ], [ "Benslimane", "Abderrahim", "" ], [ "Xu", "Qichao", "" ], [ "Dai", "Minghui", "" ], [ "Li", "Ruidong", "" ] ]
new_dataset
0.9717
2209.13833
Yang Shen
Yang Shen, Xuhao Sun, Xiu-Shen Wei, Qing-Yuan Jiang, Jian Yang
SEMICON: A Learning-to-hash Solution for Large-scale Fine-grained Image Retrieval
ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose Suppression-Enhancing Mask based attention and Interactive Channel transformatiON (SEMICON) to learn binary hash codes for dealing with large-scale fine-grained image retrieval tasks. In SEMICON, we first develop a suppression-enhancing mask (SEM) based attention to dynamically localize discriminative image regions. More importantly, different from existing attention mechanisms that simply erase previous discriminative regions, our SEM restrains such regions and then discovers other complementary regions by considering the relation between activated regions in a stage-by-stage fashion. In each stage, the interactive channel transformation (ICON) module is then designed to exploit correlations across channels of attended activation tensors. Since channels generally correspond to the parts of fine-grained objects, the part correlation can also be modeled accordingly, which further improves fine-grained retrieval accuracy. Moreover, for computational economy, ICON is realized by an efficient two-step process. Finally, the hash learning of our SEMICON consists of both global- and local-level branches for better representing fine-grained objects and then generating binary hash codes explicitly corresponding to multiple levels. Experiments on five benchmark fine-grained datasets show our superiority over competing methods.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 04:38:04 GMT" } ]
2022-09-29T00:00:00
[ [ "Shen", "Yang", "" ], [ "Sun", "Xuhao", "" ], [ "Wei", "Xiu-Shen", "" ], [ "Jiang", "Qing-Yuan", "" ], [ "Yang", "Jian", "" ] ]
new_dataset
0.997907
2209.13846
Haotian Xia
Haotian Xia, Rhys Tracy, Yun Zhao, Erwan Fraisse, Yuan-Fang Wang, Linda Petzold
VREN: Volleyball Rally Dataset with Expression Notation Language
ICKG 2022
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
This research is intended to accomplish two goals: The first goal is to curate a large and information-rich dataset that contains crucial and succinct summaries of the players' actions and positions and the back-and-forth travel patterns of the volleyball in professional and NCAA Div-I indoor volleyball games. While several prior studies have aimed to create similar datasets for other sports (e.g. badminton and soccer), no such dataset for indoor volleyball has yet been realized. The second goal is to introduce a volleyball descriptive language to fully describe the rally processes in the games and apply the language to our dataset. Based on the curated dataset and our descriptive sports language, we introduce three tasks for automated volleyball action and tactic analysis using our dataset: (1) Volleyball Rally Prediction, aimed at predicting the outcome of a rally and helping players and coaches improve decision-making in practice, (2) Setting Type and Hitting Type Prediction, to help coaches and players prepare more effectively for the game, and (3) Volleyball Tactics and Attacking Zone Statistics, to provide advanced volleyball statistics and help coaches understand the game and opponent's tactics better. We conducted case studies to show how experimental results can provide insights to the volleyball analysis community. Furthermore, experimental evaluation based on real-world data establishes a baseline for future studies and applications of our dataset and language. This study bridges the gap between the indoor volleyball field and computer science.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 05:52:35 GMT" } ]
2022-09-29T00:00:00
[ [ "Xia", "Haotian", "" ], [ "Tracy", "Rhys", "" ], [ "Zhao", "Yun", "" ], [ "Fraisse", "Erwan", "" ], [ "Wang", "Yuan-Fang", "" ], [ "Petzold", "Linda", "" ] ]
new_dataset
0.999898
2209.13850
Tuba Girgin
T. Baturhan Akbulut, G. Tuba C. Girgin, Arash Mehrabi, Minoru Asada, Emre Ugur, Erhan Oztop
Bimanual rope manipulation skill synthesis through context dependent correction policy learning from human demonstration
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Learning from demonstration (LfD) provides a convenient means to equip robots with dexterous skills when demonstrations can be obtained in robot-intrinsic coordinates. However, the problem of compounding errors in long and complex skills hinders its wide deployment. Since most such complex skills are composed of smaller movements that are combined, considering the target skill as a sequence of compact motor primitives seems reasonable. The problem that then needs to be tackled is to ensure that a motor primitive ends in a state that allows the successful execution of the subsequent primitive. In this study, we focus on this problem by proposing to learn an explicit correction policy for cases when the expected transition state between primitives is not achieved. The correction policy is itself learned via behavior cloning by the use of a state-of-the-art movement primitive learning architecture, Conditional Neural Motor Primitives (CNMPs). The learned correction policy is then able to produce diverse movement trajectories in a context-dependent way. The advantage of the proposed system over learning the complete task as a single action is shown with a table-top setup in simulation, where an object has to be pushed through a corridor in two steps. Then, the applicability of the proposed method to bimanual knotting in the real world is shown by equipping an upper-body humanoid robot with the skill of making knots over a bar in 3D space. The experiments show that the robot can perform successful knotting even when the correction cases faced are not part of the human demonstration set.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 06:07:40 GMT" } ]
2022-09-29T00:00:00
[ [ "Akbulut", "T. Baturhan", "" ], [ "Girgin", "G. Tuba C.", "" ], [ "Mehrabi", "Arash", "" ], [ "Asada", "Minoru", "" ], [ "Ugur", "Emre", "" ], [ "Oztop", "Erhan", "" ] ]
new_dataset
0.950433
2209.13875
Thanh-Trung Ngo Mr
Thanh-Trung Ngo and Hajime Nagahara
A General Scattering Phase Function for Inverse Rendering
null
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
We tackle the problem of modeling light scattering in homogeneous translucent material and estimating its scattering parameters. The scattering phase function is one such parameter, and it affects the distribution of scattered radiation. It is the most complex and challenging parameter to model in practice, so empirical phase functions are usually used. Empirical phase functions (such as the Henyey-Greenstein (HG) phase function or its modified versions) are typically limited to a specific range of scattering materials. This limitation raises concern for an inverse rendering problem where the target material is generally unknown. In such a situation, a more general phase function is preferred. Although such a general phase function exists in polynomial form using a basis such as Legendre polynomials \cite{Fowler1983}, inverse rendering with this phase function is not straightforward. This is because the basis polynomials may be negative somewhere, while a phase function cannot be. This research presents a novel general phase function that avoids this issue, and an inverse rendering application using this phase function. The proposed phase function was positively evaluated with a wide range of materials modeled with Mie scattering theory. The scattering parameter estimation with the proposed phase function was evaluated with simulation and real-world experiments.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 07:19:05 GMT" } ]
2022-09-29T00:00:00
[ [ "Ngo", "Thanh-Trung", "" ], [ "Nagahara", "Hajime", "" ] ]
new_dataset
0.993373
2209.13894
Fabian Huch
Fabian Huch, Vincent Bode
The Isabelle Community Benchmark
null
Proceedings of the Workshop on Practical Aspects of Automated Reasoning Vol-3201 (2022)
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Choosing hardware for theorem proving is no simple task: automated provers are highly complex and optimized programs, often utilizing a parallel computation model, and there is little prior research on the hardware impact on prover performance. To alleviate the problem for Isabelle, we initiated a community benchmark where the build time of HOL-Analysis is measured. On $54$ distinct CPUs, a total of $669$ runs with different Isabelle configurations were reported by Isabelle users. Results range from $107$s to over $11$h. We found that current consumer CPUs performed best, with an optimal number of $8$ to $16$ threads, largely independent of heap memory. As for hardware parameters, CPU base clock affected multi-threaded execution most with a linear correlation of $0.37$, whereas boost frequency was the most influential parameter for single-threaded runs (correlation coefficient $0.55$); cache size played no significant role. When comparing our benchmark scores with popular high-performance computing benchmarks, we found a strong linear relationship with Dolfyn ($R^2 = 0.79$) in the single-threaded scenario. Using data from the 3DMark CPU Profile consumer benchmark, we created a linear model for optimal (multi-threaded) Isabelle performance. When validating, the model has an average $R^2$-score of $0.87$; the mean absolute error in the final model corresponds to a wall-clock time of $46.6$s. With a dataset of true median values for the 3DMark, the error improves to $37.1$s.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 07:48:50 GMT" } ]
2022-09-29T00:00:00
[ [ "Huch", "Fabian", "" ], [ "Bode", "Vincent", "" ] ]
new_dataset
0.99301
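The modelling step described in the Isabelle benchmark abstract, a linear fit of build time against a consumer CPU benchmark score reported with $R^2$ and a mean absolute error, can be sketched as follows; the data points are synthetic placeholders, not the community results:

```python
# Sketch of the paper's modelling step: an ordinary least-squares fit of
# benchmark wall-clock time against a consumer CPU score, with R^2 and MAE.
# The data points below are synthetic stand-ins, not the reported data.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, r2_score

cpu_score = np.array([[6000], [8000], [10000], [12000], [14000]])  # 3DMark-like
build_time = np.array([820.0, 640.0, 530.0, 455.0, 410.0])         # seconds

model = LinearRegression().fit(cpu_score, build_time)
pred = model.predict(cpu_score)
print(f"R^2 = {r2_score(build_time, pred):.2f}, "
      f"MAE = {mean_absolute_error(build_time, pred):.1f}s")
```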
2209.13916
Changyi Lin
Changyi Lin, Ziqi Lin, Shaoxiong Wang, Huazhe Xu
DTact: A Vision-Based Tactile Sensor that Measures High-Resolution 3D Geometry Directly from Darkness
Project website of DTact: https://sites.google.com/view/dtact-sensor
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-based tactile sensors that can measure the 3D geometry of contacting objects are crucial for robots to perform dexterous manipulation tasks. However, existing sensors are usually complicated to fabricate and delicate to extend. In this work, we take advantage of the reflection property of a semitransparent elastomer to design a robust, low-cost, and easy-to-fabricate tactile sensor named DTact. DTact measures high-resolution 3D geometry accurately from the darkness shown in the captured tactile images, with only a single image needed for calibration. In contrast to previous sensors, DTact is robust under various illumination conditions. Then, we build prototypes of DTact that have non-planar contact surfaces with minimal extra effort and cost. Finally, we perform two intelligent robotic tasks, pose estimation and object recognition, using DTact, in which DTact shows great potential for applications.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 08:39:27 GMT" } ]
2022-09-29T00:00:00
[ [ "Lin", "Changyi", "" ], [ "Lin", "Ziqi", "" ], [ "Wang", "Shaoxiong", "" ], [ "Xu", "Huazhe", "" ] ]
new_dataset
0.996138
2209.13925
Jiayin Cai
Jiayin Cai, Changlin Li, Xin Tao, Chun Yuan and Yu-Wing Tai
DeViT: Deformed Vision Transformers in Video Inpainting
null
ACMMM'22, October 10-14, 2022, Lisboa, Portugal
10.1145/3503161.3548395
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper proposes a novel video inpainting method. We make three main contributions: First, we extend previous Transformers with patch alignment by introducing Deformed Patch-based Homography (DePtH), which improves patch-level feature alignment without additional supervision and benefits challenging scenes with various deformations. Second, we introduce Mask Pruning-based Patch Attention (MPPA) to improve patch-wise feature matching by pruning out less essential features and using a saliency map. MPPA enhances matching accuracy between warped tokens with invalid pixels. Third, we introduce a Spatial-Temporal weighting Adaptor (STA) module to obtain accurate attention to spatial-temporal tokens under the guidance of the Deformation Factor learned from DePtH, especially for videos with agile motions. Experimental results demonstrate that our method outperforms recent methods qualitatively and quantitatively and achieves a new state-of-the-art.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 08:57:14 GMT" } ]
2022-09-29T00:00:00
[ [ "Cai", "Jiayin", "" ], [ "Li", "Changlin", "" ], [ "Tao", "Xin", "" ], [ "Yuan", "Chun", "" ], [ "Tai", "Yu-Wing", "" ] ]
new_dataset
0.987192
2209.13948
Zhiyang Chen
Zhiyang Chen, Yousong Zhu, Zhaowen Li, Fan Yang, Wei Li, Haixin Wang, Chaoyang Zhao, Liwei Wu, Rui Zhao, Jinqiao Wang, Ming Tang
Obj2Seq: Formatting Objects as Sequences with Class Prompt for Visual Tasks
Accepted by NeurIPS 2022. Code available at https://github.com/CASIA-IVA-Lab/Obj2Seq
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual tasks vary widely in their output formats and the contents they concern, which makes it hard to process them with an identical structure. One main obstacle lies in the high-dimensional outputs of object-level visual tasks. In this paper, we propose an object-centric vision framework, Obj2Seq. Obj2Seq takes objects as basic units and regards most object-level visual tasks as sequence generation problems over objects. These visual tasks can therefore be decoupled into two steps: first recognize objects of given categories, then generate a sequence for each of these objects. The definition of the output sequences varies for different tasks, and the model is supervised by matching these sequences with ground-truth targets. Obj2Seq is able to flexibly determine input categories to satisfy customized requirements, and can be easily extended to different visual tasks. When experimenting on MS COCO, Obj2Seq achieves 45.7% AP on object detection, 89.0% AP on multi-label classification and 65.0% AP on human pose estimation. These results demonstrate its potential to be generally applied to different visual tasks. Code has been made available at: https://github.com/CASIA-IVA-Lab/Obj2Seq.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 09:24:04 GMT" } ]
2022-09-29T00:00:00
[ [ "Chen", "Zhiyang", "" ], [ "Zhu", "Yousong", "" ], [ "Li", "Zhaowen", "" ], [ "Yang", "Fan", "" ], [ "Li", "Wei", "" ], [ "Wang", "Haixin", "" ], [ "Zhao", "Chaoyang", "" ], [ "Wu", "Liwei", "" ], [ "Zhao", "Rui", "" ], [ "Wang", "Jinqiao", "" ], [ "Tang", "Ming", "" ] ]
new_dataset
0.99953
2209.13959
Fengyuan Shi
Fengyuan Shi, Ruopeng Gao, Weilin Huang, Limin Wang
Dynamic MDETR: A Dynamic Multimodal Transformer Decoder for Visual Grounding
Technical report
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The multimodal transformer exhibits high capacity and flexibility for aligning image and text in visual grounding. However, the encoder-only grounding framework (e.g., TransVG) suffers from heavy computation due to the self-attention operation with quadratic time complexity. To address this issue, we present a new multimodal transformer architecture, coined Dynamic MDETR, which decouples the whole grounding process into encoding and decoding phases. The key observation is that there exists high spatial redundancy in images. Thus, we devise a new dynamic multimodal transformer decoder that exploits this sparsity prior to speed up the visual grounding process. Specifically, our dynamic decoder is composed of a 2D adaptive sampling module and a text-guided decoding module. The sampling module selects informative patches by predicting offsets with respect to a reference point, while the decoding module extracts the grounded object information by performing cross attention between image features and text features. These two modules are stacked alternately to gradually bridge the modality gap and iteratively refine the reference point of the grounded object, eventually realizing the objective of visual grounding. Extensive experiments on five benchmarks demonstrate that our proposed Dynamic MDETR achieves competitive trade-offs between computation and accuracy. Notably, using only 9% of the feature points in the decoder, we can reduce the GFLOPs of the multimodal transformer by ~44%, yet still obtain higher accuracy than the encoder-only counterpart. In addition, to verify its generalization ability and scale up our Dynamic MDETR, we build the first one-stage CLIP-empowered visual grounding framework, and achieve state-of-the-art performance on these benchmarks.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 09:43:02 GMT" } ]
2022-09-29T00:00:00
[ [ "Shi", "Fengyuan", "" ], [ "Gao", "Ruopeng", "" ], [ "Huang", "Weilin", "" ], [ "Wang", "Limin", "" ] ]
new_dataset
0.996848
2209.13999
Ahmad Baraani
Fereshteh Khoshnam, Ahmad Baraani-Dastjerdi, M.J. Liaghatdar
CEFER: A Four Facets Framework based on Context and Emotion embedded features for Implicit and Explicit Emotion Recognition
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
People's conduct and reactions are driven by their emotions. Online social media is becoming a great instrument for expressing emotions in written form. Paying attention to the context and the entire sentence helps us detect emotion from texts. However, this perspective keeps us from noticing some emotional words or phrases in the text, particularly when the words express an emotion implicitly rather than explicitly. On the other hand, focusing only on the words and ignoring the context results in a distorted understanding of the sentence's meaning and feeling. In this paper, we propose a framework that analyses text at both the sentence and word levels. We name it CEFER (Context and Emotion embedded Framework for Emotion Recognition). The four facets of our approach are extracting data by considering the entire sentence and each individual word simultaneously, as well as implicit and explicit emotions. The knowledge gained from these data not only mitigates the impact of flaws in the preceding approaches but also strengthens the feature vector. We evaluate several feature spaces using the BERT family and design CEFER based on them. CEFER combines the emotional vector of each word, including explicit and implicit emotions, with the feature vector of each word based on context. CEFER performs better than the BERT family. The experimental results demonstrate that identifying implicit emotions is more challenging than detecting explicit ones, and CEFER improves the accuracy of implicit emotion recognition. According to the results, CEFER performs 5% better than the BERT family in recognizing explicit emotions and 3% better in recognizing implicit ones.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 11:16:32 GMT" } ]
2022-09-29T00:00:00
[ [ "Khoshnam", "Fereshteh", "" ], [ "Baraani-Dastjerdi", "Ahmad", "" ], [ "Liaghatdar", "M. J.", "" ] ]
new_dataset
0.997572
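A rough sketch of the kind of word-level feature combination the CEFER abstract describes, pairing each word's contextual vector with an emotion vector, is shown below. The toy lexicon and the choice of bert-base-uncased are stand-ins, not the paper's setup:

```python
# Illustrative sketch (not the authors' code): concatenate each token's
# contextual BERT vector with a lexicon-based emotion vector, echoing the
# context-plus-emotion feature described in the abstract.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

EMOTION_LEXICON = {  # toy scores: (anger, joy, sadness)
    "furious": [0.9, 0.0, 0.1],
    "delighted": [0.0, 0.9, 0.0],
    "alone": [0.1, 0.0, 0.8],
}

def word_features(sentence: str) -> torch.Tensor:
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        ctx = model(**enc).last_hidden_state[0]           # (seq_len, 768)
    feats = []
    for tok, vec in zip(tokenizer.convert_ids_to_tokens(enc["input_ids"][0]), ctx):
        emo = torch.tensor(EMOTION_LEXICON.get(tok, [0.0, 0.0, 0.0]))
        feats.append(torch.cat([vec, emo]))               # context (+) emotion
    return torch.stack(feats)                             # (seq_len, 771)

print(word_features("I was furious about the delay").shape)
```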
2209.14003
Rajitha de Silva
Rajitha de Silva, Grzegorz Cielniak, Junfeng Gao
Vision based Crop Row Navigation under Varying Field Conditions in Arable Fields
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
Accurate crop row detection is often challenged by the varying field conditions present in real-world arable fields. Traditional colour-based segmentation is unable to cater for all such variations. The lack of comprehensive datasets in agricultural environments prevents researchers from developing robust segmentation models to detect crop rows. We present a dataset for crop row detection with 11 field variations from Sugar Beet and Maize crops. We also present a novel crop row detection algorithm for visual servoing in crop row fields. Our algorithm can detect crop rows against varying field conditions such as curved crop rows, weed presence, discontinuities, growth stages, tramlines, shadows and light levels. Our method uses only RGB images from a front-mounted camera on a Husky robot to predict crop rows. Our method outperformed the classic colour-based crop row detection baseline. Dense weed presence within inter-row space and discontinuities in crop rows were the most challenging field conditions for our crop row detection algorithm. Our method can detect the end of the crop row and navigate the robot towards the headland area when it reaches the end of the crop row.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 11:23:34 GMT" } ]
2022-09-29T00:00:00
[ [ "de Silva", "Rajitha", "" ], [ "Cielniak", "Grzegorz", "" ], [ "Gao", "Junfeng", "" ] ]
new_dataset
0.98626
2209.14024
Jiale Tao
Jiale Tao, Biao Wang, Tiezheng Ge, Yuning Jiang, Wen Li, and Lixin Duan
Motion Transformer for Unsupervised Image Animation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image animation aims to animate a source image by using motion learned from a driving video. Current state-of-the-art methods typically use convolutional neural networks (CNNs) to predict motion information, such as motion keypoints and corresponding local transformations. However, these CNN-based methods do not explicitly model the interactions between motions; as a result, the important underlying motion relationship may be neglected, which can potentially lead to noticeable artifacts being produced in the generated animation video. To this end, we propose a new method, the motion transformer, which is the first attempt to build a motion estimator based on a vision transformer. More specifically, we introduce two types of tokens in our proposed method: i) image tokens formed from patch features and corresponding position encoding; and ii) motion tokens encoded with motion information. Both types of tokens are sent into vision transformers to promote underlying interactions between them through multi-head self-attention blocks. By adopting this process, the motion information can be better learned to boost the model performance. The final embedded motion tokens are then used to predict the corresponding motion keypoints and local transformations. Extensive experiments on benchmark datasets show that our proposed method achieves promising results compared to the state-of-the-art baselines. Our source code will be made publicly available.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 12:04:58 GMT" } ]
2022-09-29T00:00:00
[ [ "Tao", "Jiale", "" ], [ "Wang", "Biao", "" ], [ "Ge", "Tiezheng", "" ], [ "Jiang", "Yuning", "" ], [ "Li", "Wen", "" ], [ "Duan", "Lixin", "" ] ]
new_dataset
0.977435
2209.14085
Pauline Puteaux
Moctar Abdoul Latif Sawadogo, Furkan Pala, Gurkirat Singh, Imen Selmi, Pauline Puteaux and Alice Othmani
PTSD in the Wild: A Video Database for Studying Post-Traumatic Stress Disorder Recognition in Unconstrained Environments
null
null
null
null
cs.HC cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Post-traumatic stress disorder (PTSD) is a chronic and debilitating mental condition that develops in response to catastrophic life events, such as military combat, sexual assault, and natural disasters. PTSD is characterized by flashbacks of past traumatic events, intrusive thoughts, nightmares, hypervigilance, and sleep disturbance, all of which affect a person's life and lead to considerable social, occupational, and interpersonal dysfunction. PTSD is diagnosed by medical professionals using a self-assessment questionnaire of PTSD symptoms as defined in the Diagnostic and Statistical Manual of Mental Disorders (DSM). In this paper, and for the first time, we collected, annotated, and prepared for public distribution a new video database for automatic PTSD diagnosis, called the PTSD in the Wild dataset. The database exhibits "natural" and large variability in acquisition conditions, with different poses, facial expressions, lighting, focus, resolution, ages, genders, races, occlusions and backgrounds. In addition to describing the details of the dataset collection, we provide a benchmark for evaluating computer vision and machine learning based approaches on the PTSD in the Wild dataset. Furthermore, we propose and evaluate a deep learning based approach for PTSD detection with respect to the given benchmark. The proposed approach shows very promising results. Interested researchers can download a copy of the PTSD in the Wild dataset from: http://www.lissi.fr/PTSD-Dataset/
[ { "version": "v1", "created": "Wed, 28 Sep 2022 13:30:26 GMT" } ]
2022-09-29T00:00:00
[ [ "Sawadogo", "Moctar Abdoul Latif", "" ], [ "Pala", "Furkan", "" ], [ "Singh", "Gurkirat", "" ], [ "Selmi", "Imen", "" ], [ "Puteaux", "Pauline", "" ], [ "Othmani", "Alice", "" ] ]
new_dataset
0.999438
2209.14130
Pino Caballero-Gil
J Suárez-Armas, P Caballero-Gil, C Caballero-Gil
Video surveillance robot powered by raspberry pi
null
Proceedings of the 1st International Conference on Internet of Things and Machine Learning 1-4, 2017
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Video surveillance systems are increasingly used in different fields, from domestic to commercial environments. Current systems are being improved and complemented with new elements and functionalities. This paper proposes the design of a video surveillance robot based on the Raspberry Pi with the ability to detect motion, send video in real time, detect fire, and be controlled remotely over the Internet. In order to check the information received from the robot, as well as the video sent, a client application has been developed for any device with an Internet connection. In addition, in order to protect the information obtained by the robot, a secure system is proposed that uses different security mechanisms to achieve this goal.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 14:27:54 GMT" } ]
2022-09-29T00:00:00
[ [ "Suárez-Armas", "J", "" ], [ "Caballero-Gil", "P", "" ], [ "Caballero-Gil", "C", "" ] ]
new_dataset
0.998305
2209.14138
He Li
He Li, Tingnan Zhang, Wenhao Yu, Patrick M. Wensing
Versatile Real-Time Motion Synthesis via Kino-Dynamic MPC with Hybrid-Systems DDP
7 pages, 7 figures, submitted to 2023 IEEE International Conference on Robotics and Automation (ICRA)
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Specialized motions such as jumping are often achieved on quadruped robots by solving a trajectory optimization problem once and executing the trajectory using a tracking controller. This approach is in parallel with Model Predictive Control (MPC) strategies that commonly control regular gaits via online re-planning. In this work, we present a nonlinear MPC (NMPC) technique that unlocks on-the-fly re-planning of specialized motion skills and regular locomotion within a unified framework. The NMPC reasons about a hybrid kinodynamic model, and is solved using a variant of a constrained Differential Dynamic Programming (DDP) solver. The proposed NMPC enables the robot to perform a variety of agile skills like jumping, bounding, and trotting, and the rapid transition between these skills. We evaluated the proposed algorithm with three challenging motion sequences that combine multiple agile skills, on two quadruped platforms, Unitree A1, and MIT Mini Cheetah, showing its effectiveness and generality.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 14:35:00 GMT" } ]
2022-09-29T00:00:00
[ [ "Li", "He", "" ], [ "Zhang", "Tingnan", "" ], [ "Yu", "Wenhao", "" ], [ "Wensing", "Patrick M.", "" ] ]
new_dataset
0.962085
2209.14142
Toms Bergmanis
Toms Bergmanis and Mārcis Pinnis
From Zero to Production: Baltic-Ukrainian Machine Translation Systems to Aid Refugees
To be published in Baltic HLT 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
In this paper, we examine the development and usage of six low-resource machine translation systems translating between the Ukrainian language and each of the official languages of the Baltic states. We developed these systems in reaction to the escalating Ukrainian refugee crisis caused by the Russian military aggression in Ukraine in the hope that they might be helpful for refugees and public administrations. Now, two months after MT systems were made public, we analyze their usage patterns and statistics. Our findings show that the Latvian-Ukrainian and Lithuanian-Ukrainian systems are integrated into the public services of Baltic states, leading to more than 127 million translated sentences for the Lithuanian-Ukrainian system. Motivated by these findings, we further enhance our MT systems by better Ukrainian toponym translation and publish an improved version of the Lithuanian-Ukrainian system.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 14:46:01 GMT" } ]
2022-09-29T00:00:00
[ [ "Bergmanis", "Toms", "" ], [ "Pinnis", "Mārcis", "" ] ]
new_dataset
0.997754
2209.14147
Yitong Wang
Yitong Wang, Jun Zhao
Mobile Edge Computing, Metaverse, 6G Wireless Communications, Artificial Intelligence, and Blockchain: Survey and Their Convergence
This paper appears in the Proceedings of 2022 IEEE 8th World Forum on Internet of Things (WF-IoT). Please feel free to contact us for questions or remarks
null
null
null
cs.DC cs.AI cs.LG cs.SE
http://creativecommons.org/licenses/by/4.0/
With the advances of the Internet of Things (IoT) and 5G/6G wireless communications, the paradigms of mobile computing have developed dramatically in recent years, from centralized mobile cloud computing to distributed fog computing and mobile edge computing (MEC). MEC pushes compute-intensive assignments to the edge of the network and brings resources as close to the endpoints as possible, addressing the shortcomings of mobile devices with regard to storage space, resource optimisation, computational performance and efficiency. Compared to cloud computing, as the distributed and closer infrastructure, the convergence of MEC with other emerging technologies, including the Metaverse, 6G wireless communications, artificial intelligence (AI), and blockchain, also solves the problems of network resource allocation, increased network load, and latency requirements. Accordingly, this paper investigates the computational paradigms used to meet the stringent requirements of modern applications. The application scenarios of MEC in mobile augmented reality (MAR) are provided. Furthermore, this survey presents the motivation for MEC-based Metaverse and introduces the applications of MEC to the Metaverse. Particular emphasis is given to a set of technical fusions mentioned above, e.g., 6G with the MEC paradigm, MEC strengthened by blockchain, etc.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 14:54:06 GMT" } ]
2022-09-29T00:00:00
[ [ "Wang", "Yitong", "" ], [ "Zhao", "Jun", "" ] ]
new_dataset
0.96952
2209.14195
Pino Caballero-Gil
I Santos-González, A Rivero-García, P Caballero-Gil
Secure Indoor Location for Airport Environments
null
2018 4th International Conference on Big Data Innovations and Applications (Innovate-Data), 2018
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
This work presents a novel secure solution based on inertial measurement units (IMUs) to provide indoor location and positioning in airports. The use of different technologies makes it possible to locate people precisely in this kind of indoor place, where the use of GPS is not possible. The system has been developed with low cost in mind and with a view to a possible future expansion of this kind of system to improve users' Quality of Service in airports. The use of QR codes and low-cost IMU devices through people's smartphones ensures this premise. An Android application has been developed to show the applicability and performance of the system. Security is essential in this kind of system, which must prevent the IMU devices from being traceable while users are using them. To solve this problem, the FourQ elliptic curve is used to generate a shared key using the elliptic-curve Diffie-Hellman protocol. The key generated with FourQ is then used to encrypt all communications with the SNOW 3G stream cipher. The developed system offers promising results.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 16:07:41 GMT" } ]
2022-09-29T00:00:00
[ [ "Santos-González", "I", "" ], [ "Rivero-García", "A", "" ], [ "Caballero-Gil", "P", "" ] ]
new_dataset
0.999185
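The key-agreement-then-encrypt flow described above can be sketched with the Python cryptography package. FourQ and SNOW 3G are not available there, so X25519 and ChaCha20-Poly1305 substitute for the same roles; this illustrates the flow, not the paper's exact primitives:

```python
# Sketch of the scheme's flow with substituted primitives: an elliptic-curve
# Diffie-Hellman exchange (X25519 standing in for FourQ) derives a shared
# key, which then keys a stream cipher (ChaCha20-Poly1305 standing in for
# SNOW 3G) protecting location messages.
import os
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.ciphers.aead import ChaCha20Poly1305

phone_priv, server_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
shared = phone_priv.exchange(server_priv.public_key())   # same on both sides
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"indoor-location").derive(shared)

nonce = os.urandom(12)
ciphertext = ChaCha20Poly1305(key).encrypt(nonce, b"position: gate 42", None)
plaintext = ChaCha20Poly1305(key).decrypt(nonce, ciphertext, None)
```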
2209.14200
Pino Caballero-Gil
N García-Moreno, P Caballero-Gil, C Caballero-Gil, J Molina-Gil
Building an Ethereum-Based Decentralized Vehicle Rental System
null
Computational Intelligence in Security for Information Systems Conference, 45-53, 2019
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Blockchain technology, beyond cryptocurrencies, is called to be the new information exchange ecosystem due to its unique properties, such as immutability and transparency. The main objective of this work is to introduce the design of a decentralized rental system, which leverages smart contracts and the Ethereum public blockchain. The work started from an exhaustive investigation of the Ethereum platform, with emphasis on its cryptography and the technology behind the platform. In order to test the proposed scheme in realistic use, a web application for vehicle rental has been implemented. The application covers the entire vehicle rental process offered in traditional web applications, giving users more autonomy and ease of use. Following Ethereum application development guidelines, all business logic is located in the smart contracts deployed on the Ethereum network, and these contracts control the entire customer vehicle rental process. While this is a work in progress, the results obtained in the first proof of concept have been very promising.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 16:16:34 GMT" } ]
2022-09-29T00:00:00
[ [ "García-Moreno", "N", "" ], [ "Caballero-Gil", "P", "" ], [ "Caballero-Gil", "C", "" ], [ "Molina-Gil", "J", "" ] ]
new_dataset
0.998115
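A client-side call into such a rental contract might look like the following web3.py sketch; the RPC endpoint, contract address, ABI and rentVehicle function are hypothetical, since the paper does not publish its interface:

```python
# Hypothetical sketch of a client calling a vehicle-rental smart contract
# with web3.py. The RPC URL, address, ABI and function name are placeholders.
from web3 import Web3

w3 = Web3(Web3.HTTPProvider("https://sepolia.example/rpc"))  # placeholder RPC
abi = [{"name": "rentVehicle", "type": "function",
        "stateMutability": "payable",
        "inputs": [{"name": "vehicleId", "type": "uint256"}],
        "outputs": []}]
rental = w3.eth.contract(address="0x0000000000000000000000000000000000000000",
                         abi=abi)
tx = rental.functions.rentVehicle(7).transact(
    {"from": w3.eth.accounts[0], "value": w3.to_wei(0.05, "ether")})
receipt = w3.eth.wait_for_transaction_receipt(tx)
```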
2209.14213
Angelo Marotta
Angelo Marotta
On abelian and cyclic group codes
14 pages
null
null
null
cs.IT math.GR math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We determine a condition on the minimum Hamming weight of some special abelian group codes and, as a consequence of this result, we establish that any such code is, up to permutational equivalence, a subspace of the direct sum of $s$ copies of the repetition code of length $t$, for some suitable positive integers $s$ and $t$. Moreover, we provide a complete characterisation of permutation automorphisms of the linear code $C=\bigoplus_{i=1}^{s}Rep_{t}(\mathbb{F}_{q})$ and we establish that such a code is an abelian group code, for every pair of integers $s,t\geq1$. Finally, in a similar fashion as for abelian group codes, we give an equivalent characterisation of cyclic group codes.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 16:40:12 GMT" } ]
2022-09-29T00:00:00
[ [ "Marotta", "Angelo", "" ] ]
new_dataset
0.992526
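For concreteness, the parameters of the direct-sum code named in the abstract follow from standard coding-theory facts (this derivation is not taken from the paper):

```latex
% Rep_t(F_q) is a [t, 1, t]_q code. A direct sum of linear codes has length
% and dimension equal to the sums over its parts, and minimum distance equal
% to the minimum over its parts, hence:
\[
  C \;=\; \bigoplus_{i=1}^{s}\mathrm{Rep}_{t}(\mathbb{F}_{q})
  \quad\text{is an } [st,\, s,\, t]_q \text{ code, with generator matrix }
  G \;=\; I_{s}\otimes(\underbrace{1\;\cdots\;1}_{t}).
\]
```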
2209.14218
Alexander Mathis
Alberto Silvio Chiappa and Alessandro Marin Vargas and Alexander Mathis
DMAP: a Distributed Morphological Attention Policy for Learning to Locomote with a Changing Body
null
null
null
null
cs.RO cs.AI q-bio.NC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Biological and artificial agents need to deal with constant changes in the real world. We study this problem in four classical continuous control environments, augmented with morphological perturbations. Learning to locomote when the length and the thickness of different body parts vary is challenging, as the control policy is required to adapt to the morphology to successfully balance and advance the agent. We show that a control policy based on the proprioceptive state performs poorly with highly variable body configurations, while an (oracle) agent with access to a learned encoding of the perturbation performs significantly better. We introduce DMAP, a biologically-inspired, attention-based policy network architecture. DMAP combines independent proprioceptive processing, a distributed policy with individual controllers for each joint, and an attention mechanism, to dynamically gate sensory information from different body parts to different controllers. Despite not having access to the (hidden) morphology information, DMAP can be trained end-to-end in all the considered environments, overall matching or surpassing the performance of an oracle agent. Thus DMAP, implementing principles from biological motor control, provides a strong inductive bias for learning challenging sensorimotor tasks. Overall, our work corroborates the power of these principles in challenging locomotion tasks.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 16:45:35 GMT" } ]
2022-09-29T00:00:00
[ [ "Chiappa", "Alberto Silvio", "" ], [ "Vargas", "Alessandro Marin", "" ], [ "Mathis", "Alexander", "" ] ]
new_dataset
0.999476
2209.14227
Ana de Almeida Borges
Ana de Almeida Borges, Mireia González Bedmar, Juan Conejero Rodríguez, Eduardo Hermo Reyes, Joaquim Casals Buñuel and Joost J. Joosten
FV Time: a formally verified Coq library
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
FV Time is a small-scale verification project developed in the Coq proof assistant using the Mathematical Components libraries. It is a library for managing conversions between time formats (UTC and timestamps), as well as commonly used functions for time arithmetic. As a library for time conversions, its novelty is the implementation of leap seconds, which are part of the UTC standard but usually not implemented in existing libraries. Since the verified functions of FV Time are reasonably simple yet non-trivial, it nicely illustrates our methodology for verifying software with Coq. In this paper we present a description of the project, emphasizing the main problems faced while developing the library, as well as some general-purpose solutions that were produced as by-products and may be used in other verification projects. These include a refinement package between proof-oriented MathComp numbers and computation-oriented primitive numbers from the Coq standard library, as well as a set of tactics to automatically prove certain arithmetical statements through brute-force computation.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 16:56:18 GMT" } ]
2022-09-29T00:00:00
[ [ "Borges", "Ana de Almeida", "" ], [ "Bedmar", "Mireia González", "" ], [ "Rodríguez", "Juan Conejero", "" ], [ "Reyes", "Eduardo Hermo", "" ], [ "Buñuel", "Joaquim Casals", "" ], [ "Joosten", "Joost J.", "" ] ]
new_dataset
0.998945
2209.14250
Gautam Choudhary
Atanu R. Sinha, Gautam Choudhary, Mansi Agarwal, Shivansh Bindal, Abhishek Pande, Camille Girabawe
B2B Advertising: Joint Dynamic Scoring of Account and Users
Published at KDD Workshop: AdKDD 2022
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When a business sells to another business (B2B), the buying business is represented by a group of individuals, termed an account, who collectively decide whether to buy. The seller advertises to each individual and interacts with them, mostly by digital means. The sales cycle is long, most often over a few months. Individuals belonging to an account are heterogeneous in how they seek information, and hence the seller needs to score the interest of each individual over a long horizon to decide which individuals must be reached and when. Moreover, the buy decision rests with the account and must be scored to project the likelihood of purchase, a decision that is subject to change all the way up to the actual decision, emblematic of group decision making. We score the decisions of the account and its individuals in a dynamic manner. Dynamic scoring allows the opportunity to influence different individual members at different time points over the long horizon. The dataset contains behavior logs of each individual's communication activities with the seller, but there are no data on the consultations among individuals that result in the decision. Using neural network architectures, we propose several ways to aggregate information from individual members' activities to predict the group's collective decision. Multiple evaluations find strong model performance.
[ { "version": "v1", "created": "Wed, 28 Sep 2022 17:10:03 GMT" } ]
2022-09-29T00:00:00
[ [ "Sinha", "Atanu R.", "" ], [ "Choudhary", "Gautam", "" ], [ "Agarwal", "Mansi", "" ], [ "Bindal", "Shivansh", "" ], [ "Pande", "Abhishek", "" ], [ "Girabawe", "Camille", "" ] ]
new_dataset
0.996089
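One plausible reading of the aggregation step in the B2B abstract, pooling per-individual activity embeddings into a single account-level purchase score, is attention pooling; the sketch below is illustrative, with placeholder dimensions, not the paper's architecture:

```python
# Illustrative attention-pooling sketch: aggregate per-individual activity
# embeddings into one account representation, then score the buy decision.
# All dimensions and weights are placeholders, not the paper's model.
import numpy as np

rng = np.random.default_rng(0)
members = rng.normal(size=(5, 16))        # 5 individuals x 16-dim embeddings
w_attn = rng.normal(size=16)              # attention scoring vector
w_out = rng.normal(size=16)               # decision scoring vector

scores = members @ w_attn                 # relevance of each individual
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                      # softmax attention weights
account = alpha @ members                 # weighted account embedding
p_buy = 1.0 / (1.0 + np.exp(-(account @ w_out)))  # sigmoid purchase score
print(f"P(account buys) = {p_buy:.3f}")
```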
2209.14284
Qiuyu Chen
Zoey Qiuyu Chen, Karl Van Wyk, Yu-Wei Chao, Wei Yang, Arsalan Mousavian, Abhishek Gupta, Dieter Fox
DexTransfer: Real World Multi-fingered Dexterous Grasping with Minimal Human Demonstrations
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Teaching a multi-fingered dexterous robot to grasp objects in the real world has been a challenging problem due to its high dimensional state and action space. We propose a robot-learning system that can take a small number of human demonstrations and learn to grasp unseen object poses given partially occluded observations. Our system leverages a small motion capture dataset and generates a large dataset with diverse and successful trajectories for a multi-fingered robot gripper. By adding domain randomization, we show that our dataset provides robust grasping trajectories that can be transferred to a policy learner. We train a dexterous grasping policy that takes the point clouds of the object as input and predicts continuous actions to grasp objects from different initial robot states. We evaluate the effectiveness of our system on a 22-DoF floating Allegro Hand in simulation and a 23-DoF Allegro robot hand with a KUKA arm in real world. The policy learned from our dataset can generalize well on unseen object poses in both simulation and the real world
[ { "version": "v1", "created": "Wed, 28 Sep 2022 17:51:49 GMT" } ]
2022-09-29T00:00:00
[ [ "Chen", "Zoey Qiuyu", "" ], [ "Van Wyk", "Karl", "" ], [ "Chao", "Yu-Wei", "" ], [ "Yang", "Wei", "" ], [ "Mousavian", "Arsalan", "" ], [ "Gupta", "Abhishek", "" ], [ "Fox", "Dieter", "" ] ]
new_dataset
0.999818
2104.01242
Peter Turney
Peter D. Turney
Evolution of Symbiosis in the Game of Life: Three Characteristics of Successful Symbiotes
null
null
null
null
cs.NE nlin.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In past work, we developed a computational model of the evolution of symbiotic entities (Model-S), based on Conway's Game of Life. In this article, we examine three trends that biologists have observed in the evolution of symbiotes. (1) Management: If one partner is able to control the symbiotic relation, this control can reduce conflict; thus, evolutionary selection favours symbiotes that have a manager. (2) Mutualism: Although partners in a symbiote often have conflicting needs, evolutionary selection favours symbiotes in which partners are better off together inside the symbiote than they would be as individuals outside of the symbiote. (3) Interaction: Repeated interaction among partners in symbiosis tends to promote increasing fitness due to evolutionary selection. We have added new components to Model-S that allow us to observe these three trends in runs of Model-S. The new components are analogous to the practice of staining cells in biology research, to reveal patterns that are not usually visible. When we measure the fitness of a symbiote by making it compete with other symbiotes, we find that fitter symbiotes have significantly more management, mutualism, and interaction than less fit symbiotes. These results confirm the trends observed in nature by biologists. Model-S allows biologists to study these evolutionary trends and other characteristics of symbiosis in ways that are not tractable with living organisms.
[ { "version": "v1", "created": "Fri, 2 Apr 2021 21:23:48 GMT" }, { "version": "v2", "created": "Fri, 20 Aug 2021 19:26:26 GMT" }, { "version": "v3", "created": "Tue, 11 Jan 2022 21:54:23 GMT" }, { "version": "v4", "created": "Thu, 30 Jun 2022 20:20:06 GMT" }, { "version": "v5", "created": "Tue, 27 Sep 2022 00:20:57 GMT" } ]
2022-09-28T00:00:00
[ [ "Turney", "Peter D.", "" ] ]
new_dataset
0.979217
2107.00613
Fengmin Zhu
Fengmin Zhu and Fei He
EqFix: Fixing LaTeX Equation Errors by Examples
null
null
null
null
cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
LaTeX is a widely-used document preparation system. Its powerful ability in mathematical equation editing is perhaps the main reason for its popularity in academia. Sometimes, however, even an expert user may spend much time fixing an erroneous equation. In this paper, we present EqFix, a synthesis-based repairing system for LaTeX equations. It employs a set of fixing rules and can suggest possible repairs for common errors in LaTeX equations. A domain-specific language is proposed for formally expressing the fixing rules. The fixing rules can be automatically synthesized from a set of input-output examples. An extension of relaxers is also introduced to enhance the practicality of EqFix. We evaluate EqFix on real-world examples and find that it can synthesize rules with high generalization ability. Compared with a state-of-the-art string transformation synthesizer, EqFix solved 37% more cases and spent less than half of its synthesis time.
[ { "version": "v1", "created": "Thu, 1 Jul 2021 17:04:56 GMT" }, { "version": "v2", "created": "Tue, 27 Sep 2022 12:14:33 GMT" } ]
2022-09-28T00:00:00
[ [ "Zhu", "Fengmin", "" ], [ "He", "Fei", "" ] ]
new_dataset
0.993275
2110.07954
Hans Wang
Gordon King, Hans Wang
HTTPA: HTTPS Attestable Protocol
10 pages, 8 figures
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Hypertext Transfer Protocol Secure (HTTPS) protocol has become an integral part of modern Internet technology. Currently, it is the primary protocol for commercialized web applications. It can provide a fast, secure connection with a certain level of privacy and integrity, and it has become a basic assumption of most web services on the Internet. However, HTTPS alone cannot provide security assurances for request data during computation, so the computing environment remains exposed to risks and vulnerabilities. A hardware-based trusted execution environment (TEE) such as Intel Software Guard Extensions (Intel SGX) or Intel Trust Domain Extensions (Intel TDX) provides in-memory encryption to help protect runtime computation and reduce the risk of private information being illegally leaked or modified. (Note that we use SGX as the example for illustration in the following text.) The central concept of SGX is computation inside an enclave, a protected environment that encrypts the code and data pertaining to a security-sensitive computation. In addition, SGX provides security assurances via remote attestation to the web client, allowing it to verify the TCB identity, vendor identity, and verification identity. Here, we propose an HTTP protocol extension, called HTTPS Attestable (HTTPA), which adds a remote attestation process to the HTTPS protocol to address privacy and security concerns on the web and the establishment of trust over the Internet. With HTTPA, we can provide security assurances for verification, establish trustworthiness with web services, and ensure the integrity of request handling for web users. We expect remote attestation to become a new trend adopted to reduce the security risks of web services. We propose the HTTPA protocol to unify web attestation and access to Internet services in a standard and efficient way.
[ { "version": "v1", "created": "Fri, 15 Oct 2021 09:14:03 GMT" }, { "version": "v2", "created": "Thu, 27 Jan 2022 22:11:43 GMT" }, { "version": "v3", "created": "Mon, 26 Sep 2022 23:14:13 GMT" } ]
2022-09-28T00:00:00
[ [ "King", "Gordon", "" ], [ "Wang", "Hans", "" ] ]
new_dataset
0.998925
2112.10869
Mohammadali Mohammadi
Mohammadali Mohammadi and Hien Quoc Ngo and Michail Matthaiou
Cell-Free Massive MIMO Meets OTFS Modulation
4 figures, Accepted in IEEE Transactions on Communications
null
null
null
cs.IT cs.PF math.IT
http://creativecommons.org/licenses/by/4.0/
We provide the first-ever performance evaluation of orthogonal time frequency space (OTFS) modulation in cell-free massive multiple-input multiple-output (MIMO) systems. To investigate the trade-off between performance and overhead, we apply embedded pilot-aided and superimposed pilot-based channel estimation methods. We then derive closed-form expressions for the individual user downlink and uplink spectral efficiencies as functions of the numbers of APs, users and delay-Doppler domain channel estimate parameters. Based on these analytical results, we also present new scaling laws that the AP's and user's transmit power should satisfy to sustain a desirable quality of service. It is found that when the number of APs, $M_a$, grows without bound, we can reduce the transmit power of each user and AP proportionally to $1/M_a$ and $1/M_a^2$, respectively, during the uplink and downlink phases. We compare the OTFS performance with that of orthogonal frequency division multiplexing (OFDM) under high-mobility conditions. Our findings reveal that with shadowing correlation, OTFS modulation with embedded pilot-based channel estimation provides a $30$-fold gain over the OFDM counterpart in terms of the $95\%$-likely per-user downlink rate. Finally, with superimposed pilot-based channel estimation, the increase in per-user throughput is more pronounced at the median rates over the correlated shadowing channels.
[ { "version": "v1", "created": "Mon, 20 Dec 2021 21:27:14 GMT" }, { "version": "v2", "created": "Tue, 27 Sep 2022 13:17:25 GMT" } ]
2022-09-28T00:00:00
[ [ "Mohammadi", "Mohammadali", "" ], [ "Ngo", "Hien Quoc", "" ], [ "Matthaiou", "Michail", "" ] ]
new_dataset
0.996187
2201.02279
Felix Wimbauer
Felix Wimbauer, Shangzhe Wu, Christian Rupprecht
De-rendering 3D Objects in the Wild
null
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022, pp. 18490-18499
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With increasing focus on augmented and virtual reality applications (XR) comes the demand for algorithms that can lift objects from images and videos into representations that are suitable for a wide variety of related 3D tasks. Large-scale deployment of XR devices and applications means that we cannot solely rely on supervised learning, as collecting and annotating data for the unlimited variety of objects in the real world is infeasible. We present a weakly supervised method that is able to decompose a single image of an object into shape (depth and normals), material (albedo, reflectivity and shininess) and global lighting parameters. For training, the method only relies on a rough initial shape estimate of the training objects to bootstrap the learning process. This shape supervision can come for example from a pretrained depth network or - more generically - from a traditional structure-from-motion pipeline. In our experiments, we show that the method can successfully de-render 2D images into a decomposed 3D representation and generalizes to unseen object categories. Since in-the-wild evaluation is difficult due to the lack of ground truth data, we also introduce a photo-realistic synthetic test set that allows for quantitative evaluation.
[ { "version": "v1", "created": "Thu, 6 Jan 2022 23:50:09 GMT" }, { "version": "v2", "created": "Tue, 27 Sep 2022 14:36:16 GMT" } ]
2022-09-28T00:00:00
[ [ "Wimbauer", "Felix", "" ], [ "Wu", "Shangzhe", "" ], [ "Rupprecht", "Christian", "" ] ]
new_dataset
0.996675
2202.00868
Youngsun Wi
Youngsun Wi, Pete Florence, Andy Zeng, Nima Fazeli
VIRDO: Visio-tactile Implicit Representations of Deformable Objects
This work has been accepted to ICRA 2022
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deformable object manipulation requires computationally efficient representations that are compatible with robotic sensing modalities. In this paper, we present VIRDO: an implicit, multi-modal, and continuous representation for deformable-elastic objects. VIRDO operates directly on visual (point cloud) and tactile (reaction forces) modalities and learns rich latent embeddings of contact locations and forces to predict object deformations subject to external contacts. Here, we demonstrate VIRDO's ability to: i) produce high-fidelity cross-modal reconstructions with dense unsupervised correspondences, ii) generalize to unseen contact formations, and iii) perform state estimation with partial visio-tactile feedback.
[ { "version": "v1", "created": "Wed, 2 Feb 2022 04:10:23 GMT" }, { "version": "v2", "created": "Mon, 26 Sep 2022 21:17:54 GMT" } ]
2022-09-28T00:00:00
[ [ "Wi", "Youngsun", "" ], [ "Florence", "Pete", "" ], [ "Zeng", "Andy", "" ], [ "Fazeli", "Nima", "" ] ]
new_dataset
0.999737
2203.04541
Zhuozhu Jian
Zhuozhu Jian, Zihong Lu, Xiao Zhou, Bin Lan, Anxing Xiao, Xueqian Wang, Bin Liang
PUTN: A Plane-fitting based Uneven Terrain Navigation Framework
Accepted by IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) 2022
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous navigation of ground robots has been widely used in indoor structured 2D environments, but there are still many challenges in outdoor 3D unstructured environments, especially on rough, uneven terrains. This paper proposes a plane-fitting based uneven terrain navigation framework (PUTN) to solve this problem. The implementation of PUTN is divided into three steps. First, based on Rapidly-exploring Random Trees (RRT), an improved sampling-based algorithm called Plane Fitting RRT* (PF-RRT*) is proposed to obtain a sparse trajectory. Each sampling point corresponds to a custom traversability index and a plane fitted to the point cloud. These planes are connected in series to form a traversable strip. Second, Gaussian Process Regression is used to generate the traversability of the dense trajectory interpolated from the sparse trajectory, with the sampling tree used as the training set. Finally, local planning is performed using nonlinear model predictive control (NMPC). By adding the traversability index and uncertainty to the cost function, and adding obstacles generated from the real-time point cloud to the constraint function, a safe motion planning algorithm with smooth speed and strong robustness is obtained. Experiments in real scenarios are conducted to verify the effectiveness of the method. The source code is released for the reference of the community.
[ { "version": "v1", "created": "Wed, 9 Mar 2022 06:22:14 GMT" }, { "version": "v2", "created": "Tue, 27 Sep 2022 08:13:23 GMT" } ]
2022-09-28T00:00:00
[ [ "Jian", "Zhuozhu", "" ], [ "Lu", "Zihong", "" ], [ "Zhou", "Xiao", "" ], [ "Lan", "Bin", "" ], [ "Xiao", "Anxing", "" ], [ "Wang", "Xueqian", "" ], [ "Liang", "Bin", "" ] ]
new_dataset
0.975175
2204.03040
Georgia Maniati
Georgia Maniati, Alexandra Vioni, Nikolaos Ellinas, Karolos Nikitaras, Konstantinos Klapsas, June Sig Sung, Gunu Jho, Aimilios Chalamandaris and Pirros Tsiakoulis
SOMOS: The Samsung Open MOS Dataset for the Evaluation of Neural Text-to-Speech Synthesis
Accepted to INTERSPEECH 2022
null
10.21437/Interspeech.2022-10922
null
cs.SD cs.CL cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present the SOMOS dataset, the first large-scale mean opinion scores (MOS) dataset consisting of solely neural text-to-speech (TTS) samples. It can be employed to train automatic MOS prediction systems focused on the assessment of modern synthesizers, and can stimulate advancements in acoustic model evaluation. It consists of 20K synthetic utterances of the LJ Speech voice, a public domain speech dataset which is a common benchmark for building neural acoustic models and vocoders. Utterances are generated from 200 TTS systems including vanilla neural acoustic models as well as models which allow prosodic variations. An LPCNet vocoder is used for all systems, so that the samples' variation depends only on the acoustic models. The synthesized utterances provide balanced and adequate domain and length coverage. We collect MOS naturalness evaluations on 3 English Amazon Mechanical Turk locales and share practices leading to reliable crowdsourced annotations for this task. We provide baseline results of state-of-the-art MOS prediction models on the SOMOS dataset and show the limitations that such models face when assigned to evaluate TTS utterances.
[ { "version": "v1", "created": "Wed, 6 Apr 2022 18:45:20 GMT" }, { "version": "v2", "created": "Wed, 24 Aug 2022 14:24:57 GMT" } ]
2022-09-28T00:00:00
[ [ "Maniati", "Georgia", "" ], [ "Vioni", "Alexandra", "" ], [ "Ellinas", "Nikolaos", "" ], [ "Nikitaras", "Karolos", "" ], [ "Klapsas", "Konstantinos", "" ], [ "Sung", "June Sig", "" ], [ "Jho", "Gunu", "" ], [ "Chalamandaris", "Aimilios", "" ], [ "Tsiakoulis", "Pirros", "" ] ]
new_dataset
0.99979
2205.04643
Debajyoti Mondal
J. Mark Keil, Debajyoti Mondal, Ehsan Moradi
Burning Number for the Points in the Plane
null
null
null
null
cs.CG cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The burning process on a graph $G$ starts with a single burnt vertex, and at each subsequent step, burns the neighbors of the currently burnt vertices, as well as one other unburnt vertex. The burning number of $G$ is the smallest number of steps required to burn all the vertices of the graph. In this paper, we examine the problem of computing the burning number in a geometric setting. The input is a set of points $P$ in the Euclidean plane. The burning process starts with a single burnt point, and at each subsequent step, burns all the points that are within a distance of one unit from the currently burnt points and one other unburnt point. The burning number of $P$ is the smallest number of steps required to burn all the points of $P$. We call this variant \emph{point burning}. We consider another variant called \emph{anywhere burning}, where we are allowed to burn any point of the plane. We show that point burning and anywhere burning problems are both NP-complete, but $(2+\varepsilon)$ approximable for every $\varepsilon>0$. Moreover, if we put a restriction on the number of burning sources that can be used, then the anywhere burning problem becomes NP-hard to approximate within a factor of $\frac{2}{\sqrt{3}}-\varepsilon$.
[ { "version": "v1", "created": "Tue, 10 May 2022 03:20:13 GMT" }, { "version": "v2", "created": "Mon, 26 Sep 2022 21:12:32 GMT" } ]
2022-09-28T00:00:00
[ [ "Keil", "J. Mark", "" ], [ "Mondal", "Debajyoti", "" ], [ "Moradi", "Ehsan", "" ] ]
new_dataset
0.995924
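The burning processes in the record above are defined precisely enough to simulate. The sketch below evaluates one candidate burning sequence under the "point burning" variant; the hard (NP-complete) part, minimizing over all possible sequences, is deliberately left out. Function and variable names are ours, not the paper's.

```python
import numpy as np

def point_burning_steps(points: np.ndarray, sources: list) -> int:
    """Simulate the point-burning process on a planar point set.

    points  : (n, 2) array of coordinates.
    sources : point indices chosen as new burning sources, one per step.
    Returns the step at which all points are burnt, or -1 if the given
    sequence is too short to burn everything.
    """
    burnt = np.zeros(len(points), dtype=bool)
    for step, s in enumerate(sources, start=1):
        if step > 1:
            # Fire spreads one unit from every currently burnt point.
            d = np.linalg.norm(points[:, None, :] - points[burnt][None, :, :], axis=2)
            burnt |= d.min(axis=1) <= 1.0
        burnt[s] = True  # ignite one additional point
        if burnt.all():
            return step
    return -1

# Toy usage: 30 random points in a 4x4 square, one candidate sequence.
pts = np.random.default_rng(1).random((30, 2)) * 4
print(point_burning_steps(pts, sources=[0, 7, 21, 3]))  # -1 if 4 steps don't suffice
```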
2205.06093
Baudouin Denis de Senneville PhD
Vincent Estrade, Michel Daudon, Emmanuel Richard, Jean-Christophe Bernhard, Franck Bladou, Gregoire Robert, Laurent Facq, Baudouin Denis de Senneville
Deep morphological recognition of kidney stones using intra-operative endoscopic digital videos
16 pages, 4 figures, 3 tables
Physics in Medicine & Biology 2022
10.1088/1361-6560/ac8592
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The collection and analysis of kidney stone morphological criteria are essential for an aetiological diagnosis of stone disease. However, in-situ laser-based fragmentation of urinary stones, which is now the most established surgical intervention, may destroy the morphology of the targeted stone. In the current study, we assess the performance and added value of processing complete digital endoscopic video sequences for the automatic recognition of stone morphological features during a standard-of-care intra-operative session. To this end, a computer-aided video classifier was developed to predict in-situ the morphology of a stone using an intra-operative digital endoscopic video acquired in a clinical setting. The proposed technique was evaluated on pure (i.e., containing one morphology) and mixed (i.e., containing at least two morphologies) stones involving "Ia/Calcium Oxalate Monohydrate (COM)", "IIb/Calcium Oxalate Dihydrate (COD)" and "IIIb/Uric Acid (UA)" morphologies. 71 digital endoscopic videos (50 exhibited only one morphological type and 21 displayed two) were analyzed using the proposed video classifier (56840 frames processed in total). Using the proposed approach, diagnostic performances (averaged over both pure and mixed stone types) were as follows: balanced accuracy=88%, sensitivity=80%, specificity=95%, precision=78% and F1-score=78%. The obtained results demonstrate that AI applied to digital endoscopic video sequences is a promising tool for collecting morphological information during the time-course of the stone fragmentation process without resorting to any human intervention for stone delineation or the selection of good-quality steady frames. To this end, irrelevant image information must be removed from the prediction process at both the frame and pixel levels, which is now feasible thanks to the use of AI-dedicated networks.
[ { "version": "v1", "created": "Thu, 12 May 2022 13:58:57 GMT" } ]
2022-09-28T00:00:00
[ [ "Estrade", "Vincent", "" ], [ "Daudon", "Michel", "" ], [ "Richard", "Emmanuel", "" ], [ "Bernhard", "Jean-Christophe", "" ], [ "Bladou", "Franck", "" ], [ "Robert", "Gregoire", "" ], [ "Facq", "Laurent", "" ], [ "de Senneville", "Baudouin Denis", "" ] ]
new_dataset
0.979118
2206.12628
Yongzhi Fan
Yongzhi Fan, Xin Du, Lun Luo, Jizhong Shen
FreSCo: Frequency-Domain Scan Context for LiDAR-based Place Recognition with Translation and Rotation Invariance
8 pages, 10 figures. Accepted for ICARCV 2022
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Place recognition plays a crucial role in re-localization and loop closure detection tasks for robots and vehicles. This paper seeks a well-defined global descriptor for LiDAR-based place recognition. Compared to local descriptors, global descriptors show remarkable performance in urban road scenes but are usually viewpoint-dependent. To this end, we propose a simple yet robust global descriptor dubbed FreSCo that decomposes the viewpoint difference at revisit and achieves both translation and rotation invariance by leveraging the Fourier Transform and a circular shift technique. Besides, a fast two-stage pose estimation method is proposed to estimate the relative pose after place retrieval by utilizing the compact 2D point clouds extracted from the original data. Experiments show that FreSCo exhibits superior performance compared to contemporaneous methods on sequences of different scenes from multiple datasets. Code will be publicly available at https://github.com/soytony/FreSCo.
[ { "version": "v1", "created": "Sat, 25 Jun 2022 11:47:35 GMT" }, { "version": "v2", "created": "Tue, 27 Sep 2022 14:51:53 GMT" } ]
2022-09-28T00:00:00
[ [ "Fan", "Yongzhi", "" ], [ "Du", "Xin", "" ], [ "Luo", "Lun", "" ], [ "Shen", "Jizhong", "" ] ]
new_dataset
0.998102
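The abstract above names two generic ingredients, Fourier-domain processing for translation invariance and a circular-shift search for rotation, without giving the exact descriptor. The sketch below illustrates those two ideas on bird's-eye-view (BEV) grids; it is a sketch under our own assumptions, not FreSCo's actual pipeline.

```python
import numpy as np

def yaw_align(desc_a: np.ndarray, desc_b: np.ndarray):
    """Rotation search between two polar BEV descriptors.

    desc_a, desc_b : (rings, sectors) grids. A yaw rotation of the
    sensor is (approximately) a circular shift along the sector axis,
    so all shifts are scored at once via FFT-based cross-correlation.
    Returns (best sector shift, similarity score at that shift).
    """
    fa = np.fft.fft(desc_a, axis=1)
    fb = np.fft.fft(desc_b, axis=1)
    corr = np.fft.ifft(fa * np.conj(fb), axis=1).real.sum(axis=0)
    shift = int(np.argmax(corr))
    return shift, float(corr[shift])

def translation_invariant_signature(bev: np.ndarray) -> np.ndarray:
    """Magnitude of the 2D Fourier transform of a Cartesian BEV grid:
    circularly shifting the grid changes only the phase, not this
    signature (real translations are only approximately circular)."""
    return np.abs(np.fft.fft2(bev))
```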
2207.02596
Shai Guendelman
Shaull Almagor, Shai Guendelman
Concurrent Games with Multiple Topologies
null
null
null
null
cs.GT cs.FL
http://creativecommons.org/licenses/by/4.0/
Concurrent multi-player games with $\omega$-regular objectives are a standard model for systems that consist of several interacting components, each with its own objective. The standard solution concept for such games is Nash Equilibrium, which is a "stable" strategy profile for the players. In many settings, the system is not fully observable by the interacting components, e.g., due to internal variables. Then, the interaction is modelled by a partial information game. Unfortunately, the problem of whether a partial information game has an NE is not known to be decidable. A particular setting of partial information arises naturally when processes are assigned IDs by the system, but these IDs are not known to the processes. Then, the processes have full information about the state of the system, but are uncertain of the effect of their actions on the transitions. We generalize the setting above and introduce Multi-Topology Games (MTGs) -- concurrent games with several possible topologies, where the players do not know which topology is actually used. We show that extending the concept of NE to these games can take several forms. To this end, we propose two notions of NE: Conservative NE, in which a player deviates if she can strictly add topologies to her winning set, and Greedy NE, where she deviates if she can win in a previously-losing topology. We study the properties of these NE, and show that the problem of whether a game admits them is decidable.
[ { "version": "v1", "created": "Wed, 6 Jul 2022 11:19:39 GMT" }, { "version": "v2", "created": "Sat, 13 Aug 2022 15:12:21 GMT" }, { "version": "v3", "created": "Tue, 27 Sep 2022 06:40:25 GMT" } ]
2022-09-28T00:00:00
[ [ "Almagor", "Shaull", "" ], [ "Guendelman", "Shai", "" ] ]
new_dataset
0.951347
2207.09258
Sahidul Islam
Sahidul Islam, Shanglin Zhou, Ran Ran, Yufang Jin, Wujie Wen, Caiwen Ding and Mimi Xie
EVE: Environmental Adaptive Neural Network Models for Low-power Energy Harvesting System
null
null
null
null
cs.LG cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
IoT devices are increasingly being implemented with neural network models to enable smart applications. Energy harvesting (EH) technology that harvests energy from the ambient environment is a promising alternative to batteries for powering those devices, due to the low maintenance cost and wide availability of the energy sources. However, the power provided by the energy harvester is low and has an intrinsic drawback of instability, since it varies with the ambient environment. This paper proposes EVE, an automated machine learning (autoML) co-exploration framework to search for desired multi-models with shared weights for energy harvesting IoT devices. Those shared models incur significantly reduced memory footprint with different levels of model sparsity, latency, and accuracy to adapt to environmental changes. An efficient on-device implementation architecture is further developed to efficiently execute each model on device. A run-time model extraction algorithm is proposed that retrieves an individual model with negligible overhead when a specific model mode is triggered. Experimental results show that the neural network models generated by EVE are on average 2.5X faster than baseline models without pruning and shared weights.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 20:53:46 GMT" }, { "version": "v2", "created": "Mon, 26 Sep 2022 18:44:22 GMT" } ]
2022-09-28T00:00:00
[ [ "Islam", "Sahidul", "" ], [ "Zhou", "Shanglin", "" ], [ "Ran", "Ran", "" ], [ "Jin", "Yufang", "" ], [ "Wen", "Wujie", "" ], [ "Ding", "Caiwen", "" ], [ "Xie", "Mimi", "" ] ]
new_dataset
0.950974
2207.11919
Seungjae Lee
Seungjae Lee, Hyungtae Lim, and Hyun Myung
Patchwork++: Fast and Robust Ground Segmentation Solving Partial Under-Segmentation Using 3D Point Cloud
This paper has been accepted for publication in the proceedings of IROS 2022
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
In the field of 3D perception using 3D LiDAR sensors, ground segmentation is an essential task for various purposes, such as traversable area detection and object recognition. Under these circumstances, several ground segmentation methods have been proposed. However, some limitations are still encountered. First, some ground segmentation methods require fine-tuning of parameters depending on the surroundings, which is excessively laborious and time-consuming. Moreover, even if the parameters are well adjusted, a partial under-segmentation problem can still emerge, which implies ground segmentation failures in some regions. Finally, ground segmentation methods typically fail to estimate an appropriate ground plane when the ground is above another structure, such as a retaining wall. To address these problems, we propose a robust ground segmentation method called Patchwork++, an extension of Patchwork. Patchwork++ exploits adaptive ground likelihood estimation (A-GLE) to calculate appropriate parameters adaptively based on the previous ground segmentation results. Moreover, temporal ground revert (TGR) alleviates a partial under-segmentation problem by using the temporary ground property. Also, region-wise vertical plane fitting (R-VPF) is introduced to segment the ground plane properly even if the ground is elevated with different layers. Finally, we present reflected noise removal (RNR) to eliminate virtual noise points efficiently based on the 3D LiDAR reflection model. We present qualitative and quantitative evaluations on the SemanticKITTI dataset. Our code is available at https://github.com/url-kaist/patchwork-plusplus
[ { "version": "v1", "created": "Mon, 25 Jul 2022 06:09:02 GMT" }, { "version": "v2", "created": "Tue, 27 Sep 2022 04:26:13 GMT" } ]
2022-09-28T00:00:00
[ [ "Lee", "Seungjae", "" ], [ "Lim", "Hyungtae", "" ], [ "Myung", "Hyun", "" ] ]
new_dataset
0.99975
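The concrete A-GLE, TGR, R-VPF and RNR mechanisms above cannot be reconstructed from the abstract alone, but the common building block, fitting a plane to a patch of points and testing how vertical its normal is, is standard. A hedged numpy sketch follows; all thresholds and names are our assumptions.

```python
import numpy as np

def fit_patch_plane(points: np.ndarray, up=(0.0, 0.0, 1.0), max_tilt_deg=30.0):
    """PCA plane fit for one terrain patch, in the spirit of the
    region-wise plane fitting described in the abstract above.

    points : (n, 3) LiDAR points inside the patch (n >= 3).
    Returns (normal, d, is_ground): the plane normal . x + d = 0 through
    the centroid, plus a flag rejecting patches whose normal tilts too
    far from vertical (e.g. walls mistaken for ground).
    """
    up = np.asarray(up, dtype=float)
    centroid = points.mean(axis=0)
    cov = np.cov((points - centroid).T)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    normal = eigvecs[:, 0]                   # smallest-variance direction
    if normal @ up < 0:
        normal = -normal                     # orient the normal upward
    d = -normal @ centroid
    tilt = np.degrees(np.arccos(np.clip(normal @ up, -1.0, 1.0)))
    return normal, d, tilt <= max_tilt_deg
```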
2207.12319
Piera Riccio
Piera Riccio and Bill Psomas and Francesco Galati and Francisco Escolano and Thomas Hofmann and Nuria Oliver
OpenFilter: A Framework to Democratize Research Access to Social Media AR Filters
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Augmented Reality or AR filters on selfies have become very popular on social media platforms for a variety of applications, including marketing, entertainment and aesthetics. Given the wide adoption of AR face filters and the importance of faces in our social structures and relations, there is increased interest by the scientific community to analyze the impact of such filters from a psychological, artistic and sociological perspective. However, there are few quantitative analyses in this area, mainly due to a lack of publicly available datasets of facial images with applied AR filters. The proprietary, closed nature of most social media platforms does not allow users, scientists and practitioners to access the code and the details of the available AR face filters. Scraping faces from these platforms to collect data is ethically unacceptable and should, therefore, be avoided in research. In this paper, we present OpenFilter, a flexible framework to apply AR filters available in social media platforms on existing large collections of human faces. Moreover, we share FairBeauty and B-LFW, two beautified versions of the publicly available FairFace and LFW datasets, and we outline insights derived from the analysis of these beautified datasets.
[ { "version": "v1", "created": "Tue, 19 Jul 2022 17:05:25 GMT" }, { "version": "v2", "created": "Mon, 1 Aug 2022 20:27:16 GMT" }, { "version": "v3", "created": "Tue, 27 Sep 2022 09:25:40 GMT" } ]
2022-09-28T00:00:00
[ [ "Riccio", "Piera", "" ], [ "Psomas", "Bill", "" ], [ "Galati", "Francesco", "" ], [ "Escolano", "Francisco", "" ], [ "Hofmann", "Thomas", "" ], [ "Oliver", "Nuria", "" ] ]
new_dataset
0.996593
2209.11035
Leandro Souza
Hugo Abonizio, Leandro Rodrigues de Souza, Roberto Lotufo, Rodrigo Nogueira
MonoByte: A Pool of Monolingual Byte-level Language Models
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The zero-shot cross-lingual ability of models pretrained on multilingual and even monolingual corpora has spurred many hypotheses to explain this intriguing empirical result. However, due to the costs of pretraining, most research uses public models whose pretraining methodology, such as the choice of tokenization, corpus size, and computational budget, might differ drastically. When researchers pretrain their own models, they often do so under a constrained budget, and the resulting models might underperform significantly compared to SOTA models. These experimental differences led to various inconsistent conclusions about the nature of the cross-lingual ability of these models. To help further research on the topic, we release 10 monolingual byte-level models rigorously pretrained under the same configuration with a large compute budget (equivalent to 420 days on a V100) and corpora that are 4 times larger than the original BERT's. Because they are tokenizer-free, the problem of unseen token embeddings is eliminated, thus allowing researchers to try a wider range of cross-lingual experiments in languages with different scripts. Additionally, we release two models pretrained on non-natural language texts that can be used in sanity-check experiments. Experiments on QA and NLI tasks show that our monolingual models achieve performance competitive with the multilingual one, and hence can serve to strengthen our understanding of cross-lingual transferability in language models.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 14:32:48 GMT" }, { "version": "v2", "created": "Tue, 27 Sep 2022 11:55:33 GMT" } ]
2022-09-28T00:00:00
[ [ "Abonizio", "Hugo", "" ], [ "de Souza", "Leandro Rodrigues", "" ], [ "Lotufo", "Roberto", "" ], [ "Nogueira", "Rodrigo", "" ] ]
new_dataset
0.995267
2209.11304
Erez Posner
Aniruddha Tamhane and Tse'ela Mida and Erez Posner and Moshe Bouhnik
Colonoscopy Landmark Detection using Vision Transformers
Accepted for publication at the Imaging Systems for GI Endoscopy (ISGIE) workshop at the 25th International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI 2022)
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Colonoscopy is a routine outpatient procedure used to examine the colon and rectum for any abnormalities including polyps, diverticula and narrowing of colon structures. A significant amount of the clinician's time is spent in post-processing snapshots taken during the colonoscopy procedure, for maintaining medical records or further investigation. Automating this step can save time and improve the efficiency of the process. In our work, we have collected a dataset of 120 colonoscopy videos and 2416 snapshots taken during the procedure, that have been annotated by experts. Further, we have developed a novel, vision-transformer based landmark detection algorithm that identifies key anatomical landmarks (the appendiceal orifice, ileocecal valve/cecum landmark and rectum retroflexion) from snapshots taken during colonoscopy. Our algorithm uses an adaptive gamma correction during preprocessing to maintain a consistent brightness for all images. We then use a vision transformer as the feature extraction backbone and a fully connected network based classifier head to categorize a given frame into four classes: the three landmarks or a non-landmark frame. We compare the vision transformer (ViT-B/16) backbone with ResNet-101 and ConvNext-B backbones that have been trained similarly. We report an accuracy of 82% with the vision transformer backbone on a test dataset of snapshots.
[ { "version": "v1", "created": "Thu, 22 Sep 2022 20:39:07 GMT" }, { "version": "v2", "created": "Tue, 27 Sep 2022 12:11:22 GMT" } ]
2022-09-28T00:00:00
[ [ "Tamhane", "Aniruddha", "" ], [ "Mida", "Tse'ela", "" ], [ "Posner", "Erez", "" ], [ "Bouhnik", "Moshe", "" ] ]
new_dataset
0.999716
2209.12513
Ruihao Zhou
Ruihao Zhou, Li He, Hong Zhang, Xubin Lin, Yisheng Guan
NDD: A 3D Point Cloud Descriptor Based on Normal Distribution for Loop Closure Detection
null
Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems 2022
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Loop closure detection is a key technology for long-term robot navigation in complex environments. In this paper, we present a global descriptor, named Normal Distribution Descriptor (NDD), for 3D point cloud loop closure detection. The descriptor encodes both the probability density score and the entropy of a point cloud. We also propose a fast rotation alignment process and use the correlation coefficient as the similarity measure between descriptors. Experimental results show that our approach outperforms the state-of-the-art point cloud descriptors in both accuracy and efficiency. The source code is available and can be integrated into existing LiDAR odometry and mapping (LOAM) systems.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 08:39:54 GMT" } ]
2022-09-28T00:00:00
[ [ "Zhou", "Ruihao", "" ], [ "He", "Li", "" ], [ "Zhang", "Hong", "" ], [ "Lin", "Xubin", "" ], [ "Guan", "Yisheng", "" ] ]
new_dataset
0.999711
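The abstract above says NDD encodes a probability density score and the entropy of the point cloud, and compares descriptors with a correlation coefficient, without giving the exact formulas. The sketch below shows textbook versions of two of those ingredients: the differential entropy of a Gaussian fitted to a voxel, and a Pearson-correlation similarity between flattened descriptors. It is illustrative only; the paper's actual normalization and rotation alignment are not reproduced.

```python
import numpy as np

def voxel_entropy(points: np.ndarray) -> float:
    """Differential entropy of a 3D Gaussian fitted to one voxel:
    H = 0.5 * ln((2*pi*e)**d * det(Sigma)) for a d-variate normal.

    points : (n, 3) points in the voxel, n >= 2.
    """
    sigma = np.cov(points.T) + 1e-6 * np.eye(3)  # regularize thin cells
    return 0.5 * np.log(((2 * np.pi * np.e) ** 3) * np.linalg.det(sigma))

def descriptor_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation coefficient between two flattened descriptors,
    the similarity measure the abstract mentions."""
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])
```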
2209.12962
Joel Brogan
Joel Brogan and Nell Barber and David Cornett and David Bolme
FaRO 2: an Open Source, Configurable Smart City Framework for Real-Time Distributed Vision and Biometric Systems
null
null
null
null
cs.CV cs.AI cs.CR cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Recent global growth in the interest of smart cities has led to trillions of dollars of investment toward research and development. These connected cities have the potential to create a symbiosis of technology and society and revolutionize the cost of living, safety, ecological sustainability, and quality of life of societies on a world-wide scale. Some key components of the smart city construct are connected smart grids, self-driving cars, federated learning systems, smart utilities, large-scale public transit, and proactive surveillance systems. While exciting in prospect, these technologies and their subsequent integration cannot be attempted without addressing the potential societal impacts of such a high degree of automation and data sharing. Additionally, the feasibility of coordinating so many disparate tasks will require a fast, extensible, unifying framework. To that end, we propose FaRO2, a completely reimagined successor to FaRO1, built from the ground up. FaRO2 affords all of the same functionality as its predecessor, serving as a unified biometric API harness that allows for seamless evaluation, deployment, and simple pipeline creation for heterogeneous biometric software. FaRO2 additionally provides a fully declarative capability for defining and coordinating custom machine learning and sensor pipelines, allowing the distribution of processes across otherwise incompatible hardware and networks. FaRO2 ultimately provides a way to quickly configure, hot-swap, and expand large coordinated or federated systems online without interruptions for maintenance. Because much of the data collected in a smart city contains Personally Identifying Information (PII), FaRO2 also provides built-in tools and layers to ensure secure and encrypted streaming, storage, and access of PII data across distributed systems.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 18:52:53 GMT" } ]
2022-09-28T00:00:00
[ [ "Brogan", "Joel", "" ], [ "Barber", "Nell", "" ], [ "Cornett", "David", "" ], [ "Bolme", "David", "" ] ]
new_dataset
0.998463
2209.13015
Ehsan Gholami
Ehsan Gholami, Mohammad Motamedi, Ashwin Aravindakshan
PARSRec: Explainable Personalized Attention-fused Recurrent Sequential Recommendation Using Session Partial Actions
10 pages, 4 figures, this is the author's version of the work. The definitive Version of Record was published in Proceedings of the 28th ACM SIGKDD Conference on Knowledge Discovery and Data Mining (KDD '22), August 14-18, 2022, Washington, DC, USA
null
10.1145/3534678.3539432
null
cs.IR cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The emerging meta- and multi-verse landscape is yet another step towards the more prevalent use of already ubiquitous online markets. In such markets, recommender systems play critical roles by offering items of interest to the users, thereby narrowing down a vast search space that comprises hundreds of thousands of products. Recommender systems are usually designed to learn common user behaviors and rely on them for inference. This approach, while effective, is oblivious to subtle idiosyncrasies that differentiate humans from each other. Focusing on this observation, we propose an architecture that relies on common patterns as well as individual behaviors to tailor its recommendations for each person. Simulations under a controlled environment show that our proposed model learns interpretable personalized user behaviors. Our empirical results on Nielsen Consumer Panel dataset indicate that the proposed approach achieves up to 27.9% performance improvement compared to the state-of-the-art.
[ { "version": "v1", "created": "Fri, 16 Sep 2022 12:07:43 GMT" } ]
2022-09-28T00:00:00
[ [ "Gholami", "Ehsan", "" ], [ "Motamedi", "Mohammad", "" ], [ "Aravindakshan", "Ashwin", "" ] ]
new_dataset
0.983372
2209.13023
Kai-Robin Lange
Kai-Robin Lange, Jonas Rieger, Carsten Jentsch
Lex2Sent: A bagging approach to unsupervised sentiment analysis
10 pages, 1 figure
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised sentiment analysis is traditionally performed by counting those words in a text that are stored in a sentiment lexicon and then assigning a label depending on the proportion of positive and negative words registered. While these "counting" methods are considered beneficial because they rate a text deterministically, their classification rates decrease when the analyzed texts are short or the vocabulary differs from what the lexicon considers default. The model proposed in this paper, called Lex2Sent, is an unsupervised sentiment analysis method that improves on the classification performance of sentiment lexicon methods. For this purpose, a Doc2Vec model is trained to determine the distances between document embeddings and the embeddings of the positive and negative parts of a sentiment lexicon. These distances are then evaluated for multiple executions of Doc2Vec on resampled documents and averaged to perform the classification task. For the three benchmark datasets considered in this paper, the proposed Lex2Sent outperforms every evaluated lexicon, including state-of-the-art lexica like VADER or the Opinion Lexicon, in terms of classification rate.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 20:49:18 GMT" } ]
2022-09-28T00:00:00
[ [ "Lange", "Kai-Robin", "" ], [ "Rieger", "Jonas", "" ], [ "Jentsch", "Carsten", "" ] ]
new_dataset
0.952567
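The abstract above specifies the core loop of Lex2Sent clearly enough to sketch: embed each document and the two halves of a sentiment lexicon with Doc2Vec, compare distances, and average over several runs on resampled documents (bagging). The sketch below is our reading of that recipe, with illustrative hyperparameters; the paper's exact resampling and scoring details may differ.

```python
import numpy as np
from gensim.models.doc2vec import Doc2Vec, TaggedDocument

def lex2sent_scores(docs, pos_words, neg_words, runs=5, seed=0):
    """Bagging-style sentiment scores in the spirit of Lex2Sent.

    docs                 : list of token lists.
    pos_words, neg_words : positive/negative halves of a sentiment lexicon.
    Returns one score per document; scores above zero lean positive.
    """
    rng = np.random.default_rng(seed)
    cos = lambda a, b: a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    margins = np.zeros(len(docs))
    for _ in range(runs):
        # Bootstrap-resample the corpus and retrain Doc2Vec from scratch.
        sample = rng.integers(0, len(docs), size=len(docs))
        tagged = [TaggedDocument(docs[i], [t]) for t, i in enumerate(sample)]
        model = Doc2Vec(tagged, vector_size=50, min_count=1, epochs=20)
        pos_vec = model.infer_vector(pos_words)
        neg_vec = model.infer_vector(neg_words)
        for j, doc in enumerate(docs):
            v = model.infer_vector(doc)
            # Cosine distance to the negative pole minus distance to the
            # positive pole: larger means closer to "positive".
            margins[j] += (1 - cos(v, neg_vec)) - (1 - cos(v, pos_vec))
    return margins / runs
```

The averaging over `runs` retrainings is the bagging step; it is what stabilizes Doc2Vec's otherwise noisy single-run inference.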
2209.13064
Dima Damen
Ahmad Darkhalil, Dandan Shan, Bin Zhu, Jian Ma, Amlan Kar, Richard Higgins, Sanja Fidler, David Fouhey, Dima Damen
EPIC-KITCHENS VISOR Benchmark: VIdeo Segmentations and Object Relations
10 pages main, 38 pages appendix. Accepted at NeurIPS 2022 Track on Datasets and Benchmarks. Data, code and leaderboards from: http://epic-kitchens.github.io/VISOR
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce VISOR, a new dataset of pixel annotations and a benchmark suite for segmenting hands and active objects in egocentric video. VISOR annotates videos from EPIC-KITCHENS, which comes with a new set of challenges not encountered in current video segmentation datasets. Specifically, we need to ensure both short- and long-term consistency of pixel-level annotations as objects undergo transformative interactions, e.g. an onion is peeled, diced and cooked - where we aim to obtain accurate pixel-level annotations of the peel, onion pieces, chopping board, knife, pan, as well as the acting hands. VISOR introduces an annotation pipeline, AI-powered in parts, for scalability and quality. In total, we publicly release 272K manual semantic masks of 257 object classes, 9.9M interpolated dense masks, 67K hand-object relations, covering 36 hours of 179 untrimmed videos. Along with the annotations, we introduce three challenges in video object segmentation, interaction understanding and long-term reasoning. For data, code and leaderboards: http://epic-kitchens.github.io/VISOR
[ { "version": "v1", "created": "Mon, 26 Sep 2022 23:03:26 GMT" } ]
2022-09-28T00:00:00
[ [ "Darkhalil", "Ahmad", "" ], [ "Shan", "Dandan", "" ], [ "Zhu", "Bin", "" ], [ "Ma", "Jian", "" ], [ "Kar", "Amlan", "" ], [ "Higgins", "Richard", "" ], [ "Fidler", "Sanja", "" ], [ "Fouhey", "David", "" ], [ "Damen", "Dima", "" ] ]
new_dataset
0.999821
2209.13101
Hoang Thang Ta
Hoang Thang Ta, Abu Bakar Siddiqur Rahman, Navonil Majumder, Amir Hussain, Lotfollah Najjar, Newton Howard, Soujanya Poria and Alexander Gelbukh
WikiDes: A Wikipedia-Based Dataset for Generating Short Descriptions from Paragraphs
27 pages, 8 figures, 15 tables
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
As free online encyclopedias with massive volumes of content, Wikipedia and Wikidata are key to many Natural Language Processing (NLP) tasks, such as information retrieval, knowledge base building, machine translation, text classification, and text summarization. In this paper, we introduce WikiDes, a novel dataset to generate short descriptions of Wikipedia articles for the problem of text summarization. The dataset consists of over 80k English samples on 6987 topics. We set up a two-phase summarization method - description generation (Phase I) and candidate ranking (Phase II) - as a strong approach that relies on transfer and contrastive learning. For description generation, T5 and BART show their superiority compared to other small-scale pre-trained models. By applying contrastive learning with diverse input from beam search, the metric fusion-based ranking models significantly outperform the direct description generation models, by up to 22 ROUGE points on both the topic-exclusive and topic-independent splits. Furthermore, the Phase II descriptions are supported by human evaluation: they were preferred over the gold descriptions in 45.33% of cases, compared to 23.66% for Phase I. In terms of sentiment analysis, the generated descriptions cannot effectively capture all sentiment polarities from the paragraphs, while the gold descriptions do so better. The automatic generation of new descriptions reduces the human effort in creating them and enriches Wikidata-based knowledge graphs. Our paper shows a practical impact on Wikipedia and Wikidata, since thousands of descriptions are missing. Finally, we expect WikiDes to be a useful dataset for related works in capturing salient information from short paragraphs. The curated dataset is publicly available at: https://github.com/declare-lab/WikiDes.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 01:28:02 GMT" } ]
2022-09-28T00:00:00
[ [ "Ta", "Hoang Thang", "" ], [ "Rahman", "Abu Bakar Siddiqur", "" ], [ "Majumder", "Navonil", "" ], [ "Hussain", "Amir", "" ], [ "Najjar", "Lotfollah", "" ], [ "Howard", "Newton", "" ], [ "Poria", "Soujanya", "" ], [ "Gelbukh", "Alexander", "" ] ]
new_dataset
0.999808
2209.13202
Paraskevi Nousi
Paraskevi Nousi, Emmanouil Mpampis, Nikolaos Passalis, Ole Green, Anastasios Tefas
A Novel Dataset for Evaluating and Alleviating Domain Shift for Human Detection in Agricultural Fields
null
null
null
null
cs.CV cs.NE
http://creativecommons.org/licenses/by/4.0/
In this paper we evaluate the impact of domain shift on human detection models trained on well known object detection datasets when deployed on data outside the distribution of the training set, as well as propose methods to alleviate such phenomena based on the available annotations from the target domain. Specifically, we introduce the OpenDR Humans in Field dataset, collected in the context of agricultural robotics applications, using the Robotti platform, allowing for quantitatively measuring the impact of domain shift in such applications. Furthermore, we examine the importance of manual annotation by evaluating three distinct scenarios concerning the training data: a) only negative samples, i.e., no depicted humans, b) only positive samples, i.e., only images which contain humans, and c) both negative and positive samples. Our results indicate that good performance can be achieved even when using only negative samples, if additional consideration is given to the training process. We also find that positive samples increase performance especially in terms of better localization. The dataset is publicly available for download at https://github.com/opendr-eu/datasets.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 07:04:28 GMT" } ]
2022-09-28T00:00:00
[ [ "Nousi", "Paraskevi", "" ], [ "Mpampis", "Emmanouil", "" ], [ "Passalis", "Nikolaos", "" ], [ "Green", "Ole", "" ], [ "Tefas", "Anastasios", "" ] ]
new_dataset
0.983099
2209.13204
Xuefei Zhe
Weiqiang Wang, Xuefei Zhe, Huan Chen, Di Kang, Tingguang Li, Ruizhi Chen, and Linchao Bao
NEURAL MARIONETTE: A Transformer-based Multi-action Human Motion Synthesis System
null
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present a neural network-based system for long-term, multi-action human motion synthesis. The system, dubbed NEURAL MARIONETTE, can produce high-quality and meaningful motions with smooth transitions from simple user input, including a sequence of action tags with expected action durations, and optionally a hand-drawn moving trajectory if the user specifies one. The core of our system is a novel Transformer-based motion generation model, namely MARIONET, which can generate diverse motions given action tags. Different from existing motion generation models, MARIONET utilizes contextual information from the past motion clip and the future action tag, and is dedicated to generating actions that can smoothly blend historical and future actions. Specifically, MARIONET first encodes the target action tag and contextual information into an action-level latent code. The code is unfolded into frame-level control signals via a time unrolling module, which can then be combined with other frame-level control signals like the target trajectory. Motion frames are then generated in an auto-regressive way. By sequentially applying MARIONET, the system NEURAL MARIONETTE can robustly generate long-term, multi-action motions with the help of two simple schemes, namely "Shadow Start" and "Action Revision". Along with the novel system, we also present a new dataset dedicated to the multi-action motion synthesis task, which contains both action tags and their contextual information. Extensive experiments are conducted to study the action accuracy, naturalism, and transition smoothness of the motions generated by our system.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 07:10:20 GMT" } ]
2022-09-28T00:00:00
[ [ "Wang", "Weiqiang", "" ], [ "Zhe", "Xuefei", "" ], [ "Chen", "Huan", "" ], [ "Kang", "Di", "" ], [ "Li", "Tingguang", "" ], [ "Chen", "Ruizhi", "" ], [ "Bao", "Linchao", "" ] ]
new_dataset
0.999549
2209.13219
Zhengyan Tong
Zhengyan Tong, Xiaohang Wang, Shengchao Yuan, Xuanhong Chen, Junjie Wang, Xiangzhong Fang
Im2Oil: Stroke-Based Oil Painting Rendering with Linearly Controllable Fineness Via Adaptive Sampling
ACM MM 2022 oral paper, accepted by the 30th ACM International Conference on Multimedia
null
10.1145/3503161.3547759
null
cs.CV cs.LG cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a novel stroke-based rendering (SBR) method that translates images into vivid oil paintings. Previous SBR techniques usually formulate the oil painting problem as pixel-wise approximation. Departing from this route, we treat oil painting creation as an adaptive sampling problem. Firstly, we compute a probability density map based on the texture complexity of the input image. Then we use the Voronoi algorithm to sample a set of pixels as the stroke anchors. Next, we search for and generate an individual oil stroke at each anchor. Finally, we place all the strokes on the canvas to obtain the oil painting. By adjusting the maximum-sampling-probability hyper-parameter, we can control the oil painting's fineness in a linear manner. Comparison with existing state-of-the-art oil painting techniques shows that our results have higher fidelity and more realistic textures. A user opinion test demonstrates that people show a stronger preference for our oil paintings than for the results of other methods. More interesting results and the code are available at https://github.com/TZYSJTU/Im2Oil.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 07:41:04 GMT" } ]
2022-09-28T00:00:00
[ [ "Tong", "Zhengyan", "" ], [ "Wang", "Xiaohang", "" ], [ "Yuan", "Shengchao", "" ], [ "Chen", "Xuanhong", "" ], [ "Wang", "Junjie", "" ], [ "Fang", "Xiangzhong", "" ] ]
new_dataset
0.996697
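The anchor-sampling step above lends itself to a short sketch. Note the substitution: the paper samples with a Voronoi algorithm, while the sketch below uses plain weighted sampling from a texture-complexity density map; the clipping constant `p_max` plays the role of the paper's maximum-sampling-probability hyper-parameter. Everything else (stroke search and placement) is omitted.

```python
import numpy as np
from scipy.ndimage import sobel, gaussian_filter

def sample_stroke_anchors(gray: np.ndarray, n_anchors: int,
                          p_max: float = 0.01, seed: int = 0) -> np.ndarray:
    """Density-guided anchor sampling: texture-complex regions receive
    more stroke anchors, and clipping per-pixel probabilities at p_max
    caps the overall fineness.

    gray : (H, W) grayscale image in [0, 1].
    Returns an (n_anchors, 2) array of (row, col) anchor positions.
    """
    # Texture-complexity proxy: smoothed gradient magnitude.
    mag = np.hypot(sobel(gray, axis=0), sobel(gray, axis=1))
    density = gaussian_filter(mag, sigma=2) + 1e-8
    density /= density.sum()
    density = np.minimum(density, p_max)  # linear control of fineness
    density /= density.sum()
    rng = np.random.default_rng(seed)
    flat = rng.choice(gray.size, size=n_anchors, replace=False,
                      p=density.ravel())
    return np.stack(np.unravel_index(flat, gray.shape), axis=1)
```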
2209.13252
Hao Yu
Hao Yu, Ji Hou, Zheng Qin, Mahdi Saleh, Ivan Shugurov, Kai Wang, Benjamin Busam, Slobodan Ilic
RIGA: Rotation-Invariant and Globally-Aware Descriptors for Point Cloud Registration
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Successful point cloud registration relies on accurate correspondences established upon powerful descriptors. However, existing neural descriptors either leverage a rotation-variant backbone whose performance declines under large rotations, or encode local geometry that is less distinctive. To address this issue, we introduce RIGA to learn descriptors that are Rotation-Invariant by design and Globally-Aware. From the Point Pair Features (PPFs) of sparse local regions, rotation-invariant local geometry is encoded into geometric descriptors. Global awareness of 3D structures and geometric context is subsequently incorporated, both in a rotation-invariant fashion. More specifically, 3D structures of the whole frame are first represented by our global PPF signatures, from which structural descriptors are learned to help geometric descriptors sense the 3D world beyond local regions. Geometric context from the whole scene is then globally aggregated into descriptors. Finally, the description of sparse regions is interpolated to dense point descriptors, from which correspondences are extracted for registration. To validate our approach, we conduct extensive experiments on both object- and scene-level data. With large rotations, RIGA surpasses the state-of-the-art methods by a margin of 8\degree in terms of the Relative Rotation Error on ModelNet40 and improves the Feature Matching Recall by at least 5 percentage points on 3DLoMatch.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 08:45:56 GMT" } ]
2022-09-28T00:00:00
[ [ "Yu", "Hao", "" ], [ "Hou", "Ji", "" ], [ "Qin", "Zheng", "" ], [ "Saleh", "Mahdi", "" ], [ "Shugurov", "Ivan", "" ], [ "Wang", "Kai", "" ], [ "Busam", "Benjamin", "" ], [ "Ilic", "Slobodan", "" ] ]
new_dataset
0.999375
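RIGA's descriptors start from Point Pair Features (PPFs), which the abstract references but does not define. For context, here is the classic PPF of Drost et al., which is rotation- and translation-invariant by construction; RIGA's global PPF signatures and learned descriptors build on top of features like this and are not shown.

```python
import numpy as np

def point_pair_feature(p1, n1, p2, n2):
    """Classic Point Pair Feature for two distinct points p1, p2 with
    unit normals n1, n2:

        F = (|d|, angle(n1, d), angle(n2, d), angle(n1, n2)),  d = p2 - p1.

    Every component depends only on relative geometry, so F is invariant
    to rigid transformations of the pair.
    """
    d = np.asarray(p2, float) - np.asarray(p1, float)
    angle = lambda a, b: np.arccos(np.clip(
        np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)), -1.0, 1.0))
    return np.linalg.norm(d), angle(n1, d), angle(n2, d), angle(n1, n2)
```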
2209.13304
Manuel Vidigueira
Martina Camaioni, Rachid Guerraoui, Matteo Monti, Manuel Vidigueira
Oracular Byzantine Reliable Broadcast [Extended Version]
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Byzantine Reliable Broadcast (BRB) is a fundamental distributed computing primitive, with applications ranging from notifications to asynchronous payment systems. Motivated by practical considerations, we study Client-Server Byzantine Reliable Broadcast (CSB), a multi-shot variant of BRB whose interface is split between broadcasting clients and delivering servers. We present Draft, an optimally resilient implementation of CSB. Like most implementations of BRB, Draft guarantees both liveness and safety in an asynchronous environment. Under good conditions, however, Draft achieves unparalleled efficiency. In a moment of synchrony, free from Byzantine misbehaviour, and at the limit of infinitely many broadcasting clients, a Draft server delivers a $b$-bits payload at an asymptotic amortized cost of $0$ signature verifications, and $\log_2(c) + b$ bits exchanged, where $c$ is the number of clients in the system. This is the information-theoretical minimum number of bits required to convey the payload ($b$ bits, assuming it is compressed), along with an identifier for its sender ($\log_2(c)$ bits, necessary to enumerate any set of $c$ elements, and optimal if broadcasting frequencies are uniform or unknown). These two achievements have profound practical implications. Real-world BRB implementations are often bottlenecked either by expensive signature verifications, or by communication overhead. For Draft, instead, the network is the limit: a server can deliver payloads as quickly as it would receive them from an infallible oracle.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 11:09:54 GMT" } ]
2022-09-28T00:00:00
[ [ "Camaioni", "Martina", "" ], [ "Guerraoui", "Rachid", "" ], [ "Monti", "Matteo", "" ], [ "Vidigueira", "Manuel", "" ] ]
new_dataset
0.999194
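The claimed good-case cost above is concrete enough for a one-line check: delivering a $b$-bit payload from one of $c$ clients takes $\log_2(c) + b$ bits, the sender identifier plus the payload itself. A tiny worked example (the values plugged in are arbitrary):

```python
import math

def amortized_bits(c: int, b: int) -> float:
    """Information-theoretic floor from the abstract above: an identifier
    for one of c clients (log2(c) bits) plus the b-bit payload."""
    return math.log2(c) + b

# One million clients, 1 KiB payloads:
print(amortized_bits(1_000_000, 8 * 1024))  # ~8211.93 bits per delivery
```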
2209.13323
Shantanu Pal
Hejia Zhou, Shantanu Pal, Zahra Jadidi, Alireza Jolfaei
A Fog-Based Security Framework for Large-Scale Industrial Internet of Things Environments
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
The Industrial Internet of Things (IIoT) is a developing research area with potential global Internet connectivity, turning everyday objects into intelligent devices with more autonomous activities. IIoT services and applications are not only being used in smart homes and smart cities, but have also become an essential element of the Industry 4.0 concept. The emergence of the IIoT helps traditional industries simplify production processes, reduce production costs, and improve industrial efficiency. However, the involvement of many heterogeneous devices, the use of third-party software, and the resource-constrained nature of IoT devices bring new security risks to the production chain and expose vulnerabilities in the systems. Distributed Denial of Service (DDoS) attacks are among the most significant of these. This article analyzes the threats and attacks in the IIoT and discusses how DDoS attacks impact the production process and cause communication dysfunctions in IIoT services and applications. This article also proposes a reference security framework that leverages the advantages of fog computing to demonstrate countermeasures against DDoS attacks and possible strategies to mitigate such attacks at scale.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 11:58:13 GMT" } ]
2022-09-28T00:00:00
[ [ "Zhou", "Hejia", "" ], [ "Pal", "Shantanu", "" ], [ "Jadidi", "Zahra", "" ], [ "Jolfaei", "Alireza", "" ] ]
new_dataset
0.967439
2209.13331
Jane Dwivedi-Yu
Jane Dwivedi-Yu, Timo Schick, Zhengbao Jiang, Maria Lomeli, Patrick Lewis, Gautier Izacard, Edouard Grave, Sebastian Riedel, Fabio Petroni
EditEval: An Instruction-Based Benchmark for Text Improvements
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Evaluation of text generation to date has primarily focused on content created sequentially, rather than improvements on a piece of text. Writing, however, is naturally an iterative and incremental process that requires expertise in different modular skills such as fixing outdated information or making the style more consistent. Even so, comprehensive evaluation of a model's capacity to perform these skills and the ability to edit remains sparse. This work presents EditEval: an instruction-based benchmark and evaluation suite that leverages high-quality existing and new datasets for the automatic evaluation of editing capabilities, such as making text more cohesive and paraphrasing. We evaluate several pre-trained models, which shows that InstructGPT and PEER perform the best, but that most baselines fall below the supervised SOTA, particularly when neutralizing and updating information. Our analysis also shows that commonly used metrics for editing tasks do not always correlate well, and that optimization for prompts with the highest performance does not necessarily entail the strongest robustness to different models. Through the release of this benchmark and a publicly available leaderboard challenge, we hope to unlock future research in developing models capable of iterative and more controllable editing.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 12:26:05 GMT" } ]
2022-09-28T00:00:00
[ [ "Dwivedi-Yu", "Jane", "" ], [ "Schick", "Timo", "" ], [ "Jiang", "Zhengbao", "" ], [ "Lomeli", "Maria", "" ], [ "Lewis", "Patrick", "" ], [ "Izacard", "Gautier", "" ], [ "Grave", "Edouard", "" ], [ "Riedel", "Sebastian", "" ], [ "Petroni", "Fabio", "" ] ]
new_dataset
0.998934
2209.13362
Yijin Li
Yijin Li, Xinyang Liu, Wenqi Dong, Han Zhou, Hujun Bao, Guofeng Zhang, Yinda Zhang, Zhaopeng Cui
DELTAR: Depth Estimation from a Light-weight ToF Sensor and RGB Image
Accepted to ECCV 2022. Project Page: https://zju3dv.github.io/deltar/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Light-weight time-of-flight (ToF) depth sensors are small, cheap, and low-energy, and have been massively deployed on mobile devices for purposes like autofocus, obstacle detection, etc. However, due to their specific measurements (a depth distribution in a region instead of the depth value at a certain pixel) and extremely low resolution, they are insufficient for applications requiring high-fidelity depth, such as 3D reconstruction. In this paper, we propose DELTAR, a novel method to empower light-weight ToF sensors with the capability of measuring high-resolution and accurate depth by cooperating with a color image. As the core of DELTAR, a feature extractor customized for depth distributions and an attention-based neural architecture are proposed to fuse the information from the color and ToF domains efficiently. To evaluate our system in real-world scenarios, we design a data collection device and propose a new approach to calibrate the RGB camera and ToF sensor. Experiments show that our method produces more accurate depth than existing frameworks designed for depth completion and depth super-resolution, and achieves on-par performance with a commodity-level RGB-D sensor. Code and data are available at https://zju3dv.github.io/deltar/.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 13:11:37 GMT" } ]
2022-09-28T00:00:00
[ [ "Li", "Yijin", "" ], [ "Liu", "Xinyang", "" ], [ "Dong", "Wenqi", "" ], [ "Zhou", "Han", "" ], [ "Bao", "Hujun", "" ], [ "Zhang", "Guofeng", "" ], [ "Zhang", "Yinda", "" ], [ "Cui", "Zhaopeng", "" ] ]
new_dataset
0.988847
2209.13373
Ville Salo
Ville Salo
On von Neumann regularity of cellular automata
16 pages, 3 figures; comments welcome! arXiv admin note: text overlap with arXiv:1804.03913
null
null
null
cs.FL math.DS math.RA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We show that a cellular automaton on a one-dimensional two-sided mixing subshift of finite type is a von Neumann regular element in the semigroup of cellular automata if and only if it is split epic onto its image in the category of sofic shifts and block maps. It follows from previous joint work of the author and T\"orm\"a that von Neumann regularity is a decidable condition, and we decide it for all elementary CA, obtaining the optimal radii for weak generalized inverses. Two sufficient conditions for non-regularity are having a proper sofic image or having a point in the image with no preimage of the same period. We show that the non-regular ECA 9 and 28 cannot be proven non-regular using these methods. We also show that a random cellular automaton is non-regular with high probability.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 13:19:13 GMT" } ]
2022-09-28T00:00:00
[ [ "Salo", "Ville", "" ] ]
new_dataset
0.950996
2209.13391
Inna Sosunova
Inna Sosunova, Jari Porras, Ekaterina Makarova and Andrei Rybin
Waste Management Hackathon Providing New Ideas to Increase Citizen Awareness, Motivation and Engagement
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes the International Disruptive Information Solutions hackathon and one of the winning solutions. The purpose of the hackathon was to promote the use of disruptive ICT technologies (e.g. IoT, Big data, AI, blockchain) in urban infrastructures to create innovative waste management solutions in a smart city context. 29 students enrolled in this hackathon and in the end 4 teams submitted their solutions to the challenges. The winning proposal, EcoQ, an approach for plogging (collecting litter while jogging), responded well to the presented challenge on waste management and engagement. The original idea was extended and partly refocused during an internship. As the outcome of the internship, a mobile application for organizing and holding waste collection events was developed. This mobile application was briefly tested in a real environment; it provides a working citizen-centric platform that enables anyone to arrange waste management events and motivates other residents to participate in these activities.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 13:55:05 GMT" } ]
2022-09-28T00:00:00
[ [ "Sosunova", "Inna", "" ], [ "Porras", "Jari", "" ], [ "Makarova", "Ekaterina", "" ], [ "Rybin", "Andrei", "" ] ]
new_dataset
0.989331
2209.13418
Kushagra Srivastava
Kushagra Srivastava, Dhruv Patel, Aditya Kumar Jha, Mohhit Kumar Jha, Jaskirat Singh, Ravi Kiran Sarvadevabhatla, Pradeep Kumar Ramancharla, Harikumar Kandath and K. Madhava Krishna
UAV-based Visual Remote Sensing for Automated Building Inspection
Paper accepted at CVCIE Workshop at ECCV, 2022 and the project page is https://uvrsabi.github.io/
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
Unmanned Aerial Vehicle (UAV)-based remote sensing systems incorporating computer vision have demonstrated potential for assisting building construction and disaster management, such as damage assessment during earthquakes. The vulnerability of a building to earthquakes can be assessed through inspections that take into account the expected damage progression of the associated component and the component's contribution to structural system performance. Most of these inspections are done manually, leading to high utilization of manpower, time, and cost. This paper proposes a methodology to automate these inspections through UAV-based image data collection and a software library for post-processing that helps in estimating the seismic structural parameters. The key parameters considered here are the distances between adjacent buildings, building plan-shape, building plan area, objects on the rooftop and rooftop layout. The accuracy of the proposed methodology in estimating the above-mentioned parameters is verified through field measurements taken using a distance measuring sensor and also from the data obtained through Google Earth. Additional details and code can be accessed from https://uvrsabi.github.io/ .
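One of the listed parameters, building plan area, is easy to illustrate: given a rooftop footprint polygon extracted from UAV imagery, the shoelace formula yields the plan area. The sketch below is a generic illustration with hypothetical names, not the authors' released code.

```python
# Hypothetical sketch (not the UVRSABI library): estimating a building's
# plan area from a rooftop footprint polygon via the shoelace formula.
from typing import List, Tuple

def plan_area(footprint: List[Tuple[float, float]]) -> float:
    """Area of a simple polygon given vertices in order (coordinates in metres)."""
    n = len(footprint)
    acc = 0.0
    for i in range(n):
        x1, y1 = footprint[i]
        x2, y2 = footprint[(i + 1) % n]
        acc += x1 * y2 - x2 * y1
    return abs(acc) / 2.0

# Example: a 10 m x 8 m rectangular rooftop -> 80.0 square metres
print(plan_area([(0, 0), (10, 0), (10, 8), (0, 8)]))
```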
[ { "version": "v1", "created": "Tue, 27 Sep 2022 14:18:14 GMT" } ]
2022-09-28T00:00:00
[ [ "Srivastava", "Kushagra", "" ], [ "Patel", "Dhruv", "" ], [ "Jha", "Aditya Kumar", "" ], [ "Jha", "Mohhit Kumar", "" ], [ "Singh", "Jaskirat", "" ], [ "Sarvadevabhatla", "Ravi Kiran", "" ], [ "Ramancharla", "Pradeep Kumar", "" ], [ "Kandath", "Harikumar", "" ], [ "Krishna", "K. Madhava", "" ] ]
new_dataset
0.970305
2209.13428
Qingyu Chen
Qingyu Chen, Alexis Allot, Robert Leaman, Chih-Hsuan Wei, Elaheh Aghaarabi, John J. Guerrerio, Lilly Xu, Zhiyong Lu
LitCovid in 2022: an information resource for the COVID-19 literature
9 pages
null
null
null
cs.DL cs.IR
http://creativecommons.org/licenses/by/4.0/
LitCovid (https://www.ncbi.nlm.nih.gov/research/coronavirus/), first launched in February 2020, is a first-of-its-kind literature hub for tracking up-to-date published research on COVID-19. The number of articles in LitCovid has increased from 55,000 to ~300,000 over the past two and a half years, with a consistent growth rate of ~10,000 articles per month. In addition to the rapid literature growth, the COVID-19 pandemic has evolved dramatically. For instance, the Omicron variant now accounts for over 98% of new infections in the U.S. In response to the continuing evolution of the COVID-19 pandemic, this article describes significant updates to LitCovid over the last two years. First, we introduced the Long Covid collection consisting of the articles on COVID-19 survivors experiencing ongoing multisystemic symptoms, including respiratory issues, cardiovascular disease, cognitive impairment, and profound fatigue. Second, we provided new annotations on the latest COVID-19 strains and vaccines mentioned in the literature. Third, we improved several existing features with more accurate machine learning algorithms for annotating topics and classifying articles relevant to COVID-19. LitCovid has been widely used, with millions of accesses by users worldwide for various information needs, and continues to play a critical role in collecting, curating, and standardizing the latest knowledge on the COVID-19 literature.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 14:32:20 GMT" } ]
2022-09-28T00:00:00
[ [ "Chen", "Qingyu", "" ], [ "Allot", "Alexis", "" ], [ "Leaman", "Robert", "" ], [ "Wei", "Chih-Hsuan", "" ], [ "Aghaarabi", "Elaheh", "" ], [ "Guerrerio", "John J.", "" ], [ "Xu", "Lilly", "" ], [ "Lu", "Zhiyong", "" ] ]
new_dataset
0.997827
2209.13431
Gracy Christopher
M. Gracy, B. Rebecca Jeyavadhanam
MTTBA- A Key Contributor for Sustainable Energy Consumption Time and Space Utility for Highly Secured Crypto Transactions in Blockchain Technology
15 pages, 13 figures
null
null
null
cs.CR
http://creativecommons.org/publicdomain/zero/1.0/
A Merkle tree is a data structure used in Blockchain to verify data or transactions in a large content pool in a safe manner. The role of the Merkle tree is crucial in Bitcoin and other cryptocurrencies in a Blockchain network. In this paper, we propose a streamlined and enhanced verification method, Merkle Trim Tree-based Blockchain Authentication (MTTBA), for hash node traversal to reach the Merkle Root in minimum time. MTTBA is a unique mechanism for verifying the Merkle Tree's accumulated transactions, specifically for an odd number of transactions. The future impact of cryptocurrency is going to be massive, and MTTBA proves its efficacy in transaction speed and in eliminating node duplication. Our method enables any block to validate transactions' full availability without duplicating hash nodes. Performance has been evaluated on different parameters and the results show marked improvement in throughput (29700 kbps), processing time (1680 ms), memory usage (140 MB), and security (99.30%). The energy consumption factor is crucial in this scenario, and MTTBA has achieved the lowest figure of 240 joules.
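A minimal sketch of the core idea this abstract describes, i.e., handling an odd number of transactions without duplicating hash nodes. The assumption that "trimming" promotes the unpaired node to the next level (rather than hashing it with a copy of itself, as Bitcoin-style trees do) is the editor's reading, not necessarily the paper's exact construction.

```python
# Illustrative sketch: Merkle root without node duplication for odd counts.
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_no_duplication(leaves):
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        nxt = []
        for i in range(0, len(level) - 1, 2):
            nxt.append(h(level[i] + level[i + 1]))
        if len(level) % 2 == 1:      # odd count: carry the last node up...
            nxt.append(level[-1])    # ...rather than hashing it with itself
        level = nxt
    return level[0]

root = merkle_root_no_duplication([b"tx1", b"tx2", b"tx3"])
print(root.hex())
```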
[ { "version": "v1", "created": "Tue, 27 Sep 2022 14:38:30 GMT" } ]
2022-09-28T00:00:00
[ [ "Gracy", "M.", "" ], [ "Jeyavadhanam", "B. Rebecca", "" ] ]
new_dataset
0.992452
2209.13458
Kayol Mayer
Jonathan Aguiar Soares, Kayol Soares Mayer, Pedro Benevenuto Valadares, Dalton Soares Arantes
PCA-based Channel Estimation for MIMO Communications
5 pages, 7 figures, XL SIMP\'OSIO BRASILEIRO DE TELECOMUNICA\c{C}\~OES E PROCESSAMENTO DE SINAIS (SBrT 2022)
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
In multiple-input multiple-output communications, channel estimation is paramount to keep base stations and users on track. This paper proposes a novel channel estimation approach based on principal component analysis (PCA) for MIMO orthogonal frequency division multiplexing systems. The channel frequency response is first estimated with the least squares method, and then PCA is used to retain only the dominant singular components of the channel impulse response, which is then converted back to the frequency domain. The proposed approach is compared with MMSE, the minimum mean square error estimator, in terms of bit error rate versus Eb/N0.
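A hedged NumPy sketch of the pipeline described above: least-squares estimates are moved to the delay domain, truncated to their dominant singular components, and transformed back. The matrix dimensions, pilot layout, and rank choice are illustrative assumptions, not the paper's exact setup.

```python
# Sketch of LS -> delay domain -> SVD truncation -> frequency domain.
import numpy as np

def pca_denoise_channel(H_ls: np.ndarray, rank: int) -> np.ndarray:
    """H_ls: (num_observations, num_subcarriers) LS estimates of the
    channel frequency response. Returns PCA-filtered estimates."""
    # 1. Move to the delay (impulse-response) domain.
    h_time = np.fft.ifft(H_ls, axis=1)
    # 2. Keep only the strongest singular components.
    U, s, Vh = np.linalg.svd(h_time, full_matrices=False)
    s[rank:] = 0.0
    h_filtered = (U * s) @ Vh
    # 3. Back to the frequency domain.
    return np.fft.fft(h_filtered, axis=1)

# Toy usage: 16 noisy observations of a 64-subcarrier channel, rank-2 filter.
H_noisy = np.random.randn(16, 64) + 1j * np.random.randn(16, 64)
H_hat = pca_denoise_channel(H_noisy, rank=2)
```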
[ { "version": "v1", "created": "Tue, 27 Sep 2022 15:25:49 GMT" } ]
2022-09-28T00:00:00
[ [ "Soares", "Jonathan Aguiar", "" ], [ "Mayer", "Kayol Soares", "" ], [ "Valadares", "Pedro Benevenuto", "" ], [ "Arantes", "Dalton Soares", "" ] ]
new_dataset
0.996732
2209.13461
Tasnim Sakib Apon
Tasnim Sakib Apon, Ramisa Anan, Elizabeth Antora Modhu, Arjun Suter, Ifrit Jamal Sneha, MD. Golam Rabiul Alam
BanglaSarc: A Dataset for Sarcasm Detection
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Being one of the most widely spoken languages in the world, Bangla has seen increasing use on social media as well. Sarcasm is a positive statement or remark with an underlying negative motivation that is extensively employed on today's social media platforms. There has been significant improvement in sarcasm detection in English over the previous years; however, the situation for Bangla sarcasm detection remains unchanged. As a result, it is still difficult to identify sarcasm in Bangla, and a lack of high-quality data is a major contributing factor. This article proposes BanglaSarc, a dataset constructed specifically for sarcasm detection in Bangla textual data. This dataset contains 5,112 comments, statuses, and contents collected from various online social platforms such as Facebook and YouTube, along with a few online blogs. Given the limited amount of categorized Bengali comment data available, this dataset will aid the study of identifying sarcasm, recognizing people's emotions, detecting various types of Bengali expressions, and other domains. The dataset is publicly available at https://www.kaggle.com/datasets/sakibapon/banglasarc.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 15:28:21 GMT" } ]
2022-09-28T00:00:00
[ [ "Apon", "Tasnim Sakib", "" ], [ "Anan", "Ramisa", "" ], [ "Modhu", "Elizabeth Antora", "" ], [ "Suter", "Arjun", "" ], [ "Sneha", "Ifrit Jamal", "" ], [ "Alam", "MD. Golam Rabiul", "" ] ]
new_dataset
0.999882
2209.13479
Yee Yang Tee
Yee-Yang Tee, Deruo Cheng, Chye-Soon Chee, Tong Lin, Yiqiong Shi, Bah-Hwee Gwee
Unsupervised Domain Adaptation with Histogram-gated Image Translation for Delayered IC Image Analysis
7 pages, 4 figures, To be presented at IEEE PAINE 2022 (oral)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning has achieved great success in the challenging circuit annotation task by employing Convolutional Neural Networks (CNN) for the segmentation of circuit structures. The deep learning approaches require a large amount of manually annotated training data to achieve a good performance, which could cause a degradation in performance if a deep learning model trained on a given dataset is applied to a different dataset. This is commonly known as the domain shift problem for circuit annotation, which stems from the possibly large variations in distribution across different image datasets. The different image datasets could be obtained from different devices or different layers within a single device. To address the domain shift problem, we propose Histogram-gated Image Translation (HGIT), an unsupervised domain adaptation framework which transforms images from a given source dataset to the domain of a target dataset, and utilize the transformed images for training a segmentation network. Specifically, our HGIT performs generative adversarial network (GAN)-based image translation and utilizes histogram statistics for data curation. Experiments were conducted on a single labeled source dataset adapted to three different target datasets (without labels for training) and the segmentation performance was evaluated for each target dataset. We have demonstrated that our method achieves the best performance compared to the reported domain adaptation techniques, and is also reasonably close to the fully supervised benchmark.
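The histogram-based curation step might look roughly like the following sketch, which keeps GAN-translated images whose intensity histograms are close to the target-domain average. The L1 distance and threshold here are illustrative assumptions, not HGIT's exact gating rule.

```python
# Hedged sketch of histogram-gated data curation (illustration only).
import numpy as np

def histogram(img: np.ndarray, bins: int = 64) -> np.ndarray:
    # Assumes images normalized to [0, 1].
    hist, _ = np.histogram(img, bins=bins, range=(0.0, 1.0), density=True)
    return hist / (hist.sum() + 1e-12)

def gate_translated_images(translated, target_reference, threshold=0.15):
    """Keep translated images whose histogram is close (L1 distance)
    to the mean histogram of the target-domain reference images."""
    ref_hist = np.mean([histogram(t) for t in target_reference], axis=0)
    kept = [img for img in translated
            if np.abs(histogram(img) - ref_hist).sum() < threshold]
    return kept
```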
[ { "version": "v1", "created": "Tue, 27 Sep 2022 15:53:22 GMT" } ]
2022-09-28T00:00:00
[ [ "Tee", "Yee-Yang", "" ], [ "Cheng", "Deruo", "" ], [ "Chee", "Chye-Soon", "" ], [ "Lin", "Tong", "" ], [ "Shi", "Yiqiong", "" ], [ "Gwee", "Bah-Hwee", "" ] ]
new_dataset
0.954861
2209.13509
Jair Augusto Bottega
Jair A. Bottega, Victor A. Kich, Alisson H. Kolling, Jardel D. S. Dyonisio, Pedro L. Cor\c{c}aque, Rodrigo da S. Guerra, Daniel F. T. Gamarra
Jubileo: An Open-Source Robot and Framework for Research in Human-Robot Social Interaction
IEEE Humanoids 2022 (Accepted)
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Human-robot interaction (HRI) is essential to the widespread use of robots in daily life. Robots will eventually be able to carry out a variety of duties in human society through effective social interaction. Creating straightforward and understandable interfaces to engage with robots as they start to proliferate in the personal workspace is essential. Typically, interactions with simulated robots are displayed on screens. A more appealing alternative is virtual reality (VR), which gives visual cues more like those seen in the real world. In this study, we introduce Jubileo, a robotic animatronic face with various tools for research and application development in the field of human-robot social interaction. The Jubileo project offers more than just a fully functional open-source physical robot; it also provides a comprehensive framework to operate with a VR interface, enabling an immersive environment for HRI application tests and noticeably better deployment speed.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 16:24:39 GMT" } ]
2022-09-28T00:00:00
[ [ "Bottega", "Jair A.", "" ], [ "Kich", "Victor A.", "" ], [ "Kolling", "Alisson H.", "" ], [ "Dyonisio", "Jardel D. S.", "" ], [ "Corçaque", "Pedro L.", "" ], [ "Guerra", "Rodrigo da S.", "" ], [ "Gamarra", "Daniel F. T.", "" ] ]
new_dataset
0.999563
2209.13538
J. Miguel Diaz-Ba\~nez
Jos\'e-Miguel D\'iaz-B\'a\~nez
Mathematics and Flamenco: An Unexpected Partnership
null
null
null
null
cs.CG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we present a series of mathematical problems which throw interesting light on flamenco music. More specifically, these are problems in discrete and computational mathematics suggested by an analytical (not compositional) examination of flamenco ``cante'' (singing). As a consequence, since the problems are taken from a culturally specific context, the examples can make mathematics education more effective.
[ { "version": "v1", "created": "Tue, 27 Sep 2022 16:50:33 GMT" } ]
2022-09-28T00:00:00
[ [ "Díaz-Báñez", "José-Miguel", "" ] ]
new_dataset
0.998957
2209.13542
Raju Gottumukkala
Majid Hosseini, Fahad Sohrab, Raju Gottumukkala, Ravi Teja Bhupatiraju, Satya Katragadda, Jenni Raitoharju, Alexandros Iosifidis, Moncef Gabbouj
EmpathicSchool: A multimodal dataset for real-time facial expressions and physiological data analysis under different stress conditions
null
null
null
null
cs.MM eess.SP
http://creativecommons.org/licenses/by-nc-nd/4.0/
Affective computing has garnered researchers' attention and interest in recent years as there is a need for AI systems to better understand and react to human emotions. However, analyzing human emotions, such as mood or stress, is quite complex. While various stress studies use facial expressions and wearables, most existing datasets rely on processing data from a single modality. This paper presents EmpathicSchool, a novel dataset that captures facial expressions and the associated physiological signals, such as heart rate, electrodermal activity, and skin temperature, under different stress levels. The data were collected from 20 participants across multiple sessions, totaling 26 hours. The data include nine different signal types, including both computer vision and physiological features that can be used to detect stress. In addition, various experiments were conducted to validate the signal quality.
[ { "version": "v1", "created": "Mon, 29 Aug 2022 22:19:18 GMT" } ]
2022-09-28T00:00:00
[ [ "Hosseini", "Majid", "" ], [ "Sohrab", "Fahad", "" ], [ "Gottumukkala", "Raju", "" ], [ "Bhupatiraju", "Ravi Teja", "" ], [ "Katragadda", "Satya", "" ], [ "Raitoharju", "Jenni", "" ], [ "Iosifidis", "Alexandros", "" ], [ "Gabbouj", "Moncef", "" ] ]
new_dataset
0.999815
1901.09089
Adithya Murali
Adithya Murali, Lucas Pe\~na, Christof L\"oding, P. Madhusudan
A First-Order Logic with Frames
This manuscript is an extended and revised version of the publication with the same title that appeared at ESOP 2022 (https://doi.org/10.1007/978-3-030-44914-8_19). It is currently under review
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel logic, called Frame Logic (FL), that extends first-order logic (with recursive definitions) using a construct Sp(.) that captures the implicit supports of formulas -- the precise subset of the universe upon which their meaning depends. Using such supports, we formulate proof rules that facilitate frame reasoning elegantly when the underlying model undergoes change. We show that the logic is expressive by capturing several data-structures and also exhibit a translation from a precise fragment of separation logic to frame logic. Finally, we design a program logic based on frame logic for reasoning with programs that dynamically update heaps that facilitates local specifications and frame reasoning. This program logic consists of both localized proof rules as well as rules that derive the weakest tightest preconditions in FL.
[ { "version": "v1", "created": "Fri, 25 Jan 2019 21:33:21 GMT" }, { "version": "v2", "created": "Mon, 22 Jul 2019 16:15:42 GMT" }, { "version": "v3", "created": "Tue, 25 Feb 2020 01:50:51 GMT" }, { "version": "v4", "created": "Mon, 26 Sep 2022 16:54:26 GMT" } ]
2022-09-27T00:00:00
[ [ "Murali", "Adithya", "" ], [ "Peña", "Lucas", "" ], [ "Löding", "Christof", "" ], [ "Madhusudan", "P.", "" ] ]
new_dataset
0.999182
2010.04108
Sebastian Wild
Konstantinos Tsakalidis, Sebastian Wild, Viktor Zamaraev
Succinct Permutation Graphs
updated to match final Algorithmica version
null
10.1007/s00453-022-01039-2
null
cs.DS cs.DM
http://creativecommons.org/licenses/by/4.0/
We present a succinct data structure for permutation graphs, and their superclass of circular permutation graphs, i.e., data structures using optimal space up to lower order terms. Unlike concurrent work on circle graphs (Acan et al. 2022), our data structure also supports distance and shortest-path queries, as well as adjacency and neighborhood queries, all in optimal time. We present in particular the first succinct exact distance oracle for (circular) permutation graphs. A second succinct data structure also supports degree queries in time independent of the neighborhood's size at the expense of an $O(\log n/\log \log n)$-factor overhead in all running times. Furthermore, we develop a succinct data structure for the class of bipartite permutation graphs. We demonstrate how to run algorithms directly over our succinct representations for several problems on permutation graphs: Clique, Coloring, Independent Set, Hamiltonian Cycle, All-Pair Shortest Paths, and others. Finally, we initiate the study of semi-distributed graph representations; a concept that smoothly interpolates between distributed (labeling schemes) and centralized (standard data structures). We show how to turn some of our data structures into semi-distributed representations by storing only $O(n)$ bits of additional global information, circumventing the lower bound on distance labeling schemes for permutation graphs.
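For background, a permutation graph is fully determined by its permutation: vertices i < j are adjacent exactly when pi(i) > pi(j), i.e., when (i, j) is an inversion. This is why a representation storing essentially just the permutation can be space-optimal. The toy class below (not the paper's data structure) answers adjacency queries from the permutation alone.

```python
# Background sketch: standard permutation-graph definition, not the
# succinct structure from the paper.
class PermutationGraph:
    def __init__(self, pi):
        self.pi = pi  # pi[v] = position of vertex v on the second line

    def adjacent(self, i: int, j: int) -> bool:
        if i == j:
            return False
        i, j = min(i, j), max(i, j)
        return self.pi[i] > self.pi[j]  # adjacency <=> inversion

    def neighbors(self, v: int):
        return [u for u in range(len(self.pi)) if self.adjacent(u, v)]

g = PermutationGraph([2, 0, 3, 1])
print(g.adjacent(0, 1), g.neighbors(2))  # True [3]
```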
[ { "version": "v1", "created": "Thu, 8 Oct 2020 16:47:10 GMT" }, { "version": "v2", "created": "Sat, 24 Sep 2022 17:02:26 GMT" } ]
2022-09-27T00:00:00
[ [ "Tsakalidis", "Konstantinos", "" ], [ "Wild", "Sebastian", "" ], [ "Zamaraev", "Viktor", "" ] ]
new_dataset
0.985854
2011.14497
Kavisha Vidanapathirana
Kavisha Vidanapathirana, Peyman Moghadam, Ben Harwood, Muming Zhao, Sridha Sridharan, Clinton Fookes
Locus: LiDAR-based Place Recognition using Spatiotemporal Higher-Order Pooling
ICRA 2021. Implementation available at: https://github.com/csiro-robotics/locus
null
10.1109/ICRA48506.2021.9560915
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Place Recognition enables the estimation of a globally consistent map and trajectory by providing non-local constraints in Simultaneous Localisation and Mapping (SLAM). This paper presents Locus, a novel place recognition method using 3D LiDAR point clouds in large-scale environments. We propose a method for extracting and encoding topological and temporal information related to components in a scene and demonstrate how the inclusion of this auxiliary information in place description leads to more robust and discriminative scene representations. Second-order pooling along with a non-linear transform is used to aggregate these multi-level features to generate a fixed-length global descriptor, which is invariant to the permutation of input features. The proposed method outperforms state-of-the-art methods on the KITTI dataset. Furthermore, Locus is demonstrated to be robust across several challenging situations such as occlusions and viewpoint changes in 3D LiDAR point clouds. The open-source implementation is available at: https://github.com/csiro-robotics/locus .
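A small sketch of the aggregation step described above, second-order pooling with a non-linear transform: the Gram matrix of the local features is computed, power-normalized, and flattened into a fixed-length, permutation-invariant descriptor. The specific normalizations are common practice and assumed here, not necessarily Locus's exact choices.

```python
# Hedged sketch of second-order pooling into a fixed-length descriptor.
import numpy as np

def second_order_pool(features: np.ndarray) -> np.ndarray:
    """features: (num_components, dim). Returns a fixed-length descriptor
    invariant to the order of the input rows."""
    gram = features.T @ features / features.shape[0]   # (dim, dim)
    gram = np.sign(gram) * np.sqrt(np.abs(gram))       # power normalization
    desc = gram.flatten()
    return desc / (np.linalg.norm(desc) + 1e-12)       # L2 normalization

d1 = second_order_pool(np.random.randn(100, 32))
print(d1.shape)  # (1024,) regardless of the number of input components
```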
[ { "version": "v1", "created": "Mon, 30 Nov 2020 01:45:55 GMT" }, { "version": "v2", "created": "Fri, 26 Mar 2021 06:49:52 GMT" }, { "version": "v3", "created": "Thu, 8 Apr 2021 00:09:27 GMT" } ]
2022-09-27T00:00:00
[ [ "Vidanapathirana", "Kavisha", "" ], [ "Moghadam", "Peyman", "" ], [ "Harwood", "Ben", "" ], [ "Zhao", "Muming", "" ], [ "Sridharan", "Sridha", "" ], [ "Fookes", "Clinton", "" ] ]
new_dataset
0.998605
2109.00648
Natalia Tomashenko
Natalia Tomashenko, Xin Wang, Emmanuel Vincent, Jose Patino, Brij Mohan Lal Srivastava, Paul-Gauthier No\'e, Andreas Nautsch, Nicholas Evans, Junichi Yamagishi, Benjamin O'Brien, Ana\"is Chanclu, Jean-Fran\c{c}ois Bonastre, Massimiliano Todisco, Mohamed Maouche
The VoicePrivacy 2020 Challenge: Results and findings
Submitted to the Special Issue on Voice Privacy (Computer Speech and Language Journal - Elsevier); under review
null
10.1016/j.csl.2022.101362
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
This paper presents the results and analyses stemming from the first VoicePrivacy 2020 Challenge which focuses on developing anonymization solutions for speech technology. We provide a systematic overview of the challenge design with an analysis of submitted systems and evaluation results. In particular, we describe the voice anonymization task and datasets used for system development and evaluation. Also, we present different attack models and the associated objective and subjective evaluation metrics. We introduce two anonymization baselines and provide a summary description of the anonymization systems developed by the challenge participants. We report objective and subjective evaluation results for baseline and submitted systems. In addition, we present experimental results for alternative privacy metrics and attack models developed as a part of the post-evaluation analysis. Finally, we summarize our insights and observations that will influence the design of the next VoicePrivacy challenge edition and some directions for future voice anonymization research.
[ { "version": "v1", "created": "Wed, 1 Sep 2021 23:40:38 GMT" }, { "version": "v2", "created": "Wed, 13 Oct 2021 21:05:51 GMT" }, { "version": "v3", "created": "Thu, 18 Nov 2021 07:47:29 GMT" }, { "version": "v4", "created": "Mon, 26 Sep 2022 05:52:52 GMT" } ]
2022-09-27T00:00:00
[ [ "Tomashenko", "Natalia", "" ], [ "Wang", "Xin", "" ], [ "Vincent", "Emmanuel", "" ], [ "Patino", "Jose", "" ], [ "Srivastava", "Brij Mohan Lal", "" ], [ "Noé", "Paul-Gauthier", "" ], [ "Nautsch", "Andreas", "" ], [ "Evans", "Nicholas", "" ], [ "Yamagishi", "Junichi", "" ], [ "O'Brien", "Benjamin", "" ], [ "Chanclu", "Anaïs", "" ], [ "Bonastre", "Jean-François", "" ], [ "Todisco", "Massimiliano", "" ], [ "Maouche", "Mohamed", "" ] ]
new_dataset
0.987837
2109.06550
Xiyuan Liu
Xiyuan Liu, Chongjian Yuan, Fu Zhang
Targetless Extrinsic Calibration of Multiple Small FoV LiDARs and Cameras using Adaptive Voxelization
12 pages, 15 figures
null
10.1109/TIM.2022.3176889
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Determining the extrinsic parameters between multiple LiDARs and cameras is essential for autonomous robots, especially for solid-state LiDARs, where each LiDAR unit has a very small Field-of-View (FoV), and multiple units are often used collectively. The majority of extrinsic calibration methods are proposed for 360$^\circ$ mechanical spinning LiDARs where the FoV overlap with other LiDAR or camera sensors is assumed. Few research works have focused on the calibration of small FoV LiDARs and cameras, or on improving the calibration speed. In this work, we consider the problem of extrinsic calibration among small FoV LiDARs and cameras, with the aim of shortening the total calibration time and further improving the calibration precision. We first implement an adaptive voxelization technique in the extraction and matching of LiDAR feature points. Such a process could avoid the redundant creation of $k$-d trees in LiDAR extrinsic calibration and extract LiDAR feature points in a more reliable and fast manner than existing methods. We then formulate the multiple LiDAR extrinsic calibration into a LiDAR Bundle Adjustment (BA) problem. By deriving the cost function up to second order, the solving time and precision of the non-linear least square problem are further boosted. Our proposed method has been verified on data collected in four targetless scenes and under two types of solid-state LiDARs with completely different scanning patterns, densities, and FoVs. The robustness of our work has also been validated under eight initial setups, with each setup containing 100 independent trials. Compared with the state-of-the-art methods, our work has increased the calibration speed by a factor of 15 for LiDAR-LiDAR extrinsic calibration and by a factor of 1.5 for LiDAR-Camera extrinsic calibration (averaged result from 50 independent trials) while remaining accurate.
[ { "version": "v1", "created": "Tue, 14 Sep 2021 09:45:56 GMT" }, { "version": "v2", "created": "Fri, 4 Feb 2022 13:28:37 GMT" }, { "version": "v3", "created": "Sat, 24 Sep 2022 07:07:51 GMT" } ]
2022-09-27T00:00:00
[ [ "Liu", "Xiyuan", "" ], [ "Yuan", "Chongjian", "" ], [ "Zhang", "Fu", "" ] ]
new_dataset
0.954143
2109.08336
Kavisha Vidanapathirana
Kavisha Vidanapathirana, Milad Ramezani, Peyman Moghadam, Sridha Sridharan, Clinton Fookes
LoGG3D-Net: Locally Guided Global Descriptor Learning for 3D Place Recognition
Accepted - ICRA 2022
null
10.1109/ICRA46639.2022.9811753
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Retrieval-based place recognition is an efficient and effective solution for re-localization within a pre-built map, or global data association for Simultaneous Localization and Mapping (SLAM). The accuracy of such an approach is heavily dependent on the quality of the extracted scene-level representation. While end-to-end solutions - which learn a global descriptor from input point clouds - have demonstrated promising results, such approaches are limited in their ability to enforce desirable properties at the local feature level. In this paper, we introduce a local consistency loss to guide the network towards learning local features which are consistent across revisits, hence leading to more repeatable global descriptors resulting in an overall improvement in 3D place recognition performance. We formulate our approach in an end-to-end trainable architecture called LoGG3D-Net. Experiments on two large-scale public benchmarks (KITTI and MulRan) show that our method achieves mean $F1_{max}$ scores of $0.939$ and $0.968$ on KITTI and MulRan respectively, achieving state-of-the-art performance while operating in near real-time. The open-source implementation is available at: https://github.com/csiro-robotics/LoGG3D-Net.
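One plausible form of a local consistency loss is an InfoNCE objective over point correspondences between two revisits, as sketched below in PyTorch. The loss form and temperature are illustrative assumptions, not LoGG3D-Net's exact formulation.

```python
# Hedged sketch: pull corresponding local features from two revisits
# together, push non-corresponding ones apart.
import torch
import torch.nn.functional as F

def local_consistency_loss(feats_a, feats_b, tau: float = 0.07):
    """feats_a, feats_b: (num_points, dim); row i of each is a
    corresponding pair, e.g. found via pose-based point matching."""
    a = F.normalize(feats_a, dim=1)
    b = F.normalize(feats_b, dim=1)
    logits = a @ b.t() / tau                 # (n, n) similarity matrix
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)   # InfoNCE over correspondences

loss = local_consistency_loss(torch.randn(256, 16), torch.randn(256, 16))
```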
[ { "version": "v1", "created": "Fri, 17 Sep 2021 03:32:43 GMT" }, { "version": "v2", "created": "Wed, 22 Sep 2021 06:49:35 GMT" }, { "version": "v3", "created": "Thu, 17 Feb 2022 04:33:16 GMT" } ]
2022-09-27T00:00:00
[ [ "Vidanapathirana", "Kavisha", "" ], [ "Ramezani", "Milad", "" ], [ "Moghadam", "Peyman", "" ], [ "Sridharan", "Sridha", "" ], [ "Fookes", "Clinton", "" ] ]
new_dataset
0.971546
2110.05472
Shubham Goel
Shubham Goel, Georgia Gkioxari, Jitendra Malik
Differentiable Stereopsis: Meshes from multiple views using differentiable rendering
In CVPR2022. Project webpage: https://shubham-goel.github.io/ds/
In CVPR 2022 (pp. 8635-8644)
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose Differentiable Stereopsis, a multi-view stereo approach that reconstructs shape and texture from few input views and noisy cameras. We pair traditional stereopsis and modern differentiable rendering to build an end-to-end model which predicts textured 3D meshes of objects with varying topologies and shape. We frame stereopsis as an optimization problem and simultaneously update shape and cameras via simple gradient descent. We run an extensive quantitative analysis and compare to traditional multi-view stereo techniques and state-of-the-art learning based methods. We show compelling reconstructions on challenging real-world scenes and for an abundance of object types with complex shape, topology and texture. Project webpage: https://shubham-goel.github.io/ds/
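The "update shape and cameras via simple gradient descent" framing can be conveyed with a schematic PyTorch loop; render() below stands in for a differentiable renderer and is hypothetical, as are the argument names. This is not the authors' code.

```python
# Schematic sketch: jointly optimize mesh vertices and noisy cameras
# against a photometric loss through a differentiable renderer.
import torch

def optimize(verts, faces, textures, cams, images, render, steps=1000):
    verts = verts.clone().requires_grad_(True)
    cams = cams.clone().requires_grad_(True)   # noisy cameras refined too
    opt = torch.optim.Adam([verts, cams], lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        rendered = render(verts, faces, textures, cams)  # (views, H, W, 3)
        loss = (rendered - images).abs().mean()          # photometric loss
        loss.backward()                                  # gradients flow to
        opt.step()                                       # shape AND cameras
    return verts.detach(), cams.detach()
```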
[ { "version": "v1", "created": "Mon, 11 Oct 2021 17:59:40 GMT" }, { "version": "v2", "created": "Fri, 23 Sep 2022 18:46:52 GMT" } ]
2022-09-27T00:00:00
[ [ "Goel", "Shubham", "" ], [ "Gkioxari", "Georgia", "" ], [ "Malik", "Jitendra", "" ] ]
new_dataset
0.988789
2110.12329
Doina Bucur
Doina Bucur
The network signature of constellation line figures
Data repository: https://github.com/doinab/constellation-lines (in progress)
PLOS ONE 17(7): e0272270 (2022)
10.1371/journal.pone.0272270
null
cs.SI cs.LG physics.hist-ph
http://creativecommons.org/licenses/by/4.0/
In traditional astronomies across the world, groups of stars in the night sky were linked into constellations -- symbolic representations rich in meaning and with practical roles. In some sky cultures, constellations are represented as line (or connect-the-dot) figures, which are spatial networks drawn over the fixed background of stars. We analyse 1802 line figures from 56 sky cultures spanning all continents, in terms of their network, spatial, and brightness features, and ask what associations exist between these visual features and culture type or sky region. First, an embedded map of constellations is learnt, to show clusters of line figures. We then form the network of constellations (as linked by their similarity), to study how similar cultures are by computing their assortativity (or homophily) over the network. Finally, we measure the diversity (or entropy) index for the set of constellations drawn per sky region. Our results show distinct types of line figures, and that many folk astronomies with oral traditions have widespread similarities in constellation design, which do not align with cultural ancestry. In a minority of sky regions, certain line designs appear universal, but this is not the norm: in the majority of sky regions, the line geometries are diverse.
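The diversity (entropy) index mentioned above can be illustrated in a few lines of Python; the figure-type labels are made up for the example.

```python
# Shannon diversity index over the line-figure types drawn for one sky
# region (labels are illustrative, not the paper's taxonomy).
import math
from collections import Counter

def diversity_index(figure_types):
    counts = Counter(figure_types)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Three cultures draw one design for a region, two draw another:
print(diversity_index(["chain", "chain", "chain", "polygon", "polygon"]))
```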
[ { "version": "v1", "created": "Tue, 19 Oct 2021 16:11:53 GMT" }, { "version": "v2", "created": "Sat, 13 Nov 2021 20:41:57 GMT" }, { "version": "v3", "created": "Thu, 28 Jul 2022 20:13:53 GMT" }, { "version": "v4", "created": "Mon, 26 Sep 2022 10:15:47 GMT" } ]
2022-09-27T00:00:00
[ [ "Bucur", "Doina", "" ] ]
new_dataset
0.987849
2112.02866
Tommaso Cesari
Nicol\`o Cesa-Bianchi, Tommaso Cesari, Roberto Colomboni, Claudio Gentile, Yishay Mansour
Nonstochastic Bandits with Composite Anonymous Feedback
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate a nonstochastic bandit setting in which the loss of an action is not immediately charged to the player, but rather spread over the subsequent rounds in an adversarial way. The instantaneous loss observed by the player at the end of each round is then a sum of many loss components of previously played actions. This setting encompasses as a special case the easier task of bandits with delayed feedback, a well-studied framework where the player observes the delayed losses individually. Our first contribution is a general reduction transforming a standard bandit algorithm into one that can operate in the harder setting: We bound the regret of the transformed algorithm in terms of the stability and regret of the original algorithm. Then, we show that the transformation of a suitably tuned FTRL with Tsallis entropy has a regret of order $\sqrt{(d+1)KT}$, where $d$ is the maximum delay, $K$ is the number of arms, and $T$ is the time horizon. Finally, we show that our results cannot be improved in general by exhibiting a matching (up to a log factor) lower bound on the regret of any algorithm operating in this setting.
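A toy simulator clarifies the feedback model: the loss of the action played at round t is split across rounds t through t+d, and the player observes only the per-round aggregate, never the individual components. The uniform split below is purely illustrative; in the paper the spreading is adversarial.

```python
# Toy simulator of composite anonymous feedback (illustration only).
import random

def play(policy, loss_fn, T: int, d: int):
    pending = [0.0] * (T + d + 1)   # pending[t] = loss charged at round t
    observed = []
    for t in range(T):
        arm = policy(t)
        total = loss_fn(t, arm)      # full loss of this action
        for s in range(d + 1):       # spread over rounds t, ..., t+d
            pending[t + s] += total / (d + 1)
        observed.append(pending[t])  # anonymous sum over past actions
    return observed

obs = play(lambda t: t % 3, lambda t, a: random.random(), T=10, d=2)
```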
[ { "version": "v1", "created": "Mon, 6 Dec 2021 08:44:04 GMT" }, { "version": "v2", "created": "Sat, 24 Sep 2022 11:55:22 GMT" } ]
2022-09-27T00:00:00
[ [ "Cesa-Bianchi", "Nicolò", "" ], [ "Cesari", "Tommaso", "" ], [ "Colomboni", "Roberto", "" ], [ "Gentile", "Claudio", "" ], [ "Mansour", "Yishay", "" ] ]
new_dataset
0.975654
2202.07503
Guanchu Wang
Guanchu Wang and Zaid Pervaiz Bhat and Zhimeng Jiang and Yi-Wei Chen and Daochen Zha and Alfredo Costilla Reyes and Afshin Niktash and Gorkem Ulkar and Erman Okman and Xuanting Cai and Xia Hu
BED: A Real-Time Object Detection System for Edge Devices
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deploying deep neural networks (DNNs) on edge devices provides efficient and effective solutions for real-world tasks. Edge devices have been used for collecting a large volume of data efficiently in different domains. DNNs have been an effective tool for data processing and analysis. However, designing DNNs on edge devices is challenging due to the limited computational resources and memory. To tackle this challenge, we demonstrate Object Detection System for Edge Devices (BED) on the MAX78000 DNN accelerator. It integrates on-device DNN inference with a camera and an LCD display for image acquisition and detection exhibition, respectively. BED is a concise, effective and detailed solution, including model training, quantization, synthesis and deployment. The entire repository is open-sourced on Github, including a Graphical User Interface (GUI) for on-chip debugging. Experiment results indicate that BED can produce accurate detection with a 300-KB tiny DNN model, which takes only 91.9 ms of inference time and 1.845 mJ of energy. The real-time detection is available on YouTube.
[ { "version": "v1", "created": "Mon, 14 Feb 2022 18:24:20 GMT" }, { "version": "v2", "created": "Fri, 17 Jun 2022 03:32:04 GMT" }, { "version": "v3", "created": "Sun, 14 Aug 2022 16:00:25 GMT" }, { "version": "v4", "created": "Sun, 25 Sep 2022 20:21:48 GMT" } ]
2022-09-27T00:00:00
[ [ "Wang", "Guanchu", "" ], [ "Bhat", "Zaid Pervaiz", "" ], [ "Jiang", "Zhimeng", "" ], [ "Chen", "Yi-Wei", "" ], [ "Zha", "Daochen", "" ], [ "Reyes", "Alfredo Costilla", "" ], [ "Niktash", "Afshin", "" ], [ "Ulkar", "Gorkem", "" ], [ "Okman", "Erman", "" ], [ "Cai", "Xuanting", "" ], [ "Hu", "Xia", "" ] ]
new_dataset
0.992447
2203.01414
Chaolin Rao
Chaolin Rao, Huangjie Yu, Haochuan Wan, Jindong Zhou, Yueyang Zheng, Yu Ma, Anpei Chen, Minye Wu, Binzhe Yuan, Pingqiang Zhou, Xin Lou and Jingyi Yu
ICARUS: A Specialized Architecture for Neural Radiance Fields Rendering
null
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The practical deployment of Neural Radiance Fields (NeRF) in rendering applications faces several challenges, with the most critical one being low rendering speed on even high-end graphics processing units (GPUs). In this paper, we present ICARUS, a specialized accelerator architecture tailored for NeRF rendering. Unlike GPUs using general purpose computing and memory architectures for NeRF, ICARUS executes the complete NeRF pipeline using dedicated plenoptic cores (PLCore) consisting of a positional encoding unit (PEU), a multi-layer perceptron (MLP) engine, and a volume rendering unit (VRU). A PLCore takes in positions and directions and renders the corresponding pixel colors without any intermediate data going off-chip for temporary storage and exchange, which can be time- and power-consuming. To implement the most expensive component of NeRF, i.e., the MLP, we transform the fully connected operations to approximated reconfigurable multiple constant multiplications (MCMs), where common subexpressions are shared across different multiplications to improve the computation efficiency. We build a prototype ICARUS using Synopsys HAPS-80 S104, a field programmable gate array (FPGA)-based prototyping system for large-scale integrated circuits and systems design. We evaluate the power-performance-area (PPA) of a PLCore using 40nm LP CMOS technology. Working at 400 MHz, a single PLCore occupies 16.5 $mm^2$ and consumes 282.8 mW, translating to 0.105 uJ/sample. The results are compared with those of GPU and tensor processing unit (TPU) implementations.
[ { "version": "v1", "created": "Tue, 1 Mar 2022 03:24:28 GMT" }, { "version": "v2", "created": "Fri, 27 May 2022 10:36:14 GMT" }, { "version": "v3", "created": "Mon, 26 Sep 2022 08:35:22 GMT" } ]
2022-09-27T00:00:00
[ [ "Rao", "Chaolin", "" ], [ "Yu", "Huangjie", "" ], [ "Wan", "Haochuan", "" ], [ "Zhou", "Jindong", "" ], [ "Zheng", "Yueyang", "" ], [ "Ma", "Yu", "" ], [ "Chen", "Anpei", "" ], [ "Wu", "Minye", "" ], [ "Yuan", "Binzhe", "" ], [ "Zhou", "Pingqiang", "" ], [ "Lou", "Xin", "" ], [ "Yu", "Jingyi", "" ] ]
new_dataset
0.991027
2203.03183
Zehao Wang
Zehao Wang, Mingxiao Li, Minye Wu, Marie-Francine Moens, Tinne Tuytelaars
Find a Way Forward: a Language-Guided Semantic Map Navigator
content revised
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we introduce the map-language navigation task where an agent executes natural language instructions and moves to the target position based only on a given 3D semantic map. To tackle the task, we design the instruction-aware Path Proposal and Discrimination model (iPPD). Our approach leverages map information to provide instruction-aware path proposals, i.e., it selects all potential instruction-aligned candidate paths to reduce the solution space. Next, to represent the map observations along a path for better modality alignment, a novel Path Feature Encoding scheme tailored for semantic maps is proposed. An attention-based Language Driven Discriminator is designed to evaluate path candidates and determine the best path as the final result. Our method can naturally avoid error accumulation compared with single-step greedy decision methods. Compared to a single-step imitation learning approach, iPPD achieves performance gains above 17% on navigation success and 0.18 on the path matching measure nDTW in challenging unseen environments.
[ { "version": "v1", "created": "Mon, 7 Mar 2022 07:40:33 GMT" }, { "version": "v2", "created": "Mon, 26 Sep 2022 06:31:47 GMT" } ]
2022-09-27T00:00:00
[ [ "Wang", "Zehao", "" ], [ "Li", "Mingxiao", "" ], [ "Wu", "Minye", "" ], [ "Moens", "Marie-Francine", "" ], [ "Tuytelaars", "Tinne", "" ] ]
new_dataset
0.991284
2204.10455
Marisa Kirisame
Marisa Kirisame, Pranav Shenoy, Pavel Panchekha
Optimal Heap Limits for Reducing Browser Memory Use
null
null
null
null
cs.PL cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Garbage-collected language runtimes carefully tune heap limits to reduce garbage collection time and memory usage. However, there's a trade-off: a lower heap limit reduces memory use but increases garbage collection time. Classic methods for setting heap limits include manually tuned heap limits and multiple-of-live-size rules of thumb, but it is not clear when one rule is better than another or how to compare them. We address this problem with a new framework where heap limits are set for multiple heaps at once. Our key insight is that every heap limit rule induces a particular allocation of memory across multiple processes, and this allocation can be sub-optimal. We use our framework to derive an optimal "square-root" heap limit rule, which minimizes total memory usage for any amount of total garbage collection time. Paradoxically, the square-root heap limit rule achieves this coordination without communication: it allocates memory optimally across multiple heaps without requiring any communication between those heaps. To demonstrate that this heap limit rule is effective, we prototype it for V8, the JavaScript runtime used in Google Chrome, Microsoft Edge, and other browsers, as well as in server-side frameworks like node.js and Deno. On real-world web pages, our prototype achieves reductions of approximately 16.0% of memory usage while keeping garbage collection time constant. On memory-intensive benchmarks, reductions of up to 30.0% of garbage collection time are possible with no change in total memory usage.
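A toy instantiation of the contrast between the classic multiple-of-live-size rule and a square-root-shaped rule, in which extra heap grows sublinearly with live size. The exact form and tuning of the paper's rule involve garbage-collection cost and allocation-rate terms not modeled here; this sketch only conveys the shape.

```python
# Toy sketch (assumption: extra heap scales with sqrt(live size); the
# paper's actual rule is derived from GC time and allocation behavior).
import math

def sqrt_heap_limit(live_bytes: float, c: float) -> float:
    """Heap limit = live size + c * sqrt(live size); the tuning constant c
    trades total memory against total garbage-collection time."""
    return live_bytes + c * math.sqrt(live_bytes)

def multiplier_heap_limit(live_bytes: float, k: float = 2.0) -> float:
    """Classic multiple-of-live-size rule of thumb."""
    return k * live_bytes

for live in (1e6, 1e8, 1e10):
    print(f"{live:.0e}: sqrt rule {sqrt_heap_limit(live, 1000.0):.3e}, "
          f"2x rule {multiplier_heap_limit(live):.3e}")
```

Note how the relative headroom of the square-root rule shrinks as the live size grows, whereas the multiplier rule always reserves a fixed fraction of extra memory.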
[ { "version": "v1", "created": "Fri, 22 Apr 2022 01:26:48 GMT" }, { "version": "v2", "created": "Wed, 27 Apr 2022 09:27:34 GMT" }, { "version": "v3", "created": "Fri, 16 Sep 2022 16:59:32 GMT" }, { "version": "v4", "created": "Sun, 25 Sep 2022 06:22:19 GMT" } ]
2022-09-27T00:00:00
[ [ "Kirisame", "Marisa", "" ], [ "Shenoy", "Pranav", "" ], [ "Panchekha", "Pavel", "" ] ]
new_dataset
0.997016
2205.01052
Hans Wang
Gordon King and Hans Wang
HTTPA/2: a Trusted End-to-End Protocol for Web Services
24 pages, 6 figures
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of cloud computing and the Internet, the commercialized website becomes capable of providing more web services, such as software as a service (SaaS) or function as a service (FaaS), for great user experiences. Undoubtedly, web services have been thriving in popularity that will continue growing to serve modern human life. As expected, there came the ineluctable need for preserving privacy, enhancing security, and building trust. However, HTTPS alone cannot provide a remote attestation for building trust with web services, which remains lacking in trust. At the same time, cloud computing is actively adopting the use of TEEs and will demand a web-based protocol for remote attestation with ease of use. Here, we propose HTTPA/2 as an upgraded version of HTTP-Attestable (HTTPA) by augmenting existing HTTP to enable end-to-end trusted communication between endpoints at layer 7 (L7). HTTPA/2 allows for L7 message protection without relying on TLS. In practice, HTTPA/2 is designed to be compatible with the in-network processing of the modern cloud infrastructure, including L7 gateway, L7 load balancer, caching, etc. We envision that \acs{httpa}/2 will further enable trustworthy web services and trustworthy AI applications in the future, accelerating the transformation of the web-based digital world to be more trustworthy.
[ { "version": "v1", "created": "Mon, 2 May 2022 17:37:54 GMT" }, { "version": "v2", "created": "Fri, 20 May 2022 18:51:44 GMT" }, { "version": "v3", "created": "Wed, 15 Jun 2022 15:44:21 GMT" }, { "version": "v4", "created": "Fri, 17 Jun 2022 21:37:12 GMT" }, { "version": "v5", "created": "Sun, 25 Sep 2022 20:27:35 GMT" } ]
2022-09-27T00:00:00
[ [ "King", "Gordon", "" ], [ "Wang", "Hans", "" ] ]
new_dataset
0.997803
2206.13995
Hao Chen
Hao Chen
New MDS Entanglement-Assisted Quantum Codes from MDS Hermitian Self-Orthogonal Codes
18 pages, MDS quantum codes can be transformed to MDS Entanglement-assisted quantum codes with nonzero c parameters directly
null
null
null
cs.IT math.IT
http://creativecommons.org/publicdomain/zero/1.0/
The intersection ${\bf C}\bigcap {\bf C}^{\perp_H}$ of a linear code ${\bf C} \subset {\bf F}_{q^2}$ and its Hermitian dual ${\bf C}^{\perp_H}$ is called the Hermitian hull of this code. A linear code ${\bf C} \subset {\bf F}_{q^2}$ satisfying ${\bf C} \subset {\bf C}^{\perp_H}$ is called Hermitian self-orthogonal. Many Hermitian self-orthogonal codes have been given for the construction of MDS quantum error correction codes (QECCs). In this paper we prove that for a nonnegative integer $h$ satisfying $0 \leq h \leq k$, a linear Hermitian self-orthogonal $[n, k]_{q^2}$ code is equivalent to a linear code with an $h$-dimensional Hermitian hull. Therefore a lot of new MDS entanglement-assisted quantum error correction (EAQEC) codes can be constructed from previously known Hermitian self-orthogonal codes. In fact, our method shows that previously constructed quantum MDS codes from Hermitian self-orthogonal codes can be transformed to MDS entanglement-assisted quantum codes with nonzero consumption parameter $c$ directly. We prove that MDS EAQEC $[[n, k, d, c]]_q$ codes with nonzero $c$ parameters and $d\leq \frac{n+2}{2}$ exist for arbitrary length $n \leq q^2+1$. Moreover any QECC constructed from $k$-dimensional Hermitian self-orthogonal codes can be transformed to $k$ different EAQEC codes.
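For reference, the textbook definitions behind the notation used above (standard in the coding-theory literature, not specific to this paper):

```latex
% Hermitian inner product on F_{q^2}^n:
\langle \mathbf{x}, \mathbf{y} \rangle_H \;=\; \sum_{i=1}^{n} x_i \, y_i^{\,q}
% Hermitian dual and Hermitian hull of a code C \subseteq F_{q^2}^n:
\mathbf{C}^{\perp_H} \;=\; \{\, \mathbf{x} \;:\; \langle \mathbf{x}, \mathbf{y} \rangle_H = 0 \ \ \forall\, \mathbf{y} \in \mathbf{C} \,\},
\qquad
\mathrm{Hull}_H(\mathbf{C}) \;=\; \mathbf{C} \cap \mathbf{C}^{\perp_H}
```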
[ { "version": "v1", "created": "Tue, 28 Jun 2022 13:31:16 GMT" }, { "version": "v2", "created": "Wed, 29 Jun 2022 09:23:17 GMT" }, { "version": "v3", "created": "Sun, 3 Jul 2022 14:18:02 GMT" }, { "version": "v4", "created": "Fri, 29 Jul 2022 07:34:20 GMT" }, { "version": "v5", "created": "Mon, 26 Sep 2022 00:49:47 GMT" } ]
2022-09-27T00:00:00
[ [ "Chen", "Hao", "" ] ]
new_dataset
0.995643
2206.14764
Peng Liang
Liming Fu, Peng Liang, Zeeshan Rasheed, Zengyang Li, Amjed Tahir, Xiaofeng Han
Potential Technical Debt and Its Resolution in Code Reviews: An Exploratory Study of the OpenStack and Qt Communities
The 16th ACM/IEEE International Symposium on Empirical Software Engineering and Measurement (ESEM)
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Technical Debt (TD) refers to the situation where developers make trade-offs to achieve short-term goals at the expense of long-term code quality, which can have a negative impact on the quality of software systems. In the context of code review, such sub-optimal implementations have a chance to be resolved in a timely manner during the review process before the code is merged. Therefore, we could consider them as Potential Technical Debt (PTD), since PTD will evolve into TD when it is injected into software systems without being resolved. To date, little is known about the extent to which PTD is identified in code reviews. To this end, we conducted an exploratory study in an attempt to understand the nature of PTD in code reviews and track down the resolution of PTD after being identified. We randomly collected 2,030 review comments from the Nova project of OpenStack and the Qt Base project of Qt. We then manually checked these review comments, and obtained 163 PTD-related review comments for further analysis. Our results show that: (1) PTD can be identified in code reviews but is not prevalent. (2) Design, defect, documentation, requirement, test, and code PTD are identified in code reviews, of which code and documentation PTD are dominant. (3) 81.0% of the PTD identified in code reviews has been resolved by developers, and 78.0% of the resolved PTD was resolved by developers within a week. (4) Code refactoring is the main practice used by developers to resolve the PTD identified in code reviews. Our findings indicate that: (1) review-based detection of PTD is seen as one of the trustworthy mechanisms in development, and (2) there is still a significant proportion of PTD (19.0%) remaining unresolved when injected into the software systems. Practitioners and researchers should establish effective strategies to manage and resolve PTD in development.
[ { "version": "v1", "created": "Wed, 29 Jun 2022 16:53:46 GMT" }, { "version": "v2", "created": "Sat, 24 Sep 2022 05:44:52 GMT" } ]
2022-09-27T00:00:00
[ [ "Fu", "Liming", "" ], [ "Liang", "Peng", "" ], [ "Rasheed", "Zeeshan", "" ], [ "Li", "Zengyang", "" ], [ "Tahir", "Amjed", "" ], [ "Han", "Xiaofeng", "" ] ]
new_dataset
0.997521
2207.02202
Zhengzhong Tu
Runsheng Xu, Zhengzhong Tu, Hao Xiang, Wei Shao, Bolei Zhou, Jiaqi Ma
CoBEVT: Cooperative Bird's Eye View Semantic Segmentation with Sparse Transformers
CoRL 2022; code: https://github.com/DerrickXuNu/CoBEVT
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Bird's eye view (BEV) semantic segmentation plays a crucial role in spatial sensing for autonomous driving. Although recent literature has made significant progress on BEV map understanding, they are all based on single-agent camera-based systems. These solutions sometimes have difficulty handling occlusions or detecting distant objects in complex traffic scenes. Vehicle-to-Vehicle (V2V) communication technologies have enabled autonomous vehicles to share sensing information, dramatically improving the perception performance and range compared to single-agent systems. In this paper, we propose CoBEVT, the first generic multi-agent multi-camera perception framework that can cooperatively generate BEV map predictions. To efficiently fuse camera features from multi-view and multi-agent data in an underlying Transformer architecture, we design a fused axial attention module (FAX), which captures sparsely local and global spatial interactions across views and agents. The extensive experiments on the V2V perception dataset, OPV2V, demonstrate that CoBEVT achieves state-of-the-art performance for cooperative BEV semantic segmentation. Moreover, CoBEVT is shown to be generalizable to other tasks, including 1) BEV segmentation with single-agent multi-camera and 2) 3D object detection with multi-agent LiDAR systems, achieving state-of-the-art performance with real-time inference speed. The code is available at https://github.com/DerrickXuNu/CoBEVT.
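The sparsity pattern underlying axial-style attention can be sketched in PyTorch: attending along rows and then columns of a BEV grid costs O(HW(H+W)) rather than O((HW)^2). This is a generic axial-attention sketch, not FAX itself, which additionally mixes local windows, sparse global tokens, and cross-agent interactions.

```python
# Generic axial attention over a (B, H, W, C) feature grid.
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    def __init__(self, dim: int, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                      # x: (B, H, W, C)
        B, H, W, C = x.shape
        rows = x.reshape(B * H, W, C)          # attend within each row
        rows, _ = self.row_attn(rows, rows, rows)
        x = rows.reshape(B, H, W, C)
        cols = x.permute(0, 2, 1, 3).reshape(B * W, H, C)  # then each column
        cols, _ = self.col_attn(cols, cols, cols)
        return cols.reshape(B, W, H, C).permute(0, 2, 1, 3)

out = AxialAttention(dim=64)(torch.randn(2, 32, 32, 64))
```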
[ { "version": "v1", "created": "Tue, 5 Jul 2022 17:59:28 GMT" }, { "version": "v2", "created": "Sun, 25 Sep 2022 07:19:32 GMT" } ]
2022-09-27T00:00:00
[ [ "Xu", "Runsheng", "" ], [ "Tu", "Zhengzhong", "" ], [ "Xiang", "Hao", "" ], [ "Shao", "Wei", "" ], [ "Zhou", "Bolei", "" ], [ "Ma", "Jiaqi", "" ] ]
new_dataset
0.990747
2207.07540
Emmanuel Senft
Emmanuel Senft, David Porfirio, Katie Winkle
PD/EUP Workshop Proceedings
HTML file with clickable links to papers - All papers have been reviewed by two reviewers in a single blind fashion - Symposium website: https://sites.google.com/wisc.edu/hri22pdeupworkshop/
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
People who need robots are often not the same as people who can program them. This key observation in human-robot interaction (HRI) has led to a number of challenges when developing robotic applications, since developers must understand the exact needs of end-users. Participatory Design (PD), the process of including stakeholders such as end users early in the robot design process, has been used with noteworthy success in HRI, but typically remains limited to the early phases of development. Resulting robot behaviors are often then hardcoded by engineers or utilized in Wizard-of-Oz (WoZ) systems that rarely achieve autonomy. End-User Programming (EUP), i.e., the research of tools allowing end users with limited computer knowledge to program systems, has been widely applied to the design of robot behaviors for interaction with humans, but these tools risk being used solely as research demonstrations that exist only for the time required for them to be evaluated and published. In the PD/EUP Workshop, we aim to facilitate mutual learning between these communities and to create communication opportunities that could help the larger HRI community work towards end-user personalized and adaptable interactions. Both PD and EUP will be key requirements if we want robots to be useful for wider society. From this workshop, we expect new collaboration opportunities to emerge and we aim to formalize new methodologies that integrate PD and EUP approaches.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 15:32:55 GMT" }, { "version": "v2", "created": "Mon, 26 Sep 2022 11:43:37 GMT" } ]
2022-09-27T00:00:00
[ [ "Senft", "Emmanuel", "" ], [ "Porfirio", "David", "" ], [ "Winkle", "Katie", "" ] ]
new_dataset
0.993149
2208.10431
Mengqi Xue
Mengqi Xue, Qihan Huang, Haofei Zhang, Lechao Cheng, Jie Song, Minghui Wu, Mingli Song
ProtoPFormer: Concentrating on Prototypical Parts in Vision Transformers for Interpretable Image Recognition
Arxiv preprint; 18 pages, 12 figures, 7 tables
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prototypical part network (ProtoPNet) has drawn wide attention and boosted many follow-up studies due to its self-explanatory property for explainable artificial intelligence (XAI). However, when directly applying ProtoPNet on vision transformer (ViT) backbones, learned prototypes have a "distraction" problem: they have a relatively high probability of being activated by the background and pay less attention to the foreground. The powerful capability of modeling long-term dependency makes the transformer-based ProtoPNet hard to focus on prototypical parts, thus severely impairing its inherent interpretability. This paper proposes prototypical part transformer (ProtoPFormer) for appropriately and effectively applying the prototype-based method with ViTs for interpretable image recognition. The proposed method introduces global and local prototypes for capturing and highlighting the representative holistic and partial features of targets according to the architectural characteristics of ViTs. The global prototypes are adopted to provide the global view of objects to guide local prototypes to concentrate on the foreground while eliminating the influence of the background. Afterwards, local prototypes are explicitly supervised to concentrate on their respective prototypical visual parts, increasing the overall interpretability. Extensive experiments demonstrate that our proposed global and local prototypes can mutually correct each other and jointly make final decisions, which faithfully and transparently reason the decision-making processes associatively from the whole and local perspectives, respectively. Moreover, ProtoPFormer consistently achieves superior performance and visualization results over the state-of-the-art (SOTA) prototype-based baselines. Our code has been released at https://github.com/zju-vipa/ProtoPFormer.
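A rough PyTorch sketch of the global/local interplay described above: global prototypes score patches as foreground, and local prototype activations are computed only over the retained patches. The masking mechanism and threshold are simplifications made for illustration; the released code implements the actual architecture.

```python
# Simplified sketch: global prototypes gate which patch tokens the local
# prototypes may activate on.
import torch
import torch.nn.functional as F

def prototype_logits(tokens, global_protos, local_protos, thresh=0.5):
    """tokens: (N, D) patch embeddings; prototypes: (P, D) each."""
    t = F.normalize(tokens, dim=1)
    g = F.normalize(global_protos, dim=1)
    lp = F.normalize(local_protos, dim=1)
    # Global prototypes estimate which patches are foreground.
    fg_score = (t @ g.t()).max(dim=1).values           # (N,)
    mask = (fg_score > thresh).float().unsqueeze(1)    # suppress background
    # Local prototypes activate only on the retained foreground patches.
    local_act = ((t * mask) @ lp.t()).max(dim=0).values  # (P_local,)
    return local_act

act = prototype_logits(torch.randn(196, 64), torch.randn(5, 64),
                       torch.randn(10, 64))
```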
[ { "version": "v1", "created": "Mon, 22 Aug 2022 16:36:32 GMT" }, { "version": "v2", "created": "Mon, 26 Sep 2022 16:18:27 GMT" } ]
2022-09-27T00:00:00
[ [ "Xue", "Mengqi", "" ], [ "Huang", "Qihan", "" ], [ "Zhang", "Haofei", "" ], [ "Cheng", "Lechao", "" ], [ "Song", "Jie", "" ], [ "Wu", "Minghui", "" ], [ "Song", "Mingli", "" ] ]
new_dataset
0.986706
2208.13583
Anitha Gollamudi
Alexandra E. Michael, Anitha Gollamudi, Jay Bosamiya, Craig Disselkoen, Aidan Denlinger, Conrad Watt, Bryan Parno, Marco Patrignani, Marco Vassena, Deian Stefan
MSWasm: Soundly Enforcing Memory-Safe Execution of Unsafe Code
null
null
null
null
cs.CR cs.PL
http://creativecommons.org/licenses/by-sa/4.0/
Most programs compiled to WebAssembly (Wasm) today are written in unsafe languages like C and C++. Unfortunately, memory-unsafe C code remains unsafe when compiled to Wasm -- and attackers can exploit buffer overflows and use-after-frees in Wasm almost as easily as they can on native platforms. Memory-Safe WebAssembly (MSWasm) proposes to extend Wasm with language-level memory-safety abstractions to precisely address this problem. In this paper, we build on the original MSWasm position paper to realize this vision. We give a precise and formal semantics of MSWasm, and prove that well-typed MSWasm programs are, by construction, robustly memory safe. To this end, we develop a novel, language-independent memory-safety property based on colored memory locations and pointers. This property also lets us reason about the security guarantees of a formal C-to-MSWasm compiler -- and prove that it always produces memory-safe programs (and preserves the semantics of safe programs). We use these formal results to then guide several implementations: Two compilers of MSWasm to native code, and a C-to-MSWasm compiler (that extends Clang). Our MSWasm compilers support different enforcement mechanisms, allowing developers to make security-performance trade-offs according to their needs. Our evaluation shows that the overhead of enforcing memory safety in software ranges from 22% (enforcing spatial safety alone) to 198% (enforcing full memory safety) on the PolyBenchC suite. More importantly, MSWasm's design makes it easy to swap between enforcement mechanisms; as fast (especially hardware-based) enforcement techniques become available, MSWasm will be able to take advantage of these advances almost for free.
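The colored-locations idea can be conveyed with a small interpreter-style sketch: every allocation gets a fresh color, pointers carry the color of their allocation, and a load traps when the colors disagree, catching both out-of-bounds accesses and use-after-frees. This is an editorial interpretation of the property, not the paper's formal semantics.

```python
# Conceptual sketch of color-based memory safety.
import itertools

_fresh = itertools.count()

class Memory:
    def __init__(self):
        self.color = {}                      # address -> color (or None)

    def malloc(self, base: int, size: int):
        c = next(_fresh)                     # fresh color per allocation
        for a in range(base, base + size):
            self.color[a] = c
        return (base, c)                     # pointer carries its color

    def free(self, ptr):
        _, c = ptr
        for a in [a for a, col in self.color.items() if col == c]:
            self.color[a] = None             # uncolored: no longer valid

    def load(self, ptr, offset: int):
        base, c = ptr
        addr = base + offset
        if self.color.get(addr) != c:        # out of bounds OR use-after-free
            raise MemoryError("memory-safety trap")
        return addr                          # (value lookup elided)

m = Memory()
p = m.malloc(100, 8)
m.load(p, 3)         # ok
m.free(p)
# m.load(p, 3)       # would trap: use-after-free
```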
[ { "version": "v1", "created": "Mon, 29 Aug 2022 13:22:28 GMT" }, { "version": "v2", "created": "Mon, 26 Sep 2022 16:50:30 GMT" } ]
2022-09-27T00:00:00
[ [ "Michael", "Alexandra E.", "" ], [ "Gollamudi", "Anitha", "" ], [ "Bosamiya", "Jay", "" ], [ "Disselkoen", "Craig", "" ], [ "Denlinger", "Aidan", "" ], [ "Watt", "Conrad", "" ], [ "Parno", "Bryan", "" ], [ "Patrignani", "Marco", "" ], [ "Vassena", "Marco", "" ], [ "Stefan", "Deian", "" ] ]
new_dataset
0.999389
2209.09313
Terence Smith Dr
Terence R. Smith
Natural Wave Numbers, Natural Wave Co-numbers, and the Computation of the Primes
16 pages
null
null
null
cs.DS math.NT
http://creativecommons.org/licenses/by-nc-nd/4.0/
The paper exploits an isomorphism between the natural numbers N and a space U of periodic sequences of the roots of unity in constructing a recursive procedure for representing and computing the prime numbers. The $n$th wave number ${\bf u}_n$ is the countable sequence of the $n$th roots of unity having frequencies $k/n$ for all integer phases $k$. The space U is closed under a commutative and associative binary operation ${\bf u}_m \odot {\bf u}_n = {\bf u}_{mn}$, termed the circular product, and is isomorphic with N under their respective product operators. Functions are defined on U that partition wave numbers into two complementary sequences, of which the co-number ${\overset{\bf \ast}{\bf u}}_n$ is a function of a wave number in which zeros replace its positive roots of unity. The recursive procedure ${\overset{\bf \ast}{\bf U}}_{N+1} = {\overset{\bf \ast}{\bf U}}_{N} \odot {\overset{\bf \ast}{\bf u}}_{N+1}$ represents prime numbers explicitly in terms of preceding prime numbers, starting with $p_1=2$, and is shown never to terminate. If $p_1, \ldots, p_{N+1}$ are the first $N+1$ prime phases, then the phases in the range $p_{N+1} \leq k < p^2_{N+1}$ that are associated with the non-zero terms of ${\overset{\bf \ast}{\bf U}}_{N}$ are, together with $p_1, \ldots, p_N$, all of the prime phases less than $p^2_{N+1}$. When applied with all of the primes identified at the previous step, the recursive procedure identifies approximately $7^{2(N-1)}/(2(N-1)\ln 7)$ primes at each iteration for $N>1$. When the phases of wave numbers are represented in modular arithmetic, the prime phases are representable in terms of sums of reciprocals of the initial set of prime phases and have a relation with the zeta-function.
[ { "version": "v1", "created": "Mon, 19 Sep 2022 19:18:40 GMT" } ]
2022-09-27T00:00:00
[ [ "Smith", "Terence R.", "" ] ]
new_dataset
0.999551
2209.11554
Kun Woo Cho
Kun Woo Cho, Mohammad H. Mazaheri, Jeremy Gummeson, Omid Abari, Kyle Jamieson
mmWall: A Transflective Metamaterial Surface for mmWave Networks
18 pages, 18 figures
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mobile operators are poised to leverage millimeter wave technology as 5G evolves, but despite efforts to bolster their reliability indoors and outdoors, mmWave links remain vulnerable to blockage by walls, people, and obstacles. Further, there is significant interest in bringing outdoor mmWave coverage indoors, which for similar reasons remains challenging today. This paper presents the design, hardware implementation, and experimental evaluation of mmWall, the first electronically almost-360-degree steerable metamaterial surface that operates above 24 GHz and both refracts and reflects incoming mmWave transmissions. Our metamaterial design consists of arrays of varactor-split ring resonator unit cells, miniaturized for mmWave. Custom control circuitry drives each resonator, overcoming coupling challenges that arise at scale. Leveraging beam steering algorithms, we integrate mmWall into the link layer discovery protocols of common mmWave networks. We have fabricated a 10 cm by 20 cm mmWall prototype consisting of a 28 by 76 unit cell array, and evaluate it in indoor, outdoor-to-indoor, and multi-beam scenarios. Indoors, mmWall keeps 91% of locations outage-free under 128-QAM mmWave data rates and boosts SNR by up to 15 dB. Outdoors, mmWall reduces the probability of complete link failure by up to 40% under 0-80% path blockage and boosts SNR by up to 30 dB.
[ { "version": "v1", "created": "Fri, 23 Sep 2022 12:25:33 GMT" }, { "version": "v2", "created": "Mon, 26 Sep 2022 00:42:20 GMT" } ]
2022-09-27T00:00:00
[ [ "Cho", "Kun Woo", "" ], [ "Mazaheri", "Mohammad H.", "" ], [ "Gummeson", "Jeremy", "" ], [ "Abari", "Omid", "" ], [ "Jamieson", "Kyle", "" ] ]
new_dataset
0.999663
2209.11772
Istvan Gyongy
Istvan Gyongy, Ahmet T. Erdogan, Neale A.W. Dutton, Germán Mora Martín, Alistair Gorman, Hanning Mai, Francesco Mattioli Della Rocca, Robert K. Henderson
A direct time-of-flight image sensor with in-pixel surface detection and dynamic vision
24 pages, 16 figures. The visualisations may be viewed by clicking on the hyperlinks in the text
null
null
null
cs.CV eess.IV physics.ins-det
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D flash LIDAR is an alternative to the traditional scanning LIDAR systems, promising precise depth imaging in a compact form factor, and free of moving parts, for applications such as self-driving cars, robotics and augmented reality (AR). Typically implemented using single-photon, direct time-of-flight (dToF) receivers in image sensor format, the operation of the devices can be hindered by the large number of photon events needing to be processed and compressed in outdoor scenarios, limiting frame rates and scalability to larger arrays. We here present a 64x32 pixel (256x128 SPAD) dToF imager that overcomes these limitations by using pixels with embedded histogramming, which lock onto and track the return signal. This reduces the size of output data frames considerably, enabling maximum frame rates in the 10 kFPS range or 100 kFPS for direct depth readings. The sensor offers selective readout of pixels detecting surfaces, or those sensing motion, leading to reduced power consumption and off-chip processing requirements. We demonstrate the application of the sensor in mid-range LIDAR.
[ { "version": "v1", "created": "Fri, 23 Sep 2022 14:38:00 GMT" } ]
2022-09-27T00:00:00
[ [ "Gyongy", "Istvan", "" ], [ "Erdogan", "Ahmet T.", "" ], [ "Dutton", "Neale A. W.", "" ], [ "Martín", "Germán Mora", "" ], [ "Gorman", "Alistair", "" ], [ "Mai", "Hanning", "" ], [ "Della Rocca", "Francesco Mattioli", "" ], [ "Henderson", "Robert K.", "" ] ]
new_dataset
0.999702
2209.11867
Luis Garcia Pueyo
Lluís Garcia-Pueyo, Panayiotis Tsaparas, Anand Bhaskar, Prathyusha Senthil Kumar, Roelof van Zwol, Timos Sellis, Anthony McCosker, Paolo Papotti
Integrity 2022: Integrity in Social Networks and Media
null
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This is the proposal for the third edition of the Workshop on Integrity in Social Networks and Media, Integrity 2022, following the success of the first two workshops, held in conjunction with the 13th and 14th ACM Conferences on Web Search and Data Mining (WSDM) in 2020 and 2021. The goal of the workshop is to bring together researchers and practitioners to discuss content and interaction integrity challenges in social networks and social media platforms. The event consists of (1) a series of invited talks by renowned members of the integrity community from both academia and industry, (2) a call for papers for contributed talks and posters, and (3) a panel with the speakers.
[ { "version": "v1", "created": "Fri, 23 Sep 2022 21:29:42 GMT" } ]
2022-09-27T00:00:00
[ [ "Garcia-Pueyo", "Lluís", "" ], [ "Tsaparas", "Panayiotis", "" ], [ "Bhaskar", "Anand", "" ], [ "Kumar", "Prathyusha Senthil", "" ], [ "van Zwol", "Roelof", "" ], [ "Sellis", "Timos", "" ], [ "McCosker", "Anthony", "" ], [ "Papotti", "Paolo", "" ] ]
new_dataset
0.98922
2209.11871
Ann Clifton
Edgar Tanaka, Ann Clifton, Joana Correia, Sharmistha Jat, Rosie Jones, Jussi Karlgren, Winstead Zhu
Cem Mil Podcasts: A Spoken Portuguese Document Corpus
6 pages, 1 figure
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This document describes the Portuguese-language podcast dataset released by Spotify for academic research purposes. We give an overview of how the data was sampled, present some basic statistics over the collection, and briefly describe the distribution over Brazilian and Portuguese dialects.
[ { "version": "v1", "created": "Fri, 23 Sep 2022 21:41:10 GMT" } ]
2022-09-27T00:00:00
[ [ "Tanaka", "Edgar", "" ], [ "Clifton", "Ann", "" ], [ "Correia", "Joana", "" ], [ "Jat", "Sharmistha", "" ], [ "Jones", "Rosie", "" ], [ "Karlgren", "Jussi", "" ], [ "Zhu", "Winstead", "" ] ]
new_dataset
0.999784
2209.11946
Niranjan Hasabnis
Niranjan Hasabnis
Are Machine Programming Systems using Right Source-Code Measures to Select Code Repositories?
6 pages, 1 figure, to be presented at the MaLTeSQuE 2022 workshop, to be held with the ACM Joint European Software Engineering Conference and Symposium on the Foundations of Software Engineering (ESEC-FSE) 2022, November 18, Singapore
null
null
null
cs.SE cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine programming (MP) is an emerging field at the intersection of deterministic and probabilistic computing, and it aims to assist software and hardware engineers, among other applications. Along with powerful compute resources, MP systems often rely on vast amounts of open-source code to learn interesting properties about code and programming and to solve problems in the areas of debugging, code recommendation, auto-completion, etc. Unfortunately, several of the existing MP systems either do not consider the quality of code repositories or select them using quality measures that differ from those typically used in the software engineering community. As such, the impact of code repository quality on the performance of these systems needs to be studied. In this preliminary paper, we evaluate the impact of repositories of different quality on the performance of a candidate MP system. Towards that objective, we develop a framework, named GitRank, to rank open-source repositories on quality, maintainability, and popularity by leveraging existing research on this topic. We then apply GitRank to evaluate the correlation between the quality measures used by the candidate MP system and the quality measures used by our framework. Our preliminary results reveal some correlation between the quality measures used in GitRank and ControlFlag's performance, suggesting that some of the measures used in GitRank are applicable to ControlFlag. But they also raise questions about the right quality measures for the code repositories used in MP systems. We believe that our findings also generate interesting insights into code quality measures that affect the performance of MP systems.
[ { "version": "v1", "created": "Sat, 24 Sep 2022 07:34:18 GMT" } ]
2022-09-27T00:00:00
[ [ "Hasabnis", "Niranjan", "" ] ]
new_dataset
0.971956
2209.11971
Xunzhao Yin
Xunzhao Yin, Qingrong Huang, Franz Müller, Shan Deng, Alptekin Vardar, Sourav De, Zhouhang Jiang, Mohsen Imani, Cheng Zhuo, Thomas Kämpfe, Kai Ni
A Homogeneous Processing Fabric for Matrix-Vector Multiplication and Associative Search Using Ferroelectric Time-Domain Compute-in-Memory
8 pages, 8 figures
null
null
null
cs.ET eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we propose a ferroelectric FET (FeFET) time-domain compute-in-memory (TD-CiM) array as a homogeneous processing fabric for binary multiplication-accumulation (MAC) and content addressable memory (CAM). We demonstrate that: i) the XOR(XNOR)/AND logic function can be realized using a single cell composed of 2 FeFETs connected in series; ii) a two-phase computation can be performed in an inverter chain, with each stage featuring the XOR/AND cell to control the associated capacitor loading, such that the results of the binary MAC and CAM computations are reflected in the chain's output signal delay, illustrating full digital compatibility; iii) the proposed 2FeFET cell and inverter delay chains, as well as their robustness against FeFET variation, are comprehensively validated both theoretically and experimentally; iv) the homogeneous processing fabric, applied in hyperdimensional computing, enables dynamic and fine-grained resource allocation to accommodate different tasks with varying demands on the binary MAC and CAM resources.
[ { "version": "v1", "created": "Sat, 24 Sep 2022 09:40:41 GMT" } ]
2022-09-27T00:00:00
[ [ "Yin", "Xunzhao", "" ], [ "Huang", "Qingrong", "" ], [ "Müller", "Franz", "" ], [ "Deng", "Shan", "" ], [ "Vardar", "Alptekin", "" ], [ "De", "Sourav", "" ], [ "Jiang", "Zhouhang", "" ], [ "Imani", "Mohsen", "" ], [ "Zhuo", "Cheng", "" ], [ "Kämpfe", "Thomas", "" ], [ "Ni", "Kai", "" ] ]
new_dataset
0.979043
2209.12023
Édouard Bonnet
Édouard Bonnet, Ugo Giocanti, Patrice Ossona de Mendez, Stéphan Thomassé
Twin-width V: linear minors, modular counting, and matrix multiplication
45 pages, 9 figures
null
null
null
cs.DS cs.DM cs.LO math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We continue developing the theory around the twin-width of totally ordered binary structures, initiated in the previous paper of the series. We first introduce the notion of parity and linear minors of a matrix, which consists of iteratively replacing consecutive rows or consecutive columns with a linear combination of them. We show that a matrix class has bounded twin-width if and only if its linear-minor closure does not contain all matrices. We observe that the fixed-parameter tractable algorithm for first-order model checking on structures given with an $O(1)$-sequence (certificate of bounded twin-width) and the fact that first-order transductions of bounded twin-width classes have bounded twin-width, both established in Twin-width I, extend to first-order logic with modular counting quantifiers. We make explicit a win-win argument obtained as a by-product of Twin-width IV, and somewhat similar to bidimensionality, that we call rank-bidimensionality. Armed with the above-mentioned extension to modular counting, we show that the twin-width of the product of two conformal matrices $A, B$ over a finite field is bounded by a function of the twin-width of $A$, of $B$, and of the size of the field. Furthermore, if $A$ and $B$ are $n \times n$ matrices of twin-width $d$ over $\mathbb F_q$, we show that $AB$ can be computed in time $O_{d,q}(n^2 \log n)$. We finally present an ad hoc algorithm to efficiently multiply two matrices of bounded twin-width, with a single-exponential dependence on the twin-width bound: if two $n \times n$ matrices $A, B$ over $\mathbb F_2$ are given in a compact tree-like form, called a twin-decomposition (of width $d$), then a twin-decomposition of $AB$ with width $2^{d+o(d)}$ can be computed in time $4^{d+o(d)}n$ (resp. $4^{d+o(d)}n^{1+\varepsilon}$), and its entries queried in doubly-logarithmic (resp. constant) time.
[ { "version": "v1", "created": "Sat, 24 Sep 2022 14:41:54 GMT" } ]
2022-09-27T00:00:00
[ [ "Bonnet", "Édouard", "" ], [ "Giocanti", "Ugo", "" ], [ "de Mendez", "Patrice Ossona", "" ], [ "Thomassé", "Stéphan", "" ] ]
new_dataset
0.992791