Dataset schema (column: type, observed length/value range):
- id: string, 9-10 chars
- submitter: string, 2-52 chars
- authors: string, 4-6.51k chars
- title: string, 4-246 chars
- comments: string, 1-523 chars
- journal-ref: string, 4-345 chars
- doi: string, 11-120 chars
- report-no: string, 2-243 chars
- categories: string, 5-98 chars
- license: string, 9 classes
- abstract: string, 33-3.33k chars
- versions: list
- update_date: timestamp[s]
- authors_parsed: list
- prediction: string, 1 class
- probability: float64, 0.95-1

Records follow, one field per line in the order above; "null" marks an empty field.
2209.08430
Shihao Shen
Shihao Shen and Yilin Cai and Wenshan Wang and Sebastian Scherer
DytanVO: Joint Refinement of Visual Odometry and Motion Segmentation in Dynamic Environments
Accepted to ICRA 2023
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Learning-based visual odometry (VO) algorithms achieve remarkable performance on common static scenes, benefiting from high-capacity models and massive annotated data, but tend to fail in dynamic, populated environments. Semantic segmentation is widely used to discard dynamic associations before estimating camera motion, but it comes at the cost of discarding static features and is hard to scale up to unseen categories. In this paper, we leverage the mutual dependence between camera ego-motion and motion segmentation and show that both can be jointly refined in a single learning-based framework. In particular, we present DytanVO, the first supervised learning-based VO method that deals with dynamic environments. It takes two consecutive monocular frames in real time and predicts camera ego-motion in an iterative fashion. Our method achieves an average improvement of 27.7% in ATE over state-of-the-art VO solutions in real-world dynamic environments, and even performs competitively among dynamic visual SLAM systems which optimize the trajectory on the backend. Experiments on plentiful unseen environments also demonstrate our method's generalizability.
[ { "version": "v1", "created": "Sat, 17 Sep 2022 23:56:03 GMT" }, { "version": "v2", "created": "Sat, 24 Sep 2022 21:04:07 GMT" }, { "version": "v3", "created": "Tue, 17 Jan 2023 09:33:21 GMT" }, { "version": "v4", "created": "Sat, 29 Apr 2023 04:37:57 GMT" } ]
2023-05-02T00:00:00
[ [ "Shen", "Shihao", "" ], [ "Cai", "Yilin", "" ], [ "Wang", "Wenshan", "" ], [ "Scherer", "Sebastian", "" ] ]
new_dataset
0.980791
2209.08752
Yiye Chen
Yiye Chen, Yunzhi Lin, Ruinian Xu, Patricio Vela
Keypoint-GraspNet: Keypoint-based 6-DoF Grasp Generation from the Monocular RGB-D input
Accepted by ICRA2023. Final version. Code is available at: https://github.com/ivalab/KGN
null
null
null
cs.RO cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Great success has been achieved in 6-DoF grasp learning from point cloud input, yet the computational cost due to point set orderlessness remains a concern. Alternatively, we explore grasp generation from RGB-D input in this paper. The proposed solution, Keypoint-GraspNet, detects the projection of gripper keypoints in image space and then recovers the SE(3) poses with a PnP algorithm. A synthetic dataset based on primitive shapes and grasp families is constructed to examine our idea. Metric-based evaluation reveals that our method outperforms the baselines in terms of grasp proposal accuracy, diversity, and time cost. Finally, robot experiments show a high success rate, demonstrating the potential of the idea in real-world applications.
[ { "version": "v1", "created": "Mon, 19 Sep 2022 04:23:20 GMT" }, { "version": "v2", "created": "Tue, 17 Jan 2023 18:51:50 GMT" }, { "version": "v3", "created": "Thu, 16 Mar 2023 18:10:43 GMT" }, { "version": "v4", "created": "Mon, 1 May 2023 17:53:42 GMT" } ]
2023-05-02T00:00:00
[ [ "Chen", "Yiye", "" ], [ "Lin", "Yunzhi", "" ], [ "Xu", "Ruinian", "" ], [ "Vela", "Patricio", "" ] ]
new_dataset
0.976065
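The record above describes a keypoint-then-PnP pipeline: detect 2D projections of gripper keypoints, then recover the SE(3) grasp pose. A minimal sketch of the pose-recovery step using OpenCV's solvePnP follows; the gripper keypoint layout, intrinsics, and detections are illustrative placeholders, not Keypoint-GraspNet's actual values.

```python
import cv2
import numpy as np

# Hypothetical 3D gripper keypoints in the gripper frame (meters);
# the actual keypoint layout used by Keypoint-GraspNet may differ.
object_points = np.array([
    [0.0, 0.0, 0.0],     # gripper base
    [0.0, 0.0, 0.06],    # approach point
    [-0.04, 0.0, 0.06],  # left fingertip
    [0.04, 0.0, 0.06],   # right fingertip
], dtype=np.float64)

# 2D detections of those keypoints in the image (pixels), e.g. from a heatmap head.
image_points = np.array([
    [320.0, 240.0], [318.0, 210.0], [300.0, 208.0], [336.0, 209.0],
], dtype=np.float64)

# Pinhole intrinsics (fx, fy, cx, cy are assumptions for illustration).
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, distCoeffs=None)
if ok:
    R, _ = cv2.Rodrigues(rvec)  # rotation matrix of the SE(3) grasp pose
    print("R =", R, "\nt =", tvec.ravel())
```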
2209.12726
Ayan Biswas
Arijit Saha, Ayan Biswas, Supriya Dhabal, Palaniandavar Venkateswaran
An Improved PMOS-Based Low Dropout Regulator Design for Large Loads
null
null
null
null
cs.AR
http://creativecommons.org/licenses/by/4.0/
A stable low dropout (LDO) voltage regulator topology is presented in this paper. LDOs are linear voltage regulators that do not produce ripples in the DC output voltage. Even when the supply input voltage is in close proximity to the output, this regulator maintains the desired output voltage. Based on a detailed comparison between NMOS and PMOS-based LDOs, we opted for a PMOS design because, unlike NMOS, it does not require an additional charge pump. A demonstration of how Miller capacitance enhances overall design stability is also presented. Multiple pass elements are arranged in parallel in order to increase the current-carrying capacity of the pass network.
[ { "version": "v1", "created": "Mon, 26 Sep 2022 14:29:54 GMT" }, { "version": "v2", "created": "Tue, 24 Jan 2023 10:23:27 GMT" }, { "version": "v3", "created": "Fri, 27 Jan 2023 10:31:38 GMT" }, { "version": "v4", "created": "Tue, 31 Jan 2023 16:25:49 GMT" }, { "version": "v5", "created": "Mon, 1 May 2023 14:31:08 GMT" } ]
2023-05-02T00:00:00
[ [ "Saha", "Arijit", "" ], [ "Biswas", "Ayan", "" ], [ "Dhabal", "Supriya", "" ], [ "Venkateswaran", "Palaniandavar", "" ] ]
new_dataset
0.998677
2210.12527
Victor Adewopo
Victor Adewopo, Nelly Elsayed, Kelly Anderson
Baby Physical Safety Monitoring in Smart Home Using Action Recognition System
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Humans can intuitively infer the actions that took place between two observed states. This is because the brain operates on a bidirectional communication model, which improves the accuracy of recognition and prediction by drawing on features connected to previous experiences. During the past decade, deep learning models for action recognition have improved significantly. However, deep neural networks struggle with specific Action Recognition (AR) tasks when only a smaller dataset is available. As with most action recognition tasks, the ambiguity of accurately describing activities in spatial-temporal data is a drawback that can be overcome by curating suitable datasets, including careful annotations and preprocessing of video data for analyzing various recognition tasks. In this study, we present a novel lightweight framework combining transfer learning techniques with a Conv2D LSTM layer to extract features from the pre-trained I3D model on the Kinetics dataset for a new AR task (Smart Baby Care) that requires a smaller dataset and less computational resources. Furthermore, we developed a benchmark dataset and an automated model that uses LSTM convolution with I3D (ConvLSTM-I3D) for recognizing and predicting baby activities in a smart baby room. Finally, we implemented video augmentation to improve model performance on the smart baby care task. Compared to other benchmark models, our experimental framework achieved better performance with less computational resources.
[ { "version": "v1", "created": "Sat, 22 Oct 2022 19:00:14 GMT" }, { "version": "v2", "created": "Sun, 30 Apr 2023 01:17:01 GMT" } ]
2023-05-02T00:00:00
[ [ "Adewopo", "Victor", "" ], [ "Elsayed", "Nelly", "" ], [ "Anderson", "Kelly", "" ] ]
new_dataset
0.988332
2211.17256
Yael Vinker
Yael Vinker, Yuval Alaluf, Daniel Cohen-Or, Ariel Shamir
CLIPascene: Scene Sketching with Different Types and Levels of Abstraction
Project page available at https://clipascene.github.io/CLIPascene/
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we present a method for converting a given scene image into a sketch using different types and multiple levels of abstraction. We distinguish between two types of abstraction. The first considers the fidelity of the sketch, varying its representation from a more precise portrayal of the input to a looser depiction. The second is defined by the visual simplicity of the sketch, moving from a detailed depiction to a sparse sketch. Using an explicit disentanglement into two abstraction axes -- and multiple levels for each one -- provides users additional control over selecting the desired sketch based on their personal goals and preferences. To form a sketch at a given level of fidelity and simplification, we train two MLP networks. The first network learns the desired placement of strokes, while the second network learns to gradually remove strokes from the sketch without harming its recognizability and semantics. Our approach is able to generate sketches of complex scenes including those with complex backgrounds (e.g., natural and urban settings) and subjects (e.g., animals and people) while depicting gradual abstractions of the input scene in terms of fidelity and simplicity.
[ { "version": "v1", "created": "Wed, 30 Nov 2022 18:54:32 GMT" }, { "version": "v2", "created": "Mon, 1 May 2023 15:33:55 GMT" } ]
2023-05-02T00:00:00
[ [ "Vinker", "Yael", "" ], [ "Alaluf", "Yuval", "" ], [ "Cohen-Or", "Daniel", "" ], [ "Shamir", "Ariel", "" ] ]
new_dataset
0.999241
2302.03840
Tang Jiankai
Jiankai Tang, Kequan Chen, Yuntao Wang, Yuanchun Shi, Shwetak Patel, Daniel McDuff, Xin Liu
MMPD: Multi-Domain Mobile Video Physiology Dataset
GitHub : https://github.com/McJackTang/MMPD_rPPG_dataset
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Remote photoplethysmography (rPPG) is an attractive method for noninvasive, convenient and concomitant measurement of physiological vital signals. Public benchmark datasets have served a valuable role in the development of this technology and improvements in accuracy over recent years. However, there remain gaps in the public datasets. First, despite the ubiquity of cameras on mobile devices, there are few datasets recorded specifically with mobile phone cameras. Second, most datasets are relatively small and therefore limited in diversity, in appearance (e.g., skin tone), behavior (e.g., motion), and environment (e.g., lighting conditions). In an effort to help the field advance, we present the Multi-domain Mobile Video Physiology Dataset (MMPD), comprising 11 hours of recordings from mobile phones of 33 subjects. The dataset is designed to capture videos with greater representation across skin tone, body motion, and lighting conditions. MMPD is comprehensive with eight descriptive labels and can be used in conjunction with the rPPG-toolbox. The reliability of the dataset is verified by mainstream unsupervised methods and neural methods. The GitHub repository of our dataset: https://github.com/THU-CS-PI/MMPD_rPPG_dataset.
[ { "version": "v1", "created": "Wed, 8 Feb 2023 02:20:01 GMT" }, { "version": "v2", "created": "Mon, 1 May 2023 01:43:36 GMT" } ]
2023-05-02T00:00:00
[ [ "Tang", "Jiankai", "" ], [ "Chen", "Kequan", "" ], [ "Wang", "Yuntao", "" ], [ "Shi", "Yuanchun", "" ], [ "Patel", "Shwetak", "" ], [ "McDuff", "Daniel", "" ], [ "Liu", "Xin", "" ] ]
new_dataset
0.999829
2302.08217
Thomas P\"ahtz
Zhiguo He, Yang Yang, Pengcheng Jiao, Haipeng Wang, Guanzheng Lin, Thomas P\"ahtz
Copebot: Underwater soft robot with copepod-like locomotion
null
Soft Robotics 10 (2), 314-325 (2023)
10.1089/soro.2021.0158
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
It has been a great challenge to develop robots that are able to perform complex movement patterns with high speed and, simultaneously, high accuracy. Copepods are animals found in freshwater and saltwater habitats that can have extremely fast escape responses when a predator is sensed, performing explosive curved jumps. Here, we present the design and build prototypes of a combustion-driven underwater soft robot, the "copebot", that, like copepods, is able to accurately reach nearby predefined locations in space within a single curved jump. Because of an improved thrust force transmission unit, causing a large initial acceleration peak (850 body lengths/s²), the copebot is 8 times faster than previous combustion-driven underwater soft robots, whilst able to perform a complete 360° rotation during the jump. Thrusts generated by the copebot are tested to quantitatively determine the actuation performance, and parametric studies are conducted to investigate the sensitivity of the copebot's kinematic performance to the input parameters. We demonstrate the utility of our design by building a prototype that rapidly jumps out of the water, accurately lands on its feet on a small platform, wirelessly transmits data, and jumps back into the water. Our copebot design opens the way toward high-performance biomimetic robots for multifunctional applications.
[ { "version": "v1", "created": "Thu, 16 Feb 2023 11:02:10 GMT" }, { "version": "v2", "created": "Sat, 29 Apr 2023 11:33:46 GMT" } ]
2023-05-02T00:00:00
[ [ "He", "Zhiguo", "" ], [ "Yang", "Yang", "" ], [ "Jiao", "Pengcheng", "" ], [ "Wang", "Haipeng", "" ], [ "Lin", "Guanzheng", "" ], [ "Pähtz", "Thomas", "" ] ]
new_dataset
0.999189
2302.08956
Idris Abdulmumin
Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Abinew Ali Ayele, Nedjma Ousidhoum, David Ifeoluwa Adelani, Seid Muhie Yimam, Ibrahim Sa'id Ahmad, Meriem Beloucif, Saif M. Mohammad, Sebastian Ruder, Oumaima Hourrane, Pavel Brazdil, Felermino Dário Mário António Ali, Davis David, Salomey Osei, Bello Shehu Bello, Falalu Ibrahim, Tajuddeen Gwadabe, Samuel Rutunda, Tadesse Belay, Wendimu Baye Messelle, Hailu Beshada Balcha, Sisay Adugna Chala, Hagos Tesfahun Gebremichael, Bernard Opoku, Steven Arthur
AfriSenti: A Twitter Sentiment Analysis Benchmark for African Languages
16 pages, 6 Figures, 9 Tables
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Africa is home to over 2000 languages from more than six language families and has the highest linguistic diversity among all continents. This includes 75 languages with at least one million speakers each. Yet, there is little NLP research conducted on African languages. Crucial in enabling such research is the availability of high-quality annotated datasets. In this paper, we introduce AfriSenti, which consists of 14 sentiment datasets of 110,000+ tweets in 14 African languages (Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yorùbá) from four language families, annotated by native speakers. The data is used in SemEval 2023 Task 12, the first Afro-centric SemEval shared task. We describe the data collection methodology, annotation process, and related challenges when curating each of the datasets. We conduct experiments with different sentiment classification baselines and discuss their usefulness. We hope AfriSenti enables new work on under-represented languages. The dataset is available at https://github.com/afrisenti-semeval/afrisent-semeval-2023 and can also be loaded via the HuggingFace datasets library (https://huggingface.co/datasets/shmuhammad/AfriSenti).
[ { "version": "v1", "created": "Fri, 17 Feb 2023 15:40:12 GMT" }, { "version": "v2", "created": "Sun, 2 Apr 2023 14:43:02 GMT" }, { "version": "v3", "created": "Mon, 24 Apr 2023 13:57:08 GMT" }, { "version": "v4", "created": "Fri, 28 Apr 2023 19:46:51 GMT" } ]
2023-05-02T00:00:00
[ [ "Muhammad", "Shamsuddeen Hassan", "" ], [ "Abdulmumin", "Idris", "" ], [ "Ayele", "Abinew Ali", "" ], [ "Ousidhoum", "Nedjma", "" ], [ "Adelani", "David Ifeoluwa", "" ], [ "Yimam", "Seid Muhie", "" ], [ "Ahmad", "Ibrahim Sa'id", "" ], [ "Beloucif", "Meriem", "" ], [ "Mohammad", "Saif M.", "" ], [ "Ruder", "Sebastian", "" ], [ "Hourrane", "Oumaima", "" ], [ "Brazdil", "Pavel", "" ], [ "Ali", "Felermino Dário Mário António", "" ], [ "David", "Davis", "" ], [ "Osei", "Salomey", "" ], [ "Bello", "Bello Shehu", "" ], [ "Ibrahim", "Falalu", "" ], [ "Gwadabe", "Tajuddeen", "" ], [ "Rutunda", "Samuel", "" ], [ "Belay", "Tadesse", "" ], [ "Messelle", "Wendimu Baye", "" ], [ "Balcha", "Hailu Beshada", "" ], [ "Chala", "Sisay Adugna", "" ], [ "Gebremichael", "Hagos Tesfahun", "" ], [ "Opoku", "Bernard", "" ], [ "Arthur", "Steven", "" ] ]
new_dataset
0.999842
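Since the AfriSenti abstract above points to a HuggingFace mirror, a minimal loading sketch follows. The config name "hau" is an assumption about how the language subsets are keyed; check the dataset card for the actual names.

```python
from datasets import load_dataset

# "shmuhammad/AfriSenti" comes from the URL in the abstract above.
# "hau" (Hausa) is an assumed subset/config name -- consult the dataset
# card for the exact language codes before use.
ds = load_dataset("shmuhammad/AfriSenti", "hau")

print(ds)              # available splits and row counts
print(ds["train"][0])  # one labeled tweet
```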
2302.14705
Shikhar Tuli
Shikhar Tuli and Niraj K. Jha
AccelTran: A Sparsity-Aware Accelerator for Dynamic Inference with Transformers
null
null
null
null
cs.AR cs.LG
http://creativecommons.org/licenses/by/4.0/
Self-attention-based transformer models have achieved tremendous success in the domain of natural language processing. Despite their efficacy, accelerating the transformer is challenging due to its quadratic computational complexity and large activation sizes. Existing transformer accelerators attempt to prune its tokens to reduce memory access, albeit with high compute overheads. Moreover, previous works directly operate on large matrices involved in the attention operation, which limits hardware utilization. In order to address these challenges, this work proposes a novel dynamic inference scheme, DynaTran, which prunes activations at runtime with low overhead, substantially reducing the number of ineffectual operations. This improves the throughput of transformer inference. We further propose tiling the matrices in transformer operations along with diverse dataflows to improve data reuse, thus enabling higher energy efficiency. To effectively implement these methods, we propose AccelTran, a novel accelerator architecture for transformers. Extensive experiments with different models and benchmarks demonstrate that DynaTran achieves higher accuracy than the state-of-the-art top-k hardware-aware pruning strategy while attaining up to 1.2$\times$ higher sparsity. One of our proposed accelerators, AccelTran-Edge, achieves 330K$\times$ higher throughput with 93K$\times$ lower energy requirement when compared to a Raspberry Pi device. On the other hand, AccelTran-Server achieves 5.73$\times$ higher throughput and 3.69$\times$ lower energy consumption compared to the state-of-the-art transformer co-processor, Energon. The simulation source code is available at https://github.com/jha-lab/acceltran.
[ { "version": "v1", "created": "Tue, 28 Feb 2023 16:17:23 GMT" }, { "version": "v2", "created": "Mon, 1 May 2023 16:21:21 GMT" } ]
2023-05-02T00:00:00
[ [ "Tuli", "Shikhar", "" ], [ "Jha", "Niraj K.", "" ] ]
new_dataset
0.991211
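DynaTran, as described above, prunes low-magnitude activations at runtime to skip ineffectual operations. A minimal PyTorch sketch of that thresholding idea follows (the threshold and tensor shapes are illustrative; the paper realizes this in accelerator hardware):

```python
import torch

def prune_activations(x: torch.Tensor, tau: float) -> torch.Tensor:
    """Zero out activations with magnitude below tau.

    This mimics the spirit of DynaTran's runtime pruning: elements close
    to zero contribute little to downstream matmuls, so skipping them
    (here, masking; in hardware, not fetching/computing them) saves work.
    """
    mask = x.abs() >= tau
    return x * mask

x = torch.randn(2, 4, 8)  # e.g., an attention activation tile
sparse_x = prune_activations(x, tau=0.5)
sparsity = 1.0 - sparse_x.count_nonzero().item() / sparse_x.numel()
print(f"induced sparsity: {sparsity:.2%}")
```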
2303.12445
Leo Milecki
Leo Milecki, Vicky Kalogeiton, Sylvain Bodard, Dany Anglicheau, Jean-Michel Correas, Marc-Olivier Timsit, Maria Vakalopoulou
MEDIMP: 3D Medical Images with clinical Prompts from limited tabular data for renal transplantation
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Renal transplantation emerges as the most effective solution for end-stage renal disease. Arising from complex causes, a substantial risk of chronic transplant dysfunction persists and may lead to graft loss. Medical imaging plays a substantial role in renal transplant monitoring in clinical practice. However, graft supervision is multi-disciplinary, notably involving nephrology, urology, and radiology, and identifying robust biomarkers for prognosis from such high-dimensional and complex data is challenging. In this work, taking inspiration from the recent success of Large Language Models (LLMs), we propose MEDIMP -- Medical Images with clinical Prompts -- a model to learn meaningful multi-modal representations of renal transplant Dynamic Contrast-Enhanced Magnetic Resonance Imaging (DCE MRI) by incorporating structural clinicobiological data after translating them into text prompts. MEDIMP is based on contrastive learning from joint text-image paired embeddings to perform this challenging task. Moreover, we propose a framework that generates medical prompts using automatic textual data augmentations from LLMs. Our goal is to learn meaningful manifolds of renal transplant DCE MRI that are informative for the prognosis of the transplant or patient status (2, 3, and 4 years after the transplant), exploiting the limited available multi-modal data as efficiently as possible. Extensive experiments and comparisons with other renal transplant representation learning methods with limited data prove the effectiveness of MEDIMP in a relevant clinical setting, giving new directions toward medical prompts. Our code is available at https://github.com/leomlck/MEDIMP.
[ { "version": "v1", "created": "Wed, 22 Mar 2023 10:30:43 GMT" }, { "version": "v2", "created": "Sat, 29 Apr 2023 15:42:49 GMT" } ]
2023-05-02T00:00:00
[ [ "Milecki", "Leo", "" ], [ "Kalogeiton", "Vicky", "" ], [ "Bodard", "Sylvain", "" ], [ "Anglicheau", "Dany", "" ], [ "Correas", "Jean-Michel", "" ], [ "Timsit", "Marc-Olivier", "" ], [ "Vakalopoulou", "Maria", "" ] ]
new_dataset
0.999397
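MEDIMP is described above as contrastive learning over joint text-image paired embeddings. A generic CLIP-style symmetric InfoNCE objective, of the kind such models commonly build on, is sketched below; it is not MEDIMP's exact loss, and the embedding dimensions are placeholders.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired image/text embeddings."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature  # (B, B) similarity matrix
    targets = torch.arange(img.size(0), device=img.device)
    # Each image should match its own prompt, and vice versa.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

img_emb = torch.randn(8, 256)  # stand-in for a DCE-MRI encoder output
txt_emb = torch.randn(8, 256)  # stand-in for a prompt encoder output
print(clip_style_loss(img_emb, txt_emb).item())
```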
2303.14152
Nikolas Lamb
Nikolas Lamb, Cameron Palmer, Benjamin Molloy, Sean Banerjee, Natasha Kholgade Banerjee
Fantastic Breaks: A Dataset of Paired 3D Scans of Real-World Broken Objects and Their Complete Counterparts
To be published at CVPR 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated shape repair approaches currently lack access to datasets that describe real-world damaged geometry. We present Fantastic Breaks (and Where to Find Them: https://terascale-all-sensing-research-studio.github.io/FantasticBreaks), a dataset containing scanned, waterproofed, and cleaned 3D meshes for 150 broken objects, paired and geometrically aligned with complete counterparts. Fantastic Breaks contains class and material labels, proxy repair parts that join to broken meshes to generate complete meshes, and manually annotated fracture boundaries. Through a detailed analysis of fracture geometry, we reveal differences between Fantastic Breaks and synthetic fracture datasets generated using geometric and physics-based methods. We show experimental shape repair evaluation with Fantastic Breaks using multiple learning-based approaches pre-trained with synthetic datasets and re-trained with a subset of Fantastic Breaks.
[ { "version": "v1", "created": "Fri, 24 Mar 2023 17:03:40 GMT" }, { "version": "v2", "created": "Wed, 29 Mar 2023 13:13:35 GMT" }, { "version": "v3", "created": "Thu, 30 Mar 2023 20:16:26 GMT" }, { "version": "v4", "created": "Mon, 1 May 2023 12:58:51 GMT" } ]
2023-05-02T00:00:00
[ [ "Lamb", "Nikolas", "" ], [ "Palmer", "Cameron", "" ], [ "Molloy", "Benjamin", "" ], [ "Banerjee", "Sean", "" ], [ "Banerjee", "Natasha Kholgade", "" ] ]
new_dataset
0.999751
2304.06845
Idris Abdulmumin
Shamsuddeen Hassan Muhammad, Idris Abdulmumin, Seid Muhie Yimam, David Ifeoluwa Adelani, Ibrahim Sa'id Ahmad, Nedjma Ousidhoum, Abinew Ayele, Saif M. Mohammad, Meriem Beloucif, Sebastian Ruder
SemEval-2023 Task 12: Sentiment Analysis for African Languages (AfriSenti-SemEval)
19 pages, 5 figures, 6 tables
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present the first Africentric SemEval shared task, Sentiment Analysis for African Languages (AfriSenti-SemEval); the dataset is available at https://github.com/afrisenti-semeval/afrisent-semeval-2023. AfriSenti-SemEval is a sentiment classification challenge in 14 African languages: Amharic, Algerian Arabic, Hausa, Igbo, Kinyarwanda, Moroccan Arabic, Mozambican Portuguese, Nigerian Pidgin, Oromo, Swahili, Tigrinya, Twi, Xitsonga, and Yorùbá (Muhammad et al., 2023), using data labeled with 3 sentiment classes. We present three subtasks: (1) Task A: monolingual classification, which received 44 submissions; (2) Task B: multilingual classification, which received 32 submissions; and (3) Task C: zero-shot classification, which received 34 submissions. The best performance for tasks A and B was achieved by the NLNDE team with 71.31 and 75.06 weighted F1, respectively. UCAS-IIE-NLP achieved the best average score for task C with 58.15 weighted F1. We describe the various approaches adopted by the top 10 systems.
[ { "version": "v1", "created": "Thu, 13 Apr 2023 22:26:10 GMT" }, { "version": "v2", "created": "Mon, 1 May 2023 10:18:04 GMT" } ]
2023-05-02T00:00:00
[ [ "Muhammad", "Shamsuddeen Hassan", "" ], [ "Abdulmumin", "Idris", "" ], [ "Yimam", "Seid Muhie", "" ], [ "Adelani", "David Ifeoluwa", "" ], [ "Ahmad", "Ibrahim Sa'id", "" ], [ "Ousidhoum", "Nedjma", "" ], [ "Ayele", "Abinew", "" ], [ "Mohammad", "Saif M.", "" ], [ "Beloucif", "Meriem", "" ], [ "Ruder", "Sebastian", "" ] ]
new_dataset
0.999774
2304.09548
Ashutosh Modi
Ashutosh Modi and Prathamesh Kalamkar and Saurabh Karn and Aman Tiwari and Abhinav Joshi and Sai Kiran Tanikella and Shouvik Kumar Guha and Sachin Malhan and Vivek Raghavan
SemEval 2023 Task 6: LegalEval - Understanding Legal Texts
13 Pages (9 Pages + References), Accepted at SemEval 2023 at ACL 2023
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
In populous countries, pending legal cases have been growing exponentially. There is a need for developing NLP-based techniques for processing and automatically understanding legal documents. To promote research in the area of Legal NLP, we organized the shared task LegalEval - Understanding Legal Texts at SemEval 2023. The LegalEval task has three sub-tasks: Task-A (Rhetorical Roles Labeling) is about automatically structuring legal documents into semantically coherent units, Task-B (Legal Named Entity Recognition) deals with identifying relevant entities in a legal document, and Task-C (Court Judgement Prediction with Explanation) explores the possibility of automatically predicting the outcome of a legal case along with providing an explanation for the prediction. In total, 26 teams (approximately 100 participants spread across the world) submitted system papers. In each of the sub-tasks, the proposed systems outperformed the baselines; however, there is a lot of scope for improvement. This paper describes the tasks and analyzes the techniques proposed by various teams.
[ { "version": "v1", "created": "Wed, 19 Apr 2023 10:28:32 GMT" }, { "version": "v2", "created": "Mon, 24 Apr 2023 12:13:15 GMT" }, { "version": "v3", "created": "Mon, 1 May 2023 11:33:08 GMT" } ]
2023-05-02T00:00:00
[ [ "Modi", "Ashutosh", "" ], [ "Kalamkar", "Prathamesh", "" ], [ "Karn", "Saurabh", "" ], [ "Tiwari", "Aman", "" ], [ "Joshi", "Abhinav", "" ], [ "Tanikella", "Sai Kiran", "" ], [ "Guha", "Shouvik Kumar", "" ], [ "Malhan", "Sachin", "" ], [ "Raghavan", "Vivek", "" ] ]
new_dataset
0.999064
2304.09989
Abdesslem Layeb
Abdesslem Layeb
CKmeans and FCKmeans: Two deterministic initialization procedures for Kmeans algorithm using a modified crowding distance
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper presents two novel deterministic initialization procedures for K-means clustering based on a modified crowding distance. The procedures, named CKmeans and FCKmeans, use more crowded points as initial centroids. Experimental studies on multiple datasets demonstrate that the proposed approach outperforms Kmeans and Kmeans++ in terms of clustering accuracy. The effectiveness of CKmeans and FCKmeans is attributed to their ability to select better initial centroids based on the modified crowding distance. Overall, the proposed approach provides a promising alternative for improving K-means clustering.
[ { "version": "v1", "created": "Wed, 19 Apr 2023 21:46:02 GMT" }, { "version": "v2", "created": "Mon, 1 May 2023 17:13:38 GMT" } ]
2023-05-02T00:00:00
[ [ "Layeb", "Abdesslem", "" ] ]
new_dataset
0.978858
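The CKmeans abstract above selects more crowded points as deterministic initial centroids. The sketch below is one plausible reading of that idea, approximating crowding by inverse mean k-nearest-neighbor distance and enforcing separation between chosen centroids; the paper's modified crowding distance may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import NearestNeighbors

def crowding_init(X, k, n_neighbors=10):
    """Deterministically pick k initial centroids from 'crowded' points.

    Crowding is approximated by the inverse mean distance to a point's
    nearest neighbors (denser neighborhood = more crowded); chosen
    centroids are kept mutually separated so they don't all land in one
    dense region.
    """
    nn = NearestNeighbors(n_neighbors=n_neighbors + 1).fit(X)
    dists, _ = nn.kneighbors(X)        # column 0 is the point itself
    crowding = 1.0 / (dists[:, 1:].mean(axis=1) + 1e-12)

    order = np.argsort(-crowding)      # most crowded first
    min_sep = np.median(dists[:, 1:])  # heuristic separation radius
    centroids = [X[order[0]]]
    for idx in order[1:]:
        if len(centroids) == k:
            break
        if all(np.linalg.norm(X[idx] - c) > min_sep for c in centroids):
            centroids.append(X[idx])
    for idx in order:                  # fallback if separation was too strict
        if len(centroids) == k:
            break
        if not any(np.array_equal(X[idx], c) for c in centroids):
            centroids.append(X[idx])
    return np.asarray(centroids)

X = np.random.RandomState(0).randn(500, 2)
init = crowding_init(X, k=5)           # same data -> same centroids, every run
labels = KMeans(n_clusters=5, init=init, n_init=1).fit_predict(X)
```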
2304.12749
Liyi Zhou
Yu Gai, Liyi Zhou, Kaihua Qin, Dawn Song, Arthur Gervais
Blockchain Large Language Models
null
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a dynamic, real-time approach to detecting anomalous blockchain transactions. The proposed tool, BlockGPT, generates tracing representations of blockchain activity and trains from scratch a large language model to act as a real-time Intrusion Detection System. Unlike traditional methods, BlockGPT is designed to offer an unrestricted search space and does not rely on predefined rules or patterns, enabling it to detect a broader range of anomalies. We demonstrate the effectiveness of BlockGPT through its use as an anomaly detection tool for Ethereum transactions. In our experiments, it effectively identifies abnormal transactions among a dataset of 68M transactions and has a batched throughput of 2284 transactions per second on average. Our results show that BlockGPT identifies abnormal transactions by ranking 49 out of 124 attacks among the top-3 most abnormal transactions interacting with their victim contracts. This work contributes to the field of blockchain transaction analysis by introducing a custom data encoding compatible with the transformer architecture, a domain-specific tokenization technique, and a tree encoding method specifically crafted for the Ethereum Virtual Machine (EVM) trace representation.
[ { "version": "v1", "created": "Tue, 25 Apr 2023 11:56:18 GMT" }, { "version": "v2", "created": "Sat, 29 Apr 2023 16:26:40 GMT" } ]
2023-05-02T00:00:00
[ [ "Gai", "Yu", "" ], [ "Zhou", "Liyi", "" ], [ "Qin", "Kaihua", "" ], [ "Song", "Dawn", "" ], [ "Gervais", "Arthur", "" ] ]
new_dataset
0.997224
2304.14407
Dongdong Chen
Junke Wang and Dongdong Chen and Chong Luo and Xiyang Dai and Lu Yuan and Zuxuan Wu and Yu-Gang Jiang
ChatVideo: A Tracklet-centric Multimodal and Versatile Video Understanding System
work in progress
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing deep video models are limited by specific tasks, fixed input-output spaces, and poor generalization capabilities, making it difficult to deploy them in real-world scenarios. In this paper, we present our vision for multimodal and versatile video understanding and propose a prototype system, ChatVideo. Our system is built upon a tracklet-centric paradigm, which treats tracklets as the basic video unit and employs various Video Foundation Models (ViFMs) to annotate their properties, e.g., appearance and motion. All the detected tracklets are stored in a database and interact with the user through a database manager. We have conducted extensive case studies on different types of in-the-wild videos, which demonstrate the effectiveness of our method in answering various video-related problems. Our project is available at https://www.wangjunke.info/ChatVideo/
[ { "version": "v1", "created": "Thu, 27 Apr 2023 17:59:58 GMT" }, { "version": "v2", "created": "Sat, 29 Apr 2023 03:48:26 GMT" } ]
2023-05-02T00:00:00
[ [ "Wang", "Junke", "" ], [ "Chen", "Dongdong", "" ], [ "Luo", "Chong", "" ], [ "Dai", "Xiyang", "" ], [ "Yuan", "Lu", "" ], [ "Wu", "Zuxuan", "" ], [ "Jiang", "Yu-Gang", "" ] ]
new_dataset
0.999262
2304.14931
Abdurahman Maarouf
Abdurahman Maarouf, Dominik Bär, Dominique Geissler, Stefan Feuerriegel
HQP: A Human-Annotated Dataset for Detecting Online Propaganda
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online propaganda poses a severe threat to the integrity of societies. However, existing datasets for detecting online propaganda have a key limitation: they were annotated using weak labels that can be noisy and even incorrect. To address this limitation, our work makes the following contributions: (1) We present HQP: a novel dataset (N=30,000) for detecting online propaganda with high-quality labels. To the best of our knowledge, HQP is the first dataset for detecting online propaganda that was created through human annotation. (2) We show empirically that state-of-the-art language models fail in detecting online propaganda when trained with weak labels (AUC: 64.03). In contrast, state-of-the-art language models can accurately detect online propaganda when trained with our high-quality labels (AUC: 92.25), which is an improvement of ~44%. (3) To address the cost of labeling, we extend our work to few-shot learning. Specifically, we show that prompt-based learning using a small sample of high-quality labels can still achieve a reasonable performance (AUC: 80.27). Finally, we discuss implications for the NLP community to balance the cost and quality of labeling. Crucially, our work highlights the importance of high-quality labels for sensitive NLP tasks such as propaganda detection.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 15:42:55 GMT" }, { "version": "v2", "created": "Mon, 1 May 2023 08:29:51 GMT" } ]
2023-05-02T00:00:00
[ [ "Maarouf", "Abdurahman", "" ], [ "Bär", "Dominik", "" ], [ "Geissler", "Dominique", "" ], [ "Feuerriegel", "Stefan", "" ] ]
new_dataset
0.999802
2305.00039
Luigi Capogrosso
Luigi Capogrosso, Luca Geretti, Marco Cristani, Franco Fummi, Tiziano Villa
HermesBDD: A Multi-Core and Multi-Platform Binary Decision Diagram Package
26th International Symposium on Design and Diagnostics of Electronic Circuits and Systems (DDECS)
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
BDDs represent a Boolean expression in the form of a directed acyclic graph. They are widely used in several fields, particularly in model checking and hardware verification. Several BDD manipulation packages exist, each differing depending on the target application. This paper presents HermesBDD: a novel multi-core and multi-platform binary decision diagram package focused on high performance and usability. HermesBDD supports a static and dynamic memory management mechanism, the possibility to exploit lock-free hash tables, and a simple parallel implementation of the If-Then-Else procedure based on a higher-level wrapper for threads and futures. HermesBDD is completely written in C++ with no need to rely on external libraries and is developed according to software engineering principles for reliability and easy maintenance over time. We provide experimental results on the n-Queens problem, the de-facto SAT solver benchmark for BDDs, demonstrating a significant speedup of 18.73x over our non-parallel baselines and a remarkable performance boost w.r.t. other state-of-the-art BDD packages.
[ { "version": "v1", "created": "Wed, 22 Mar 2023 11:15:27 GMT" } ]
2023-05-02T00:00:00
[ [ "Capogrosso", "Luigi", "" ], [ "Geretti", "Luca", "" ], [ "Cristani", "Marco", "" ], [ "Fummi", "Franco", "" ], [ "Villa", "Tiziano", "" ] ]
new_dataset
0.999326
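HermesBDD's core parallel primitive, per the abstract above, is the classic If-Then-Else (ITE) procedure on BDDs. A compact sequential Python sketch of ITE with a unique table and computed-table memoization follows (HermesBDD itself is C++ and parallel; this only illustrates the recursion):

```python
# Terminals 0 and 1; internal nodes are hash-consed (var, low, high) triples.
ZERO, ONE = 0, 1
nodes = {ZERO: None, ONE: None}  # node id -> (var, low, high)
unique = {}                      # (var, low, high) -> node id
computed = {}                    # memo cache for ite()

def mk(var, low, high):
    """Create (or reuse) a reduced node."""
    if low == high:              # redundant test: collapse
        return low
    key = (var, low, high)
    if key not in unique:
        unique[key] = len(nodes)
        nodes[unique[key]] = key
    return unique[key]

def cofactor(n, var, value):
    """Restrict node n with var := value."""
    if n in (ZERO, ONE) or nodes[n][0] != var:
        return n
    _, low, high = nodes[n]
    return high if value else low

def ite(f, g, h):
    """If-Then-Else: the BDD for (f AND g) OR (NOT f AND h)."""
    if f == ONE:  return g
    if f == ZERO: return h
    if g == ONE and h == ZERO: return f
    key = (f, g, h)
    if key in computed:
        return computed[key]
    # Split on the smallest (top-most) variable among f, g, h.
    v = min(nodes[n][0] for n in (f, g, h) if n not in (ZERO, ONE))
    r = mk(v,
           ite(cofactor(f, v, 0), cofactor(g, v, 0), cofactor(h, v, 0)),
           ite(cofactor(f, v, 1), cofactor(g, v, 1), cofactor(h, v, 1)))
    computed[key] = r
    return r

x, y = mk(0, ZERO, ONE), mk(1, ZERO, ONE)  # single-variable BDDs
x_and_y = ite(x, y, ZERO)
x_or_y  = ite(x, ONE, y)
```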
2305.00061
Zhengzhong Liang
Zhengzhong Liang, Zeyu Zhang, Steven Bethard, Mihai Surdeanu
Explainable Verbal Reasoner Plus (EVR+): A Natural Language Reasoning Framework that Supports Diverse Compositional Reasoning
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Language models have been successfully applied to a variety of reasoning tasks in NLP, yet they still struggle with compositional generalization. In this paper we present Explainable Verbal Reasoner Plus (EVR+), a reasoning framework that enhances language models' compositional reasoning ability by (1) allowing the model to explicitly generate and execute symbolic operators, and (2) allowing the model to decompose a complex task into several simpler ones in a flexible manner. Compared with its predecessor Explainable Verbal Reasoner (EVR) and other previous approaches adopting similar ideas, our framework supports more diverse types of reasoning such as nested loops and different types of recursion. To evaluate our reasoning framework, we build a synthetic dataset with five tasks that require compositional reasoning. Results show that our reasoning framework can enhance the language model's compositional generalization performance on the five tasks, using a fine-tuned language model. We also discuss the possibility of, and the challenges in, combining our reasoning framework with a few-shot prompted language model.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 19:27:26 GMT" } ]
2023-05-02T00:00:00
[ [ "Liang", "Zhengzhong", "" ], [ "Zhang", "Zeyu", "" ], [ "Bethard", "Steven", "" ], [ "Surdeanu", "Mihai", "" ] ]
new_dataset
0.995846
2305.00076
Saminu Mohammad Aliyu
Saminu Mohammad Aliyu, Idris Abdulmumin, Shamsuddeen Hassan Muhammad, Ibrahim Said Ahmad, Saheed Abdullahi Salahudeen, Aliyu Yusuf, Falalu Ibrahim Lawan
HausaNLP at SemEval-2023 Task 10: Transfer Learning, Synthetic Data and Side-Information for Multi-Level Sexism Classification
5 pages, 3 figures
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present the findings of our participation in the SemEval-2023 Task 10: Explainable Detection of Online Sexism (EDOS) task, a shared task on offensive language (sexism) detection on an English Gab and Reddit dataset. We investigated the effects of transferring two language models: XLM-T (sentiment classification) and HateBERT (same domain -- Reddit) for multi-level classification into Sexist or not Sexist, and other subsequent sub-classifications of the sexist data. We also used synthetic classification of an unlabelled dataset and intermediary class information to maximize the performance of our models. We submitted a system in Task A, and it ranked 49th with an F1-score of 0.82. This result proved competitive, as it under-performed the best system by only 0.052% F1-score.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 20:03:46 GMT" } ]
2023-05-02T00:00:00
[ [ "Aliyu", "Saminu Mohammad", "" ], [ "Abdulmumin", "Idris", "" ], [ "Muhammad", "Shamsuddeen Hassan", "" ], [ "Ahmad", "Ibrahim Said", "" ], [ "Salahudeen", "Saheed Abdullahi", "" ], [ "Yusuf", "Aliyu", "" ], [ "Lawan", "Falalu Ibrahim", "" ] ]
new_dataset
0.998919
2305.00084
Wanwan Li
Dang Bui, Wanwan Li, Hong Huang
CarGameAR: An Integrated AR Car Game Authoring Interface for Custom-Built Car Programmed on Arduino Board
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we present CarGameAR: An Integrated AR Car Game Authoring Interface for Custom-Built Car Programmed on Arduino Board. The car consists of an Arduino board, an H-bridge, and motors. The objective of the project is to create a system that can move a car in different directions using a computer application. The system uses Unity software to create a virtual environment where the user can control the car using keyboard commands. The car's motion is achieved by sending signals from the computer to the Arduino board, which then drives the motors through the H-bridge. The project provides a cost-effective and efficient way to build a car, which can be used for educational purposes, such as teaching programming. Moreover, this project is not limited to the control of the car through keyboard commands in a virtual environment. The system can be adapted to support augmented reality (AR) technology, providing an even more immersive and engaging user experience. By integrating the car with AR, the user can control the car's motion using physical gestures and movements, adding an extra layer of interactivity to the system. This makes the car an ideal platform for game development in AR, allowing the user to create driving games that blend the physical and virtual worlds seamlessly. Additionally, the car's affordability and ease of construction make it an accessible and valuable tool for teaching programming and related principles in a fun and interactive way. Overall, this project demonstrates the versatility and potential of the car system, highlighting the various applications and possibilities it offers for both education and entertainment.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 20:36:24 GMT" } ]
2023-05-02T00:00:00
[ [ "Bui", "Dang", "" ], [ "Li", "Wanwan", "" ], [ "Huang", "Hong", "" ] ]
new_dataset
0.999549
2305.00104
Yuchen Liu
Yuchen Liu, Natasha Ong, Kaiyan Peng, Bo Xiong, Qifan Wang, Rui Hou, Madian Khabsa, Kaiyue Yang, David Liu, Donald S. Williamson, Hanchao Yu
MMViT: Multiscale Multiview Vision Transformers
null
null
null
null
cs.CV eess.AS eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Multiscale Multiview Vision Transformers (MMViT), which introduces multiscale feature maps and multiview encodings to transformer models. Our model encodes different views of the input signal and builds several channel-resolution feature stages to process the multiple views of the input at different resolutions in parallel. At each scale stage, we use a cross-attention block to fuse information across different views. This enables the MMViT model to acquire complex high-dimensional representations of the input at different resolutions. The proposed model can serve as a backbone model in multiple domains. We demonstrate the effectiveness of MMViT on audio and image classification tasks, achieving state-of-the-art results.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 21:51:41 GMT" } ]
2023-05-02T00:00:00
[ [ "Liu", "Yuchen", "" ], [ "Ong", "Natasha", "" ], [ "Peng", "Kaiyan", "" ], [ "Xiong", "Bo", "" ], [ "Wang", "Qifan", "" ], [ "Hou", "Rui", "" ], [ "Khabsa", "Madian", "" ], [ "Yang", "Kaiyue", "" ], [ "Liu", "David", "" ], [ "Williamson", "Donald S.", "" ], [ "Yu", "Hanchao", "" ] ]
new_dataset
0.999782
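The MMViT abstract above describes cross-attention blocks that fuse information across views at each scale stage. A minimal PyTorch sketch of one such cross-view fusion block follows (dimensions, residual placement, and normalization are illustrative choices, not the paper's exact design):

```python
import torch
import torch.nn as nn

class CrossViewFusion(nn.Module):
    """Fuse two views with cross-attention, in the spirit of MMViT's
    cross-attention blocks at a single scale stage."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, view_a: torch.Tensor, view_b: torch.Tensor):
        # Tokens of view A attend to tokens of view B (queries from A,
        # keys/values from B); a residual connection keeps A's content.
        fused, _ = self.attn(query=view_a, key=view_b, value=view_b)
        return self.norm(view_a + fused)

a = torch.randn(2, 196, 128)  # view 1 tokens: (batch, tokens, dim)
b = torch.randn(2, 196, 128)  # view 2 tokens at the same scale stage
out = CrossViewFusion(128)(a, b)
print(out.shape)              # torch.Size([2, 196, 128])
```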
2305.00126
Zhuyun Zhou Ms.
Zhuyun Zhou, Zongwei Wu, R\'emi Boutteau, Fan Yang, Dominique Ginhac
DSEC-MOS: Segment Any Moving Object with Moving Ego Vehicle
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Moving Object Segmentation (MOS), a crucial task in computer vision, has numerous applications such as surveillance, autonomous driving, and video analytics. Existing datasets for moving object segmentation mainly focus on RGB or Lidar videos, but lack additional event information that can enhance the understanding of dynamic scenes. To address this limitation, we propose a novel dataset, called DSEC-MOS. Our dataset includes frames captured by RGB cameras embedded on moving vehicles and incorporates event data, which provide high temporal resolution and low-latency information about changes in the scenes. To generate accurate segmentation mask annotations for moving objects, we apply the recently emerged large model SAM - Segment Anything Model - with moving object bounding boxes from DSEC-MOD serving as prompts and calibrated RGB frames, then further revise the results. Our DSEC-MOS dataset contains 16 sequences in total (13,314 images). To the best of our knowledge, DSEC-MOS is also the first moving object segmentation dataset to include an event camera in autonomous driving. Project Page: https://github.com/ZZY-Zhou/DSEC-MOS.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 23:43:10 GMT" } ]
2023-05-02T00:00:00
[ [ "Zhou", "Zhuyun", "" ], [ "Wu", "Zongwei", "" ], [ "Boutteau", "Rémi", "" ], [ "Yang", "Fan", "" ], [ "Ginhac", "Dominique", "" ] ]
new_dataset
0.999737
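The annotation pipeline above prompts SAM with moving-object bounding boxes. A minimal sketch with the segment-anything package is below; the checkpoint path, image, and box are placeholders.

```python
import numpy as np
from segment_anything import sam_model_registry, SamPredictor

# Checkpoint path and model type are placeholders; see the SAM repository
# for the released weights.
sam = sam_model_registry["vit_h"](checkpoint="sam_vit_h.pth")
predictor = SamPredictor(sam)

image = np.zeros((480, 640, 3), dtype=np.uint8)  # stand-in for a calibrated RGB frame
predictor.set_image(image)

# A moving-object bounding box (x0, y0, x1, y1), e.g. from DSEC-MOD.
box = np.array([100, 150, 220, 300])
masks, scores, _ = predictor.predict(box=box, multimask_output=False)
print(masks.shape, scores)  # (1, 480, 640) binary mask and its confidence
```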
2305.00182
David Alonso Del Barrio
David Alonso del Barrio and Daniel Gatica-Perez
Examining European Press Coverage of the Covid-19 No-Vax Movement: An NLP Framework
null
null
10.1145/3592572.3592845
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper examines how the European press dealt with the no-vax reactions against the Covid-19 vaccine and the dis- and misinformation associated with this movement. Using a curated dataset of 1786 articles from 19 European newspapers on the anti-vaccine movement over a period of 22 months in 2020-2021, we used Natural Language Processing techniques, including topic modeling, sentiment analysis, semantic relationships with word embeddings, political analysis, named entity recognition, and semantic networks, to understand the specific role of the European traditional press in the disinformation ecosystem. The results of this multi-angle analysis demonstrate that the well-established European press actively opposed a variety of hoaxes mainly spread on social media, and was critical of the anti-vax trend, regardless of the political orientation of the newspaper. This confirms the relevance of studying the role of the high-quality press in the disinformation ecosystem.
[ { "version": "v1", "created": "Sat, 29 Apr 2023 06:26:03 GMT" } ]
2023-05-02T00:00:00
[ [ "del Barrio", "David Alonso", "" ], [ "Gatica-Perez", "Daniel", "" ] ]
new_dataset
0.992578
2305.00201
Yuzhong Chen
Zhenxiang Xiao, Yuzhong Chen, Lu Zhang, Junjie Yao, Zihao Wu, Xiaowei Yu, Yi Pan, Lin Zhao, Chong Ma, Xinyu Liu, Wei Liu, Xiang Li, Yixuan Yuan, Dinggang Shen, Dajiang Zhu, Tianming Liu, Xi Jiang
Instruction-ViT: Multi-Modal Prompts for Instruction Learning in ViT
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prompts have been proven to play a crucial role in large language models, and in recent years vision models have also been using prompts to improve scalability for multiple downstream tasks. In this paper, we focus on adapting prompt design based on instruction tuning into a vision transformer model for image classification, which we call Instruction-ViT. The key idea is to implement multi-modal prompts (text or image prompts) related to category information to guide the fine-tuning of the model. Based on experiments on several image captioning tasks, performance and domain adaptability were improved. Our work provides an innovative strategy to fuse multi-modal prompts with better performance and faster adaptability for visual classification models.
[ { "version": "v1", "created": "Sat, 29 Apr 2023 08:59:12 GMT" } ]
2023-05-02T00:00:00
[ [ "Xiao", "Zhenxiang", "" ], [ "Chen", "Yuzhong", "" ], [ "Zhang", "Lu", "" ], [ "Yao", "Junjie", "" ], [ "Wu", "Zihao", "" ], [ "Yu", "Xiaowei", "" ], [ "Pan", "Yi", "" ], [ "Zhao", "Lin", "" ], [ "Ma", "Chong", "" ], [ "Liu", "Xinyu", "" ], [ "Liu", "Wei", "" ], [ "Li", "Xiang", "" ], [ "Yuan", "Yixuan", "" ], [ "Shen", "Dinggang", "" ], [ "Zhu", "Dajiang", "" ], [ "Liu", "Tianming", "" ], [ "Jiang", "Xi", "" ] ]
new_dataset
0.995303
2305.00204
Maciej Wielgosz
Maciej Wielgosz and Antonio M. López and Muhammad Naveed Riaz
CARLA-BSP: a simulated dataset with pedestrians
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present a sample dataset featuring pedestrians generated using the ARCANE framework, a new framework for generating datasets in CARLA (0.9.13). We provide use cases for pedestrian detection, autoencoding, pose estimation, and pose lifting. We also showcase baseline results. For more information, visit https://project-arcane.eu/.
[ { "version": "v1", "created": "Sat, 29 Apr 2023 09:10:32 GMT" } ]
2023-05-02T00:00:00
[ [ "Wielgosz", "Maciej", "" ], [ "López", "Antonio M.", "" ], [ "Riaz", "Muhammad Naveed", "" ] ]
new_dataset
0.999869
2305.00210
Shahroz Khan
Shahroz Khan, Kosa Goucher-Lambert, Konstantinos Kostas, Panagiotis Kaklis
ShipHullGAN: A generic parametric modeller for ship hull design using deep convolutional generative model
null
Volume 411, 1 June 2023, 116051
10.1016/j.cma.2023.116051
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this work, we introduce ShipHullGAN, a generic parametric modeller built using deep convolutional generative adversarial networks (GANs) for the versatile representation and generation of ship hulls. At a high level, the new model intends to address the current conservatism in the parametric ship design paradigm, where parametric modellers can only handle a particular ship type. We trained ShipHullGAN on a large dataset of 52,591 physically validated designs from a wide range of existing ship types, including container ships, tankers, bulk carriers, tugboats, and crew supply vessels. We developed a new shape extraction and representation strategy to convert all training designs into a common geometric representation of the same resolution, as typically GANs can only accept vectors of fixed dimension as input. A space-filling layer is placed right after the generator component to ensure that the trained generator can cover all design classes. During training, designs are provided in the form of a shape-signature tensor (SST) which harnesses the compact geometric representation using geometric moments that further enable the inexpensive incorporation of physics-informed elements in ship design. We have shown through extensive comparative studies and optimisation cases that ShipHullGAN can generate designs with augmented features resulting in versatile design spaces that produce traditional and novel designs with geometrically valid and practically feasible shapes.
[ { "version": "v1", "created": "Sat, 29 Apr 2023 09:31:20 GMT" } ]
2023-05-02T00:00:00
[ [ "Khan", "Shahroz", "" ], [ "Goucher-Lambert", "Kosa", "" ], [ "Kostas", "Konstantinos", "" ], [ "Kaklis", "Panagiotis", "" ] ]
new_dataset
0.999084
2305.00278
Chaoning Zhang
Dongsheng Han, Chaoning Zhang, Yu Qiao, Maryam Qamar, Yuna Jung, SeungKyu Lee, Sung-Ho Bae, Choong Seon Hong
Segment Anything Model (SAM) Meets Glass: Mirror and Transparent Objects Cannot Be Easily Detected
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Meta AI Research has recently released SAM (Segment Anything Model), which is trained on a large segmentation dataset of over 1 billion masks. As a foundation model in the field of computer vision, SAM has gained attention for its impressive performance in generic object segmentation. Despite its strong capability in a wide range of zero-shot transfer tasks, it remains unknown whether SAM can detect things in challenging setups like transparent objects. In this work, we perform an empirical evaluation of two glass-related challenging scenarios: mirrors and transparent objects. We found that SAM often fails to detect glass in both scenarios, which raises concerns about deploying SAM in safety-critical situations involving various forms of glass.
[ { "version": "v1", "created": "Sat, 29 Apr 2023 15:27:57 GMT" } ]
2023-05-02T00:00:00
[ [ "Han", "Dongsheng", "" ], [ "Zhang", "Chaoning", "" ], [ "Qiao", "Yu", "" ], [ "Qamar", "Maryam", "" ], [ "Jung", "Yuna", "" ], [ "Lee", "SeungKyu", "" ], [ "Bae", "Sung-Ho", "" ], [ "Hong", "Choong Seon", "" ] ]
new_dataset
0.999123
2305.00314
Walter Zimmer
Walter Zimmer, Joseph Birkner, Marcel Brucker, Huu Tung Nguyen, Stefan Petrovski, Bohan Wang, Alois C. Knoll
InfraDet3D: Multi-Modal 3D Object Detection based on Roadside Infrastructure Camera and LiDAR Sensors
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Current multi-modal object detection approaches focus on the vehicle domain and are limited in perception range and processing capabilities. Roadside sensor units (RSUs) introduce a new domain for perception systems and leverage altitude to observe traffic. Cameras and LiDARs mounted on gantry bridges increase the perception range and produce a full digital twin of the traffic. In this work, we introduce InfraDet3D, a multi-modal 3D object detector for roadside infrastructure sensors. We fuse two LiDARs using early fusion and further incorporate detections from monocular cameras to increase robustness and to detect small objects. Our monocular 3D detection module uses HD maps to ground object yaw hypotheses, improving the final perception results. The perception framework is deployed on a real-world intersection that is part of the A9 Test Stretch in Munich, Germany. We perform several ablation studies and experiments and show that fusing two LiDARs with two cameras leads to an improvement of +1.90 mAP compared to a camera-only solution. We evaluate our results on the A9 infrastructure dataset and achieve 68.48 mAP on the test set. The dataset and code will be available at https://a9-dataset.com to allow the research community to further improve the perception results and make autonomous driving safer.
[ { "version": "v1", "created": "Sat, 29 Apr 2023 17:59:55 GMT" } ]
2023-05-02T00:00:00
[ [ "Zimmer", "Walter", "" ], [ "Birkner", "Joseph", "" ], [ "Brucker", "Marcel", "" ], [ "Nguyen", "Huu Tung", "" ], [ "Petrovski", "Stefan", "" ], [ "Wang", "Bohan", "" ], [ "Knoll", "Alois C.", "" ] ]
new_dataset
0.999781
2305.00347
Pierre Ohlmann
Pierre Ohlmann
Positionality of mean-payoff games on infinite graphs
4 pages, 2 figures
null
null
null
cs.LO cs.GT
http://creativecommons.org/licenses/by/4.0/
This short note establishes positionality of mean-payoff games over infinite game graphs by constructing a well-founded monotone universal graph.
[ { "version": "v1", "created": "Sat, 29 Apr 2023 21:43:31 GMT" } ]
2023-05-02T00:00:00
[ [ "Ohlmann", "Pierre", "" ] ]
new_dataset
0.950867
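For context, the mean-payoff value the note above refers to is standard; with edge weights w, a play e_1 e_2 ... is evaluated as follows (one common convention uses limsup instead of liminf):

```latex
\mathrm{MP}(e_1 e_2 e_3 \ldots) \;=\; \liminf_{n \to \infty} \frac{1}{n} \sum_{i=1}^{n} w(e_i)
```

Positionality means the player maximizing this value always has an optimal strategy that depends only on the current vertex, which is what the universal-graph construction establishes.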
2305.00366
Yuze Lou
Yuze Lou, Bailey Kuehl, Erin Bransom, Sergey Feldman, Aakanksha Naik, Doug Downey
S2abEL: A Dataset for Entity Linking from Scientific Tables
null
null
null
null
cs.CL cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Entity linking (EL) is the task of linking a textual mention to its corresponding entry in a knowledge base, and is critical for many knowledge-intensive NLP applications. When applied to tables in scientific papers, EL is a step toward large-scale scientific knowledge bases that could enable advanced scientific question answering and analytics. We present the first dataset for EL in scientific tables. EL for scientific tables is especially challenging because scientific knowledge bases can be very incomplete, and disambiguating table mentions typically requires understanding the paper's text in addition to the table. Our dataset, S2abEL, focuses on EL in machine learning results tables and includes hand-labeled cell types, attributed sources, and entity links from the PaperswithCode taxonomy for 8,429 cells from 732 tables. We introduce a neural baseline method designed for EL on scientific tables containing many out-of-knowledge-base mentions, and show that it significantly outperforms a state-of-the-art generic table EL method. The best baselines fall below human performance, and our analysis highlights avenues for improvement.
[ { "version": "v1", "created": "Sun, 30 Apr 2023 02:07:22 GMT" } ]
2023-05-02T00:00:00
[ [ "Lou", "Yuze", "" ], [ "Kuehl", "Bailey", "" ], [ "Bransom", "Erin", "" ], [ "Feldman", "Sergey", "" ], [ "Naik", "Aakanksha", "" ], [ "Downey", "Doug", "" ] ]
new_dataset
0.999295
2305.00367
Cong Nguyen
Cong T. Nguyen, Dinh Thai Hoang, Diep N. Nguyen, Yong Xiao, Dusit Niyato, Eryk Dutkiewicz
MetaShard: A Novel Sharding Blockchain Platform for Metaverse Applications
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Due to its security, transparency, and flexibility in verifying virtual assets, blockchain has been identified as one of the key technologies for Metaverse. Unfortunately, blockchain-based Metaverse faces serious challenges such as massive resource demands, scalability, and security concerns. To address these issues, this paper proposes a novel sharding-based blockchain framework, namely MetaShard, for Metaverse applications. Particularly, we first develop an effective consensus mechanism, namely Proof-of-Engagement, that can incentivize MUs' data and computing resource contribution. Moreover, to improve the scalability of MetaShard, we propose an innovative sharding management scheme to maximize the network's throughput while protecting the shards from 51% attacks. Since the optimization problem is NP-complete, we develop a hybrid approach that decomposes the problem (using the binary search method) into sub-problems that can be solved effectively by the Lagrangian method. As a result, the proposed approach can obtain solutions in polynomial time, thereby enabling flexible shard reconfiguration and reducing the risk of corruption from the adversary. Extensive numerical experiments show that, compared to the state-of-the-art commercial solvers, our proposed approach can achieve up to 66.6% higher throughput in less than 1/30 running time. Moreover, the proposed approach can achieve global optimal solutions in most experiments.
[ { "version": "v1", "created": "Sun, 30 Apr 2023 02:11:35 GMT" } ]
2023-05-02T00:00:00
[ [ "Nguyen", "Cong T.", "" ], [ "Hoang", "Dinh Thai", "" ], [ "Nguyen", "Diep N.", "" ], [ "Xiao", "Yong", "" ], [ "Niyato", "Dusit", "" ], [ "Dutkiewicz", "Eryk", "" ] ]
new_dataset
0.96012
2305.00397
Su Pang
Su Pang, Daniel Morris, Hayder Radha
TransCAR: Transformer-based Camera-And-Radar Fusion for 3D Object Detection
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Despite radar's popularity in the automotive industry, for fusion-based 3D object detection most existing works focus on LiDAR and camera fusion. In this paper, we propose TransCAR, a Transformer-based Camera-And-Radar fusion solution for 3D object detection. Our TransCAR consists of two modules. The first module learns 2D features from surround-view camera images and then uses a sparse set of 3D object queries to index into these 2D features. The vision-updated queries then interact with each other via a transformer self-attention layer. The second module learns radar features from multiple radar scans and then applies a transformer decoder to learn the interactions between radar features and vision-updated queries. The cross-attention layer within the transformer decoder can adaptively learn the soft association between the radar features and vision-updated queries instead of a hard association based on sensor calibration only. Finally, our model estimates a bounding box per query using a set-to-set Hungarian loss, which enables the method to avoid non-maximum suppression. TransCAR improves the velocity estimation using the radar scans without temporal information. The superior experimental results of our TransCAR on the challenging nuScenes datasets illustrate that our TransCAR outperforms state-of-the-art camera-radar fusion-based 3D object detection approaches.
[ { "version": "v1", "created": "Sun, 30 Apr 2023 05:35:03 GMT" } ]
2023-05-02T00:00:00
[ [ "Pang", "Su", "" ], [ "Morris", "Daniel", "" ], [ "Radha", "Hayder", "" ] ]
new_dataset
0.998449
2305.00405
Graham H. Norton
Graham H. Norton
On Rueppel's Linear Complexity Conjecture
null
null
null
null
cs.SC
http://creativecommons.org/licenses/by/4.0/
Rueppel's conjecture on the linear complexity of the first $n$ terms of the sequence $(1,1,0,1,0^3,1,0^7,1,0^{15},\ldots)$ was first proved by Dai using the Euclidean algorithm. We have previously shown that we can attach a homogeneous (annihilator) ideal of $F[x,z]$ to the first $n$ terms of a sequence over a field $F$ and construct a pair of generating forms for it. This approach gives another proof of Rueppel's conjecture. We also prove additional properties of these forms and deduce the outputs of the LFSR synthesis algorithm applied to the first $n$ terms. Further, dehomogenising the leading generators yields the minimal polynomials of Dai.
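The LFSR synthesis algorithm referred to above is classically Berlekamp-Massey; the following GF(2) sketch (our own illustration, not the paper's ideal-theoretic construction) reproduces the linear complexity profile of Rueppel's sequence empirically, which can be checked against the conjectured perfect profile $\lceil n/2 \rceil$:

```python
def rueppel(n):
    # First n terms of (1,1,0,1,0^3,1,0^7,...): 1 exactly at indices 2^k - 1.
    return [1 if (i + 1) & i == 0 else 0 for i in range(n)]

def linear_complexity(s):
    # Berlekamp-Massey over GF(2): length of the shortest LFSR generating s.
    n = len(s)
    c, b = [0] * n, [0] * n       # current and previous connection polynomials
    c[0] = b[0] = 1
    L, m = 0, -1                  # current complexity, last length-change index
    for i in range(n):
        d = s[i]                  # discrepancy of the next term
        for j in range(1, L + 1):
            d ^= c[j] & s[i - j]
        if d:
            t, shift = c[:], i - m
            for j in range(n - shift):
                c[j + shift] ^= b[j]
            if 2 * L <= i:
                L, m, b = i + 1 - L, i, t
    return L

for n in range(1, 17):
    print(n, linear_complexity(rueppel(n)))   # profile: 1,1,2,2,3,3,...
```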
[ { "version": "v1", "created": "Sun, 30 Apr 2023 06:39:29 GMT" } ]
2023-05-02T00:00:00
[ [ "Norton", "Graham H.", "" ] ]
new_dataset
0.972151
2305.00412
Zhe Chen
Zhe Chen, Yang Yang, Anne Bettens, Youngho Eun, Xiaofeng Wu
A Simulation-Augmented Benchmarking Framework for Automatic RSO Streak Detection in Single-Frame Space Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting Resident Space Objects (RSOs) and preventing collisions with other satellites is crucial. Recently, deep convolutional neural networks (DCNNs) have shown superior performance in object detection when large-scale datasets are available. However, collecting rich data of RSOs is difficult because they occur very rarely in space images. Without sufficient data, it is challenging to comprehensively train DCNN detectors and make them effective for detecting RSOs in space images, let alone to estimate whether a detector is sufficiently robust. The lack of meaningful evaluation of different detectors could further affect the design and application of detection methods. To tackle this issue, we propose that space images containing RSOs can be simulated to complement the shortage of raw data for better benchmarking. Accordingly, we introduce a novel simulation-augmented benchmarking framework for RSO detection (SAB-RSOD). In our framework, by making the best use of the hardware parameters of the sensor that captures real-world space images, we first develop a high-fidelity RSO simulator that can generate various realistic space images. Then, we use this simulator to generate images that contain diversified RSOs in space and annotate them automatically. Later, we mix the synthetic images with the real-world images, obtaining around 500 images for training, while reserving the real-world images alone for evaluation. Under SAB-RSOD, we can effectively train different popular object detectors, such as YOLO and Faster R-CNN, enabling us to evaluate their performance thoroughly. The evaluation results show that the amount of available data and the image resolution are two key factors for robust RSO detection. Moreover, when using a lower resolution for higher efficiency, we demonstrate that a simple UNet-based detection method can already achieve high detection accuracy.
[ { "version": "v1", "created": "Sun, 30 Apr 2023 07:00:16 GMT" } ]
2023-05-02T00:00:00
[ [ "Chen", "Zhe", "" ], [ "Yang", "Yang", "" ], [ "Bettens", "Anne", "" ], [ "Eun", "Youngho", "" ], [ "Wu", "Xiaofeng", "" ] ]
new_dataset
0.980194
2305.00446
Hiuching Hung
Hiuchung Hung, Andreas Maier, Thorsten Piske
Building a Non-native Speech Corpus Featuring Chinese-English Bilingual Children: Compilation and Rationale
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper introduces a non-native speech corpus consisting of narratives from fifty 5- to 6-year-old Chinese-English bilingual children. Transcripts totaling 6.5 hours of children taking a narrative comprehension test in English (L2) are presented, along with human-rated scores and annotations of grammatical and pronunciation errors. The children also completed the parallel MAIN tests in Chinese (L1) for reference purposes. For all tests we recorded audio and video with self-developed remote collection methods. The video recordings help mitigate the low intelligibility of L2 narratives produced by young children during the transcription process. This corpus offers valuable resources for second language teaching and has the potential to enhance the overall performance of automatic speech recognition (ASR).
[ { "version": "v1", "created": "Sun, 30 Apr 2023 10:41:43 GMT" } ]
2023-05-02T00:00:00
[ [ "Hung", "Hiuchung", "" ], [ "Maier", "Andreas", "" ], [ "Piske", "Thorsten", "" ] ]
new_dataset
0.960678
2305.00517
Lakmal Meegahapola
Yasith Amarasinghe, Darshana Sandaruwan, Thilina Madusanka, Indika Perera, Lakmal Meegahapola
Multimodal Earable Sensing for Human Energy Expenditure Estimation
IEEE EMBC 2023 (45th Annual International Conference of the IEEE Engineering in Medicine and Biology Society)
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Energy Expenditure Estimation (EEE) is vital for maintaining weight, managing chronic diseases, achieving fitness goals, and improving overall health and well-being. Gold-standard measurements of energy expenditure are expensive and time-consuming, limiting utility and adoption. Prior work has used wearable sensors for EEE as a workaround. Moreover, earables (ear-worn sensing devices such as earbuds) have recently emerged as a sub-category of wearables with unique characteristics (i.e., small form factor, high adoption) and positioning on the human body (i.e., robust to motion, high stability, facing thin skin), opening up a novel sensing opportunity. However, earables with multimodal sensors have rarely been used for EEE, with data collected in multiple activity types. Further, it is unknown how earable sensors perform compared to standard wearable sensors worn on other body positions. In this study, using a publicly available dataset gathered from 17 participants, we evaluate the EEE performance of the multimodal sensors of earable devices and show that an MAE of 0.5 MET (RMSE = 0.67) can be achieved. Furthermore, we compare the EEE performance of three commercial wearable devices with the earable, demonstrating the competitive performance of earables.
[ { "version": "v1", "created": "Sun, 30 Apr 2023 16:06:06 GMT" } ]
2023-05-02T00:00:00
[ [ "Amarasinghe", "Yasith", "" ], [ "Sandaruwan", "Darshana", "" ], [ "Madusanka", "Thilina", "" ], [ "Perera", "Indika", "" ], [ "Meegahapola", "Lakmal", "" ] ]
new_dataset
0.992025
2305.00521
Ki Taekyung
Taekyung Ki and Dongchan Min
StyleLipSync: Style-based Personalized Lip-sync Video Generation
Our project page: https://stylelipsync.github.io
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we present StyleLipSync, a style-based personalized lip-sync video generative model that can generate identity-agnostic lip-synchronized video from arbitrary audio. To generate videos of arbitrary identities, we leverage an expressive lip prior from the semantically rich latent space of a pre-trained StyleGAN, in which video consistency can also be enforced with a linear transformation. In contrast to previous lip-sync methods, we introduce pose-aware masking that dynamically locates the mask to improve naturalness over frames, utilizing a 3D parametric mesh predictor frame by frame. Moreover, we propose a few-shot lip-sync adaptation method for an arbitrary person by introducing a sync regularizer that preserves lip-sync generalization while enhancing person-specific visual information. Extensive experiments demonstrate that our model can generate accurate lip-sync videos even in the zero-shot setting and enhance the characteristics of an unseen face using a few seconds of target video through the proposed adaptation method. Please refer to our project page.
[ { "version": "v1", "created": "Sun, 30 Apr 2023 16:38:42 GMT" } ]
2023-05-02T00:00:00
[ [ "Ki", "Taekyung", "" ], [ "Min", "Dongchan", "" ] ]
new_dataset
0.997588
2305.00522
Steven Piantadosi
Steven T. Piantadosi
How to enumerate trees from a context-free grammar
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
I present a simple algorithm for enumerating the trees generated by a Context Free Grammar (CFG). The algorithm uses a pairing function to form a bijection between CFG derivations and natural numbers, so that trees can be uniquely decoded from counting. This provides a general way to number expressions in natural logical languages, and potentially can be extended to other combinatorial problems. I also show how this algorithm may be generalized to more general forms of derivation, including analogs of Lempel-Ziv coding on trees.
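A minimal Python sketch of the idea, with a hypothetical toy grammar: each natural number selects a production by remainder and distributes the quotient among nonterminal children via inverse Cantor pairing. In this simplified form every tree is reached, though codes left over at terminal-only productions collide; the paper's construction refines this into a true bijection. Decoding terminates provided each nonterminal's first production is non-recursive:

```python
import math

GRAMMAR = {  # hypothetical toy grammar; rule 0 of each nonterminal terminates
    "S": [("a",), ("a", "S"), ("(", "S", ")")],
}

def unpair(z):
    # Inverse Cantor pairing: one natural number -> an ordered pair.
    w = (math.isqrt(8 * z + 1) - 1) // 2
    b = z - w * (w + 1) // 2
    return w - b, b

def split(z, k):
    # Split one natural number into k natural numbers by repeated unpairing.
    parts = []
    for _ in range(k - 1):
        z, b = unpair(z)
        parts.append(b)
    return parts + [z] if k else []

def decode(n, nt="S"):
    # Map the natural number n to a derivation tree rooted at nonterminal nt.
    rules = GRAMMAR[nt]
    idx, rest = n % len(rules), n // len(rules)
    rhs = rules[idx]
    codes = iter(split(rest, sum(s in GRAMMAR for s in rhs)))
    return (nt, [decode(next(codes), s) if s in GRAMMAR else s for s in rhs])

for n in range(5):
    print(n, decode(n))
```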
[ { "version": "v1", "created": "Sun, 30 Apr 2023 16:40:54 GMT" } ]
2023-05-02T00:00:00
[ [ "Piantadosi", "Steven T.", "" ] ]
new_dataset
0.995551
2305.00538
Yanfang Le
Yanfang Le, Jeongkeun Lee, Jeremias Blendin, Jiayi Chen, Georgios Nikolaidis, Rong Pan, Robert Soule, Aditya Akella, Pedro Yebenes Segura, Arjun Singhvi, Yuliang Li, Qingkai Meng, Changhoon Kim, Serhat Arslan
SFC: Near-Source Congestion Signaling and Flow Control
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
State-of-the-art congestion control algorithms for data centers alone do not cope well with transient congestion and high traffic bursts. To help with these, we revisit the concept of direct \emph{backward} feedback from switches and propose Back-to-Sender (BTS) signaling to many concurrent incast senders. Combining it with our novel approach to in-network caching, we achieve near-source sub-RTT congestion signaling. Source Flow Control (SFC) combines these two simple signaling mechanisms to instantly pause traffic sources, hence avoiding the head-of-line blocking problem of conventional hop-by-hop flow control. Our prototype system and scale simulations demonstrate that near-source signaling can significantly reduce the message completion time of various workloads in the presence of incast, complementing existing congestion control algorithms. Our results show that SFC can reduce the $99^{th}$-percentile flow completion times by $1.2-6\times$ and the peak switch buffer usage by $2-3\times$ compared to the recent incast solutions.
[ { "version": "v1", "created": "Sun, 30 Apr 2023 17:33:50 GMT" } ]
2023-05-02T00:00:00
[ [ "Le", "Yanfang", "" ], [ "Lee", "Jeongkeun", "" ], [ "Blendin", "Jeremias", "" ], [ "Chen", "Jiayi", "" ], [ "Nikolaidis", "Georgios", "" ], [ "Pan", "Rong", "" ], [ "Soule", "Robert", "" ], [ "Akella", "Aditya", "" ], [ "Segura", "Pedro Yebenes", "" ], [ "Singhvi", "Arjun", "" ], [ "Li", "Yuliang", "" ], [ "Meng", "Qingkai", "" ], [ "Kim", "Changhoon", "" ], [ "Arslan", "Serhat", "" ] ]
new_dataset
0.996951
2305.00546
Lesley Frew
Lesley Frew, Michael L. Nelson, Michele C. Weigle
Making Changes in Webpages Discoverable: A Change-Text Search Interface for Web Archives
In Proceedings of JCDL 2023; 20 pages, 11 figures, 2 tables
null
null
null
cs.IR cs.DL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Webpages change over time, and web archives hold copies of historical versions of webpages. Users of web archives, such as journalists, want to find and view changes on webpages over time. However, the current search interfaces for web archives do not support this task. For the web archives that include a full-text search feature, multiple versions of the same webpage that match the search query are shown individually without enumerating changes, or are grouped together in a way that hides changes. We present a change text search engine that allows users to find changes in webpages. We describe the implementation of the search engine backend and frontend, including a tool that allows users to view the changes between two webpage versions in context as an animation. We evaluate the search engine with U.S. federal environmental webpages that changed between 2016 and 2020. The change text search results page can clearly show when terms and phrases were added or removed from webpages. The inverted index can also be queried to identify salient and frequently deleted terms in a corpus.
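At its core, the change-text idea contrasts the vocabularies of two page versions; a toy stand-in for the system's index (not its actual implementation) is a set difference over tokens:

```python
def term_changes(old_text: str, new_text: str):
    """Return (added, removed) vocabulary between two page versions --
    a toy stand-in for the change-text index described above."""
    old, new = set(old_text.lower().split()), set(new_text.lower().split())
    return sorted(new - old), sorted(old - new)

added, removed = term_changes("climate change page", "updated page")
print(added, removed)   # ['updated'] ['change', 'climate']
```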
[ { "version": "v1", "created": "Sun, 30 Apr 2023 18:16:06 GMT" } ]
2023-05-02T00:00:00
[ [ "Frew", "Lesley", "" ], [ "Nelson", "Michael L.", "" ], [ "Weigle", "Michele C.", "" ] ]
new_dataset
0.964485
2305.00582
Augustine Musukwa
Augustine Musukwa
On APN functions and their derivatives
16
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
We determine a connection between the weight of a Boolean function and the total weight of its first-order derivatives. The relationship established is used to study some cryptographic properties of Boolean functions. We establish a characterization of APN permutations in terms of the weight of the first-order derivatives of their components. We also characterize APN functions by the total weight of the second-order derivatives of their components. Finally, we determine the total weight of the first-order and second-order derivatives for permutations and for bent, partially-bent, quadratic, plateaued, and balanced functions.
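For context, the first-order derivative in question is $D_aF(x) = F(x) + F(x+a)$, and $F$ is APN exactly when $D_aF(x) = b$ has at most two solutions for every $a \neq 0$. The following self-contained check over $GF(2^3)$ uses the Gold function $x^3$, a standard APN example; it illustrates the definition, not the paper's weight-based characterization:

```python
from collections import Counter

def gf8_mul(a, b):
    # Multiplication in GF(2^3) with modulus x^3 + x + 1 (0b1011).
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & 0b1000:
            a ^= 0b1011
    return r

def F(x):
    return gf8_mul(gf8_mul(x, x), x)   # the Gold function x^3

def is_apn(f, n=3):
    # F is APN iff F(x) ^ F(x ^ a) = b has at most 2 solutions for a != 0.
    for a in range(1, 1 << n):
        counts = Counter(f(x) ^ f(x ^ a) for x in range(1 << n))
        if max(counts.values()) > 2:
            return False
    return True

print(is_apn(F))   # True
```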
[ { "version": "v1", "created": "Sun, 30 Apr 2023 21:22:36 GMT" } ]
2023-05-02T00:00:00
[ [ "Musukwa", "Augustine", "" ] ]
new_dataset
0.997526
2305.00598
Antonio Abelem
Antonio Abelem, Don Towsley, Gayane Vardoyan
Quantum Internet: The Future of Internetworking
Shortcourse presented in the XXXVIII Brazilian Symposium on Computer Networks and Distributed Systems (SBRC 2020). arXiv admin note: text overlap with arXiv:1912.06642, arXiv:1810.08421, arXiv:quant-ph/0607065, arXiv:1610.05238 by other authors
Shortcourses' Book of the XXXVIII Brazilian Symposium on Computer Networks and Distributed Systems (SBRC 2020). 1ed.: SBC, 2020, v.1, ISBN-13: 978-65-87003-33-7, p. 48-90
10.5753/sbc.5033.7.2
null
cs.NI quant-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Quantum information, computation, and communication will have a great impact on our world. One important subfield will be quantum networking and the quantum Internet. The purpose of a quantum Internet is to enable applications that are fundamentally out of reach for the classical Internet. Quantum networks bring new capabilities to communication systems. They allow the parties to generate long-distance quantum entanglement, which serves a number of tasks including the generation of multiparty shared secrets whose security relies only on the laws of physics, distributed quantum computing, improved sensing, quantum computing on encrypted data, and secure private-bid auctions. However, quantum signals are fragile and, in general, cannot be copied or amplified. In order to enable widespread use and application development, it is essential to develop methods that allow quantum protocols to connect to the underlying hardware implementation transparently and to make fast and reactive decisions for generating entanglement in the network to mitigate limited qubit lifetimes. Architectures for large-scale quantum internetworking are in development, paralleling theoretical and experimental work on physical layers and low-level error management and connection technologies. This chapter aims to present the main concepts, challenges, and opportunities for research in quantum information, quantum computing, and quantum networking.
[ { "version": "v1", "created": "Sun, 30 Apr 2023 23:17:47 GMT" } ]
2023-05-02T00:00:00
[ [ "Abelem", "Antonio", "" ], [ "Towsley", "Don", "" ], [ "Vardoyan", "Gayane", "" ] ]
new_dataset
0.955259
2305.00603
Tianxiang Hao
Tianxiang Hao, Hui Chen, Yuchen Guo and Guiguang Ding
Consolidator: Mergeable Adapter with Grouped Connections for Visual Adaptation
ICLR 2023
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, transformers have shown strong ability as visual feature extractors, surpassing traditional convolution-based models in various scenarios. However, the success of vision transformers largely owes to their capacity to accommodate numerous parameters. As a result, new challenges arise when adapting large models to downstream tasks. On the one hand, classic fine-tuning tunes all parameters in a huge model for every task and thus easily falls into overfitting, leading to inferior performance. On the other hand, on resource-limited devices, fine-tuning stores a full copy of the parameters and thus is usually impracticable given the shortage of storage space. However, few works have focused on how to efficiently and effectively transfer knowledge in a vision transformer. Existing methods did not dive into the properties of visual features, leading to inferior performance. Moreover, some of them incur heavy inference costs even though they save storage. To tackle these problems, we propose the consolidator, which modifies the pre-trained model with a small set of additional tunable parameters that temporarily store task-specific knowledge while the backbone model is frozen. Motivated by the success of group-wise convolution, we adopt grouped connections across the features extracted by fully connected layers to construct the tunable parts of a consolidator. To further enhance the model's capacity to transfer knowledge under a constrained storage budget and keep inference efficient, we consolidate the parameters in two stages: 1. between adaptation and storage, and 2. between loading and inference. On a series of downstream visual tasks, our consolidator can reach up to 7.56 points better accuracy than full fine-tuning with merely 0.35% of the parameters, and outperforms state-of-the-art parameter-efficient tuning methods by a clear margin. Code is available at https://github.com/beyondhtx/Consolidator.
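As a rough sketch of the grouped-connections idea (our own minimal rendering, not the paper's consolidator, which additionally merges the tuned parameters back into the frozen weights), a grouped 1x1 convolution over token features adds tunable connections at roughly 1/groups the parameter cost of a dense layer:

```python
import torch
import torch.nn as nn

class GroupedAdapter(nn.Module):
    """Tunable grouped connections over the features of a frozen layer:
    a 1x1 convolution with groups > 1 connects channels only within their
    group, so the added parameter count is about 1/groups of a dense layer."""
    def __init__(self, dim, groups=4):
        super().__init__()
        self.proj = nn.Conv1d(dim, dim, kernel_size=1, groups=groups, bias=False)
        nn.init.zeros_(self.proj.weight)   # start as an identity-preserving residual

    def forward(self, x):                  # x: (batch, tokens, dim)
        return x + self.proj(x.transpose(1, 2)).transpose(1, 2)

x = torch.randn(2, 16, 64)
print(GroupedAdapter(64)(x).shape)         # torch.Size([2, 16, 64])
```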
[ { "version": "v1", "created": "Sun, 30 Apr 2023 23:59:02 GMT" } ]
2023-05-02T00:00:00
[ [ "Hao", "Tianxiang", "" ], [ "Chen", "Hui", "" ], [ "Guo", "Yuchen", "" ], [ "Ding", "Guiguang", "" ] ]
new_dataset
0.961165
2305.00604
Felix Petersen
Felix Petersen, Tobias Sutter, Christian Borgelt, Dongsung Huh, Hilde Kuehne, Yuekai Sun, Oliver Deussen
ISAAC Newton: Input-based Approximate Curvature for Newton's Method
Published at ICLR 2023, Code @ https://github.com/Felix-Petersen/isaac, Video @ https://youtu.be/7RKRX-MdwqM
null
null
null
cs.LG cs.CV math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present ISAAC (Input-baSed ApproximAte Curvature), a novel method that conditions the gradient using selected second-order information and has an asymptotically vanishing computational overhead, assuming a batch size smaller than the number of neurons. We show that it is possible to compute a good conditioner based on only the input to a respective layer without a substantial computational overhead. The proposed method allows effective training even in small-batch stochastic regimes, which makes it competitive with first-order as well as second-order methods.
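A schematic of "conditioning based only on the layer input", under our own assumptions rather than the paper's exact update rule: precondition a linear layer's gradient with the inverse regularized second moment of that layer's input.

```python
import numpy as np

def input_conditioned_grad(grad_w, x, lam=1e-2):
    """grad_w: (out, in) gradient of a linear layer; x: (batch, in) layer
    input. Returns grad_w @ (x^T x / batch + lam*I)^{-1}, a hypothetical
    input-only curvature approximation in the spirit of the abstract."""
    g = x.T @ x / x.shape[0] + lam * np.eye(x.shape[1])
    return grad_w @ np.linalg.inv(g)

rng = np.random.default_rng(0)
gw, x = rng.normal(size=(8, 16)), rng.normal(size=(32, 16))
print(input_conditioned_grad(gw, x).shape)   # (8, 16)
```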
[ { "version": "v1", "created": "Mon, 1 May 2023 00:00:04 GMT" } ]
2023-05-02T00:00:00
[ [ "Petersen", "Felix", "" ], [ "Sutter", "Tobias", "" ], [ "Borgelt", "Christian", "" ], [ "Huh", "Dongsung", "" ], [ "Kuehne", "Hilde", "" ], [ "Sun", "Yuekai", "" ], [ "Deussen", "Oliver", "" ] ]
new_dataset
0.988133
2305.00606
Derguene Mbaye
Derguene Mbaye, Moussa Diallo, Thierno Ibrahima Diop
Low-Resourced Machine Translation for Senegalese Wolof Language
14 pages, 5 figures, 2 Tables, 8th International Congress on Information and Communication Technology (ICICT 2023)
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Natural Language Processing (NLP) research has made great advancements in recent years, with major breakthroughs that have established new benchmarks. However, these advances have mainly benefited a certain group of languages, commonly referred to as resource-rich, such as English and French. The majority of other languages, with weaker resources, are left behind, which is the case for most African languages, including Wolof. In this work, we present a parallel Wolof/French corpus of 123,000 sentences on which we conducted experiments with machine translation models based on Recurrent Neural Networks (RNNs) in different data configurations. We noted performance gains with the models trained on subword-segmented data, as well as with those trained on the French-English language pair compared to those trained on the French-Wolof pair, under the same experimental conditions.
[ { "version": "v1", "created": "Mon, 1 May 2023 00:04:19 GMT" } ]
2023-05-02T00:00:00
[ [ "Mbaye", "Derguene", "" ], [ "Diallo", "Moussa", "" ], [ "Diop", "Thierno Ibrahima", "" ] ]
new_dataset
0.98662
2305.00645
Qifan Wang
Qifan Wang, Shujie Cui, Lei Zhou, Ye Dong, Jianli Bai, Yun Sing Koh and Giovanni Russello
GTree: GPU-Friendly Privacy-preserving Decision Tree Training and Inference
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The decision tree (DT) is a widely used machine learning model due to its versatility, speed, and interpretability. However, for privacy-sensitive applications, outsourcing DT training and inference to cloud platforms raises concerns about data privacy. Researchers have developed privacy-preserving approaches for DT training and inference using cryptographic primitives, such as Secure Multi-Party Computation (MPC). While these approaches have shown progress, they still suffer from heavy computation and communication overheads. A few recent works employ Graphics Processing Units (GPUs) to improve the performance of MPC-protected deep learning. This raises a natural question: \textit{can MPC-protected DT training and inference be accelerated by GPU?} We present GTree, the first scheme that uses GPUs to accelerate MPC-protected secure DT training and inference. GTree is built across 3 parties who securely and jointly perform each step of DT training and inference with the GPU. Each MPC protocol in GTree is designed in a GPU-friendly version. The performance evaluation shows that GTree achieves ${\thicksim}11{\times}$ and ${\thicksim}21{\times}$ improvements in training on the SPECT and Adult datasets, respectively, compared to the prior most efficient CPU-based work. For inference, GTree shows its superior efficiency when the DT has fewer than 10 levels, and is $126\times$ faster than the prior most efficient work when inferring $10^4$ instances with a tree of 7 levels. GTree also achieves a stronger security guarantee than prior solutions: it leaks only the tree depth and the size of the data samples, while prior solutions also leak the tree structure. With \textit{oblivious array access}, the access pattern on the GPU is also protected.
[ { "version": "v1", "created": "Mon, 1 May 2023 03:35:43 GMT" } ]
2023-05-02T00:00:00
[ [ "Wang", "Qifan", "" ], [ "Cui", "Shujie", "" ], [ "Zhou", "Lei", "" ], [ "Dong", "Ye", "" ], [ "Bai", "Jianli", "" ], [ "Koh", "Yun Sing", "" ], [ "Russello", "Giovanni", "" ] ]
new_dataset
0.973189
2305.00671
Fangjian Lin
Yizhe Ma, Fangjian Lin, Sitong Wu, Shengwei Tian, Long Yu
PRSeg: A Lightweight Patch Rotate MLP Decoder for Semantic Segmentation
Accepted by IEEE TCSVT
null
10.1109/TCSVT.2023.3271523
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The lightweight MLP-based decoder has become increasingly promising for semantic segmentation. However, the channel-wise MLP cannot expand the receptive field, and thus lacks the context modeling capacity that is critical to semantic segmentation. In this paper, we propose a parameter-free patch rotate operation to reorganize the pixels spatially. It first divides the feature map into multiple groups and then rotates the patches within each group. Based on the proposed patch rotate operation, we design a novel segmentation network, named PRSeg, which includes an off-the-shelf backbone and a lightweight Patch Rotate MLP decoder containing multiple Dynamic Patch Rotate Blocks (DPR-Blocks). In each DPR-Block, the fully connected layer is performed following a Patch Rotate Module (PRM) to exchange spatial information between pixels. Specifically, in the PRM, the feature map is first split into a reserved part and a rotated part along the channel dimension according to the predicted probability of the Dynamic Channel Selection Module (DCSM), and our proposed patch rotate operation is only performed on the rotated part. Extensive experiments on the ADE20K, Cityscapes and COCO-Stuff 10K datasets prove the effectiveness of our approach. We expect that our PRSeg can promote the development of MLP-based decoders in semantic segmentation.
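One plausible minimal rendering of a parameter-free patch rotate (an illustrative sketch; the paper's exact operation and the DCSM gating are not reproduced here): split the channels into groups and cyclically shift each group's patch grid by a different offset, so a following channel-wise MLP sees spatially mixed context.

```python
import torch

def patch_rotate(x, groups=4, patch=2):
    """Parameter-free spatial reorganization: view (B, C, H, W) as a grid of
    patch x patch tiles per channel group, then roll group g's tile grid by
    g rows. A hypothetical sketch, not PRSeg's exact operation."""
    b, c, h, w = x.shape
    x = x.view(b, groups, c // groups, h // patch, patch, w // patch, patch).clone()
    for g in range(groups):
        x[:, g] = torch.roll(x[:, g], shifts=g, dims=2)  # dim 2 = tile-grid rows
    return x.reshape(b, c, h, w)

print(patch_rotate(torch.randn(1, 8, 8, 8)).shape)   # torch.Size([1, 8, 8, 8])
```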
[ { "version": "v1", "created": "Mon, 1 May 2023 06:03:16 GMT" } ]
2023-05-02T00:00:00
[ [ "Ma", "Yizhe", "" ], [ "Lin", "Fangjian", "" ], [ "Wu", "Sitong", "" ], [ "Tian", "Shengwei", "" ], [ "Yu", "Long", "" ] ]
new_dataset
0.998668
2305.00696
Litao Yang
Litao Yang, Deval Mehta, Sidong Liu, Dwarikanath Mahapatra, Antonio Di Ieva, Zongyuan Ge
TPMIL: Trainable Prototype Enhanced Multiple Instance Learning for Whole Slide Image Classification
Accepted for MIDL 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Digital pathology based on whole slide images (WSIs) plays a key role in cancer diagnosis and clinical practice. Due to the high resolution of WSIs and the unavailability of patch-level annotations, WSI classification is usually formulated as a weakly supervised problem, which relies on multiple instance learning (MIL) over the patches of a WSI. In this paper, we aim to learn an optimal patch-level feature space by integrating prototype learning with MIL. To this end, we develop a Trainable Prototype enhanced deep MIL (TPMIL) framework for weakly supervised WSI classification. In contrast to conventional methods, which rely on a certain number of selected patches for feature space refinement, we softly cluster all the instances by allocating them to their corresponding prototypes. Additionally, our method is able to reveal correlations between different tumor subtypes through the distances between the corresponding trained prototypes. More importantly, TPMIL also provides more accurate interpretability, based on the distance of the instances from the trained prototypes, which serves as an alternative to conventional attention-score-based interpretability. We test our method on two WSI datasets and it achieves a new SOTA. GitHub repository: https://github.com/LitaoYang-Jet/TPMIL
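The soft clustering step admits a one-line rendering (a sketch under our own assumptions; `tau` is a hypothetical temperature): allocate every patch feature to the trainable prototypes by a negative-distance softmax rather than hard top-k patch selection.

```python
import torch

def soft_assign(patch_feats, prototypes, tau=1.0):
    """Softly allocate every instance (patch feature) to trainable
    prototypes via a negative-distance softmax; a sketch of the soft
    clustering described above, not TPMIL's exact formulation."""
    d = torch.cdist(patch_feats, prototypes)   # (N, K) Euclidean distances
    return torch.softmax(-d / tau, dim=1)      # (N, K) soft memberships

feats, protos = torch.randn(100, 32), torch.randn(4, 32)
print(soft_assign(feats, protos).sum(dim=1)[:3])   # each row sums to 1
```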
[ { "version": "v1", "created": "Mon, 1 May 2023 07:39:19 GMT" } ]
2023-05-02T00:00:00
[ [ "Yang", "Litao", "" ], [ "Mehta", "Deval", "" ], [ "Liu", "Sidong", "" ], [ "Mahapatra", "Dwarikanath", "" ], [ "Di Ieva", "Antonio", "" ], [ "Ge", "Zongyuan", "" ] ]
new_dataset
0.999636
2305.00767
Cong Cao
Huanjing Yue, Cong Cao, Lei Liao, and Jingyu Yang
RViDeformer: Efficient Raw Video Denoising Transformer with a Larger Benchmark Dataset
16 pages, 15 figures
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, raw video denoising has garnered increased attention due to its consistency with the imaging process and the well-studied noise modeling in the raw domain. However, two problems still hinder denoising performance. First, there is no large dataset with realistic motions for supervised raw video denoising, as capturing noisy and clean frames of real dynamic scenes is difficult. To address this, we propose recapturing existing high-resolution videos displayed on a 4K screen with high-low ISO settings to construct noisy-clean paired frames. In this way, we construct a video denoising dataset (named ReCRVD) with 120 groups of noisy-clean videos, with ISO values ranging from 1600 to 25600. Second, while non-local temporal-spatial attention is beneficial for denoising, it often incurs heavy computation costs. We propose an efficient raw video denoising transformer network (RViDeformer) that explores both short- and long-distance correlations. Specifically, we propose multi-branch spatial and temporal attention modules, which explore patch correlations from the local window, local low-resolution window, global downsampled window, and neighbor-involved window, and then fuse them together. We employ reparameterization to reduce computation costs. Our network is trained in both supervised and unsupervised manners, achieving the best performance compared with state-of-the-art methods. Additionally, the model trained with our proposed dataset (ReCRVD) outperforms the model trained with the previous benchmark dataset (CRVD) when evaluated on real-world outdoor noisy videos. Our code and dataset will be released after the acceptance of this work.
[ { "version": "v1", "created": "Mon, 1 May 2023 11:06:58 GMT" } ]
2023-05-02T00:00:00
[ [ "Yue", "Huanjing", "" ], [ "Cao", "Cong", "" ], [ "Liao", "Lei", "" ], [ "Yang", "Jingyu", "" ] ]
new_dataset
0.989443
2305.00813
Kaushik Roy
Amit Sheth, Kaushik Roy, Manas Gaur
Neurosymbolic AI - Why, What, and How
To appear in IEEE Intelligent Systems
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Humans interact with the environment using a combination of perception - transforming sensory inputs from the environment into symbols - and cognition - mapping symbols to knowledge about the environment to support abstraction, reasoning by analogy, and long-term planning. Human-perception-inspired machine perception, in the context of AI, refers to large-scale pattern recognition from raw data using neural networks trained with self-supervised learning objectives such as next-word prediction or object recognition. On the other hand, machine cognition encompasses more complex computations, such as using knowledge of the environment to guide reasoning, analogy, and long-term planning. Humans can also control and explain their cognitive functions. This seems to require the retention of symbolic mappings from perception outputs to knowledge about the environment. For example, humans can follow and explain the guidelines and safety constraints driving their decision-making in safety-critical applications such as healthcare, criminal justice, and autonomous driving. This article introduces the rapidly emerging paradigm of Neurosymbolic AI, which combines neural networks and knowledge-guided symbolic approaches to create more capable and flexible AI systems. These systems have immense potential to advance both the algorithm-level (e.g., abstraction, analogy, reasoning) and application-level (e.g., explainable and safety-constrained decision-making) capabilities of AI systems.
[ { "version": "v1", "created": "Mon, 1 May 2023 13:27:22 GMT" } ]
2023-05-02T00:00:00
[ [ "Sheth", "Amit", "" ], [ "Roy", "Kaushik", "" ], [ "Gaur", "Manas", "" ] ]
new_dataset
0.992788
2305.00818
Joseph Chow
Bingqing Liu, Joseph Y. J. Chow
On-demand Mobility-as-a-Service platform assignment games with guaranteed stable outcomes
null
null
null
null
cs.GT cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Mobility-as-a-Service (MaaS) systems are two-sided markets, with two mutually exclusive sets of agents, i.e., travelers/users and operators, forming a mobility ecosystem in which multiple operators compete or cooperate to serve customers under a governing platform provider. This study proposes a MaaS platform equilibrium model based on many-to-many assignment games incorporating both fixed-route transit services and mobility-on-demand (MOD) services. The matching problem is formulated as a multicommodity flow network design problem under congestion. The local stability conditions reflect a generalization of Wardrop's principles that include operator decisions. A subsidy mechanism from the platform is proposed to guarantee local stability. An exact solution algorithm is proposed based on a branch and bound framework with a Frank-Wolfe algorithm integrated with Lagrangian relaxation and subgradient optimization, which guarantees the optimality of the matching problem but not stability. A heuristic which integrates stability conditions and subsidy design is proposed, which reaches either the optimal MaaS platform equilibrium solution with global stability, or a feasible locally stable solution that may require subsidy. A worst-case bound and condition for obtaining an exact solution are both identified. Two sets of reproducible numerical experiments are conducted. The first, on a toy network, verifies the model and algorithm, and illustrates the differences between local and global stability. The second, on an expanded Sioux Falls network with 82 nodes and 748 links, derives generalizable insights about the model for coopetitive interdependencies between operators sharing the platform, handling congestion effects in MOD services, effects of local stability on investment impacts, and illustrating inequities that may arise under heterogeneous populations.
[ { "version": "v1", "created": "Mon, 1 May 2023 13:33:16 GMT" } ]
2023-05-02T00:00:00
[ [ "Liu", "Bingqing", "" ], [ "Chow", "Joseph Y. J.", "" ] ]
new_dataset
0.997278
2305.00885
Glaucia Melo Dos Santos
Glaucia Melo, Luis Fernando Lins, Paulo Alencar, Donald Cowan
Supporting Contextual Conversational Agent-Based Software Development
Accepted on BotSE Workshop 2023
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Software Development (SD) is remarkably dynamic and critically dependent on the knowledge acquired by the project's software developers as the project progresses. Software developers need to understand large amounts of information related to the tasks at hand. This information (context) is often not explicit, as it can be lost in large documentation repositories, in a team member's brain, or beyond their cognitive memory capacity. These contexts include tool features, integration strategies, data structures, code syntax, approaches to tasks, project definitions, and even implicit or tacit contexts, which add significant complexity to the SD process. Current software development practices still lack techniques that use existing SD execution information and context to provide developers with relevant process guidance, augmenting their capacity to do their jobs with the applicable information available. This paper presents ongoing and future research on an approach to support conversational agent-based, knowledge-augmented software development. Developers benefit by receiving recommendations about task-related information and the workflows they need to execute. This work advances human-computer interaction patterns in workflow engines, from graphical user interfaces to conversational patterns in software engineering.
[ { "version": "v1", "created": "Mon, 1 May 2023 15:34:21 GMT" } ]
2023-05-02T00:00:00
[ [ "Melo", "Glaucia", "" ], [ "Lins", "Luis Fernando", "" ], [ "Alencar", "Paulo", "" ], [ "Cowan", "Donald", "" ] ]
new_dataset
0.997769
2305.00911
Jai Prakash
Jai Prakash, Michele Vignati and Edoardo Sabbioni
SRPT vs Smith Predictor for Vehicle Teleoperation
This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Vehicle teleoperation has potential applications in fallback solutions for autonomous vehicles, remote delivery services, and hazardous operations. However, network delays and limited situational awareness can compromise teleoperation performance and increase the cognitive workload of human operators. To address these issues, we previously introduced the novel successive reference-pose tracking (SRPT) approach, which transmits successive reference poses to the vehicle instead of steering commands. This paper compares the stability and performance of SRPT with Smith-predictor-based approaches for direct vehicle teleoperation in challenging scenarios. The Smith predictor approach is further split into two variants: one with a Lookahead driver and one with a Stanley driver. Simulations are conducted in a Simulink environment, considering variable network delays and different vehicle speeds, and include maneuvers such as tight corners, slalom, low-adhesion roads, and strong crosswinds. The results show that the SRPT approach significantly improves stability and reference-tracking performance, with a negligible effect of network delays on path tracking. Our findings demonstrate the effectiveness of SRPT in eliminating the detrimental effect of network delays in vehicle teleoperation.
[ { "version": "v1", "created": "Thu, 27 Apr 2023 17:57:38 GMT" } ]
2023-05-02T00:00:00
[ [ "Prakash", "Jai", "" ], [ "Vignati", "Michele", "" ], [ "Sabbioni", "Edoardo", "" ] ]
new_dataset
0.998195
2305.00925
Joseph Bao
Joseph Bao, Murat Kantarcioglu, Yevgeniy Vorobeychik, Charles Kamhoua
IoTFlowGenerator: Crafting Synthetic IoT Device Traffic Flows for Cyber Deception
FLAIRS-36
null
null
null
cs.CR cs.LG
http://creativecommons.org/licenses/by/4.0/
Over the years, honeypots have emerged as an important security tool to understand attacker intent and deceive attackers into spending time and resources. Recently, honeypots have been deployed for Internet of Things (IoT) devices to lure attackers and learn their behavior. However, most existing IoT honeypots, even high-interaction ones, are easily detected by an attacker who can observe honeypot traffic, due to the lack of real network traffic originating from the honeypot. This implies that, to build better honeypots and enhance cyber deception capabilities, IoT honeypots need to generate realistic network traffic flows. To achieve this goal, we propose a novel deep learning based approach for generating traffic flows that mimic real network traffic due to user and IoT device interactions. A key technical challenge that our approach overcomes is the scarcity of device-specific IoT traffic data with which to effectively train a generator. We address this challenge by leveraging a core generative adversarial learning algorithm for sequences along with domain-specific knowledge common to IoT devices. Through an extensive experimental evaluation with 18 IoT devices, we demonstrate that the proposed synthetic IoT traffic generation tool significantly outperforms state-of-the-art sequence and packet generators in remaining indistinguishable from real traffic, even to an adaptive attacker.
[ { "version": "v1", "created": "Mon, 1 May 2023 16:24:07 GMT" } ]
2023-05-02T00:00:00
[ [ "Bao", "Joseph", "" ], [ "Kantarcioglu", "Murat", "" ], [ "Vorobeychik", "Yevgeniy", "" ], [ "Kamhoua", "Charles", "" ] ]
new_dataset
0.993258
2305.00936
Sihun Cha
Sihun Cha, Kwanggyoon Seo, Amirsaman Ashtari, Junyong Noh
Generating Texture for 3D Human Avatar from a Single Image using Sampling and Refinement Networks
null
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
There has been significant progress in generating an animatable 3D human avatar from a single image. However, recovering the texture for the 3D human avatar from a single image has been relatively less addressed. Because the generated 3D human avatar reveals the occluded texture of the given image as it moves, it is critical to synthesize the occluded texture pattern that is unseen from the source image. To generate a plausible texture map for 3D human avatars, the occluded texture pattern needs to be synthesized with respect to the visible texture from the given image. Moreover, the generated texture should align with the surface of the target 3D mesh. In this paper, we propose a texture synthesis method for a 3D human avatar that incorporates geometry information. The proposed method consists of two convolutional networks for the sampling and refining processes. The sampler network fills in the occluded regions of the source image and aligns the texture with the surface of the target 3D mesh using the geometry information. The sampled texture is further refined and adjusted by the refiner network. To maintain the clear details in the given image, both the sampled and the refined textures are blended to produce the final texture map. To effectively guide the sampler network to achieve its goal, we designed a curriculum learning scheme that starts from a simple sampling task and gradually progresses to the task where the alignment needs to be considered. We conducted experiments to show that our method outperforms previous methods qualitatively and quantitatively.
[ { "version": "v1", "created": "Mon, 1 May 2023 16:44:02 GMT" } ]
2023-05-02T00:00:00
[ [ "Cha", "Sihun", "" ], [ "Seo", "Kwanggyoon", "" ], [ "Ashtari", "Amirsaman", "" ], [ "Noh", "Junyong", "" ] ]
new_dataset
0.989041
2305.00942
Lizhen Wang
Lizhen Wang, Xiaochen Zhao, Jingxiang Sun, Yuxiang Zhang, Hongwen Zhang, Tao Yu, Yebin Liu
StyleAvatar: Real-time Photo-realistic Portrait Avatar from a Single Video
8 pages, 5 figures, SIGGRAPH 2023 Conference Proceedings
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Face reenactment methods attempt to restore and re-animate portrait videos as realistically as possible. Existing methods face a dilemma in quality versus controllability: 2D GAN-based methods achieve higher image quality but suffer in fine-grained control of facial attributes compared with 3D counterparts. In this work, we propose StyleAvatar, a real-time photo-realistic portrait avatar reconstruction method using StyleGAN-based networks, which can generate high-fidelity portrait avatars with faithful expression control. We expand the capabilities of StyleGAN by introducing a compositional representation and a sliding window augmentation method, which enable faster convergence and improve translation generalization. Specifically, we divide the portrait scenes into three parts for adaptive adjustments: facial region, non-facial foreground region, and the background. Besides, our network leverages the best of UNet, StyleGAN and time coding for video learning, which enables high-quality video generation. Furthermore, a sliding window augmentation method together with a pre-training strategy are proposed to improve translation generalization and training performance, respectively. The proposed network can converge within two hours while ensuring high image quality and a forward rendering time of only 20 milliseconds. Furthermore, we propose a real-time live system, which further pushes research into applications. Results and experiments demonstrate the superiority of our method in terms of image quality, full portrait video generation, and real-time re-animation compared to existing facial reenactment methods. Training and inference code for this paper are at https://github.com/LizhenWangT/StyleAvatar.
[ { "version": "v1", "created": "Mon, 1 May 2023 16:54:35 GMT" } ]
2023-05-02T00:00:00
[ [ "Wang", "Lizhen", "" ], [ "Zhao", "Xiaochen", "" ], [ "Sun", "Jingxiang", "" ], [ "Zhang", "Yuxiang", "" ], [ "Zhang", "Hongwen", "" ], [ "Yu", "Tao", "" ], [ "Liu", "Yebin", "" ] ]
new_dataset
0.993377
2305.00956
Debarnab Mitra
Debarnab Mitra, Lev Tauz, Murat Can Sarihan, Chee Wei Wong, and Lara Dolecek
Non-Binary LDPC Code Design for Energy-Time Entanglement Quantum Key Distribution
5 pages, 4 figures, submitted to International Symposium on Topics in Coding
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
In energy-time entanglement Quantum Key Distribution (QKD), two users extract a shared secret key from the arrival times (discretized as symbols) of entangled photon pairs. In prior work, Zhou et al. proposed a multi-level coding (MLC) scheme that splits the observed symbols into bit layers and utilizes binary Low-Density Parity-Check (LDPC) codes for reconciliation of the symbols. While binary LDPC codes offer low latency for key generation, splitting the symbols into bits results in a loss of key generation rate due to error propagation. Additionally, existing LDPC codes do not fully utilize the properties of the QKD channel to optimize the key rates. In this paper, we mitigate the above issues by first generalizing the MLC scheme to a non-binary (NB) MLC scheme that has layers with non-binary symbols and utilizes NB-LDPC codes. We show that the NB-MLC scheme offers flexibility in system design. Additionally, we show that the NB-MLC scheme with a small symbol size per layer offers the best trade-off between latency and key rate. We then propose a framework to jointly optimize the rate and degree profile of the NB-LDPC codes that is tailored to the QKD channel, resulting in higher key rates than prior work.
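To make the layering concrete (a schematic sketch, not the paper's coding scheme): each discretized arrival-time symbol is split into layers of sub-symbols; one bit per layer recovers the binary MLC of prior work, while grouping several bits per layer gives the non-binary generalization.

```python
def to_layers(symbols, bits_per_symbol=8, bits_per_layer=2):
    """Split each discretized arrival-time symbol into non-binary layers of
    bits_per_layer bits each (bits_per_layer=1 recovers the binary MLC)."""
    n_layers = bits_per_symbol // bits_per_layer
    mask = (1 << bits_per_layer) - 1
    return [[(s >> (k * bits_per_layer)) & mask for s in symbols]
            for k in range(n_layers)]

print(to_layers([0b10110100, 0b00011110]))
# [[0, 2], [1, 3], [3, 1], [2, 0]]  -- one list of sub-symbols per layer
```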
[ { "version": "v1", "created": "Mon, 1 May 2023 17:39:02 GMT" } ]
2023-05-02T00:00:00
[ [ "Mitra", "Debarnab", "" ], [ "Tauz", "Lev", "" ], [ "Sarihan", "Murat Can", "" ], [ "Wong", "Chee Wei", "" ], [ "Dolecek", "Lara", "" ] ]
new_dataset
0.997402
2103.03133
\v{S}imon Bil\'ik
Simon Bilik, Lukas Kratochvila, Adam Ligocki, Ondrej Bostik, Tomas Zemcik, Matous Hybl, Karel Horak, Ludek Zalud
Visual diagnosis of the Varroa destructor parasitic mite in honeybees using object detector techniques
null
Sensors, 21-8 (2021), 2764-2780
10.3390/s21082764
BUT171160
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The Varroa destructor mite is one of the most dangerous parasites of the honey bee (Apis mellifera) worldwide, and bee colonies have to be regularly monitored in order to control its spread. Here we present an object-detector-based method for health-state monitoring of bee colonies. This method has the potential for online measurement and processing. In our experiment, we compare the YOLO and SSD object detectors along with the Deep SVDD anomaly detector. Based on a custom dataset with 600 ground-truth images of healthy and infected bees in various scenes, the detectors reached an F1 score of up to 0.874 in infected bee detection and up to 0.727 in the detection of the Varroa destructor mite itself. The results demonstrate the potential of this approach, which will later be used in a real-time, computer-vision-based honey bee inspection system. To the best of our knowledge, this study is the first to use object detectors for this purpose. We expect that the performance of these object detectors will enable us to inspect the health status of honey bee colonies.
[ { "version": "v1", "created": "Fri, 26 Feb 2021 11:01:31 GMT" } ]
2023-05-01T00:00:00
[ [ "Bilik", "Simon", "" ], [ "Kratochvila", "Lukas", "" ], [ "Ligocki", "Adam", "" ], [ "Bostik", "Ondrej", "" ], [ "Zemcik", "Tomas", "" ], [ "Hybl", "Matous", "" ], [ "Horak", "Karel", "" ], [ "Zalud", "Ludek", "" ] ]
new_dataset
0.99844
2209.11864
Yizhou Huang
Yizhou Huang, Hamza Dugmag, Timothy D. Barfoot, and Florian Shkurti
Stochastic Planning for ASV Navigation Using Satellite Images
7 pages, 5 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous surface vessels (ASVs) represent a promising technology to automate water-quality monitoring of lakes. In this work, we use satellite images as a coarse map and plan sampling routes for the robot. However, inconsistency between the satellite images and the actual lake, as well as environmental disturbances such as wind, aquatic vegetation, and changing water levels, can make it difficult for robots to visit places suggested by the prior map. This paper presents a robust route-planning algorithm that minimizes the expected total travel distance given these environmental disturbances, which induce uncertainties in the map. We verify the efficacy of our algorithm in simulations of over a thousand Canadian lakes and demonstrate an application of our algorithm in a 3.7 km-long real-world robot experiment on a lake in Northern Ontario, Canada. Videos are available on our website https://pcctp.github.io/.
[ { "version": "v1", "created": "Fri, 23 Sep 2022 21:25:48 GMT" }, { "version": "v2", "created": "Fri, 28 Apr 2023 15:27:36 GMT" } ]
2023-05-01T00:00:00
[ [ "Huang", "Yizhou", "" ], [ "Dugmag", "Hamza", "" ], [ "Barfoot", "Timothy D.", "" ], [ "Shkurti", "Florian", "" ] ]
new_dataset
0.994372
2210.04476
Albert Yu
Albert Yu, Raymond J. Mooney
Using Both Demonstrations and Language Instructions to Efficiently Learn Robotic Tasks
24 pages, 10 figures. Project website at https://deltaco-robot.github.io/
null
null
null
cs.RO cs.CL cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Demonstrations and natural language instructions are two common ways to specify and teach robots novel tasks. However, for many complex tasks, a demonstration or language instruction alone contains ambiguities, preventing tasks from being specified clearly. In such cases, a combination of both a demonstration and an instruction more concisely and effectively conveys the task to the robot than either modality alone. To instantiate this problem setting, we train a single multi-task policy on a few hundred challenging robotic pick-and-place tasks and propose DeL-TaCo (Joint Demo-Language Task Conditioning), a method for conditioning a robotic policy on task embeddings comprised of two components: a visual demonstration and a language instruction. By allowing these two modalities to mutually disambiguate and clarify each other during novel task specification, DeL-TaCo (1) substantially decreases the teacher effort needed to specify a new task and (2) achieves better generalization performance on novel objects and instructions over previous task-conditioning methods. To our knowledge, this is the first work to show that simultaneously conditioning a multi-task robotic manipulation policy on both demonstration and language embeddings improves sample efficiency and generalization over conditioning on either modality alone. See additional materials at https://deltaco-robot.github.io/
[ { "version": "v1", "created": "Mon, 10 Oct 2022 08:06:58 GMT" }, { "version": "v2", "created": "Fri, 28 Apr 2023 09:38:07 GMT" } ]
2023-05-01T00:00:00
[ [ "Yu", "Albert", "" ], [ "Mooney", "Raymond J.", "" ] ]
new_dataset
0.994442
2301.13441
Xu Wen
Xu Wen, Wanling Gao, Anzheng Li, Lei Wang, Zihan Jiang, Jianfeng Zhan
CMLCompiler: A Unified Compiler for Classical Machine Learning
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Classical machine learning (CML) occupies nearly half of the machine learning pipelines in production applications. Unfortunately, it fails to fully utilize state-of-the-practice devices and performs poorly. Without a unified framework, hybrid deployments of deep learning (DL) and CML also suffer from severe performance and portability issues. This paper presents the design of a unified compiler, called CMLCompiler, for CML inference. We propose two unified abstractions: operator representations and extended computational graphs. The CMLCompiler framework performs conversion and graph optimization based on these two unified abstractions, then outputs an optimized computational graph to DL compilers or frameworks. We implement CMLCompiler on TVM. The evaluation shows CMLCompiler's portability and superior performance. It achieves up to 4.38$\times$ speedup on CPU, 3.31$\times$ speedup on GPU, and 5.09$\times$ speedup on IoT devices, compared to the state-of-the-art solutions -- scikit-learn, intel sklearn, and hummingbird. Our CML and DL mixed pipelines achieve up to 3.04$\times$ speedup compared with cross-framework implementations. The project documents and source code are available at https://www.computercouncil.org/cmlcompiler.
[ { "version": "v1", "created": "Tue, 31 Jan 2023 06:38:05 GMT" }, { "version": "v2", "created": "Wed, 1 Feb 2023 02:49:12 GMT" }, { "version": "v3", "created": "Fri, 28 Apr 2023 06:44:50 GMT" } ]
2023-05-01T00:00:00
[ [ "Wen", "Xu", "" ], [ "Gao", "Wanling", "" ], [ "Li", "Anzheng", "" ], [ "Wang", "Lei", "" ], [ "Jiang", "Zihan", "" ], [ "Zhan", "Jianfeng", "" ] ]
new_dataset
0.996974
2302.01039
Chao Wang
Chao Wang, Anna Belardinelli, Stephan Hasler, Theodoros Stouraitis, Daniel Tanneberg, Michael Gienger
Explainable Human-Robot Training and Cooperation with Augmented Reality
null
null
10.1145/3544549.3583889
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The current spread of social and assistive robotics applications is increasingly highlighting the need for robots that can be easily taught and interacted with, even by users with no technical background. Still, it is often difficult to grasp what such robots know or to assess if a correct representation of the task is being formed. Augmented Reality (AR) has the potential to bridge this gap. We demonstrate three use cases where AR design elements enhance the explainability and efficiency of human-robot interaction: 1) a human teaching a robot some simple kitchen tasks by demonstration, 2) the robot showing its plan for solving novel tasks in AR to a human for validation, and 3) a robot communicating its intentions via AR while assisting people with limited mobility during daily activities.
[ { "version": "v1", "created": "Thu, 2 Feb 2023 12:07:34 GMT" } ]
2023-05-01T00:00:00
[ [ "Wang", "Chao", "" ], [ "Belardinelli", "Anna", "" ], [ "Hasler", "Stephan", "" ], [ "Stouraitis", "Theodoros", "" ], [ "Tanneberg", "Daniel", "" ], [ "Gienger", "Michael", "" ] ]
new_dataset
0.999083
2302.07363
Haoran Wang
Haoran Wang, Yingtong Dou, Canyu Chen, Lichao Sun, Philip S. Yu, Kai Shu
Attacking Fake News Detectors via Manipulating News Social Engagement
ACM Web Conference 2023 (WWW'23)
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
Social media is one of the main sources for news consumption, especially among the younger generation. With the increasing popularity of news consumption on various social media platforms, there has been a surge of misinformation which includes false information or unfounded claims. As various text- and social context-based fake news detectors are proposed to detect misinformation on social media, recent works start to focus on the vulnerabilities of fake news detectors. In this paper, we present the first adversarial attack framework against Graph Neural Network (GNN)-based fake news detectors to probe their robustness. Specifically, we leverage a multi-agent reinforcement learning (MARL) framework to simulate the adversarial behavior of fraudsters on social media. Research has shown that in real-world settings, fraudsters coordinate with each other to share different news in order to evade the detection of fake news detectors. Therefore, we modeled our MARL framework as a Markov Game with bot, cyborg, and crowd worker agents, which have their own distinctive cost, budget, and influence. We then use deep Q-learning to search for the optimal policy that maximizes the rewards. Extensive experimental results on two real-world fake news propagation datasets demonstrate that our proposed framework can effectively sabotage the GNN-based fake news detector performance. We hope this paper can provide insights for future research on fake news detection.
[ { "version": "v1", "created": "Tue, 14 Feb 2023 21:51:56 GMT" }, { "version": "v2", "created": "Tue, 21 Feb 2023 19:05:42 GMT" }, { "version": "v3", "created": "Thu, 27 Apr 2023 19:39:43 GMT" } ]
2023-05-01T00:00:00
[ [ "Wang", "Haoran", "" ], [ "Dou", "Yingtong", "" ], [ "Chen", "Canyu", "" ], [ "Sun", "Lichao", "" ], [ "Yu", "Philip S.", "" ], [ "Shu", "Kai", "" ] ]
new_dataset
0.969754
2302.11097
Ming-Liang Zhang
Ming-Liang Zhang, Fei Yin, Cheng-Lin Liu
A Multi-Modal Neural Geometric Solver with Textual Clauses Parsed from Diagram
Accepted to IJCAI 2023
null
null
null
cs.AI cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Geometry problem solving (GPS) is a high-level mathematical reasoning task requiring the capacities of multi-modal fusion and geometric knowledge application. Recently, neural solvers have shown great potential in GPS but still fall short in diagram presentation and modal fusion. In this work, we convert diagrams into basic textual clauses to describe diagram features effectively, and propose a new neural solver called PGPSNet to fuse multi-modal information efficiently. Combining structural and semantic pre-training, data augmentation and self-limited decoding, PGPSNet is endowed with rich knowledge of geometry theorems and geometric representation, and therefore promotes geometric understanding and reasoning. In addition, to facilitate research on GPS, we build a new large-scale and finely annotated GPS dataset named PGPS9K, labeled with both fine-grained diagram annotations and interpretable solution programs. Experiments on PGPS9K and the existing dataset Geometry3K validate the superiority of our method over state-of-the-art neural solvers. Our code, dataset and appendix material are available at \url{https://github.com/mingliangzhang2018/PGPS}.
[ { "version": "v1", "created": "Wed, 22 Feb 2023 02:38:25 GMT" }, { "version": "v2", "created": "Fri, 28 Apr 2023 10:04:17 GMT" } ]
2023-05-01T00:00:00
[ [ "Zhang", "Ming-Liang", "" ], [ "Yin", "Fei", "" ], [ "Liu", "Cheng-Lin", "" ] ]
new_dataset
0.995143
2303.06880
Bo Zhang
Bo Zhang, Jiakang Yuan, Botian Shi, Tao Chen, Yikang Li, Yu Qiao
Uni3D: A Unified Baseline for Multi-dataset 3D Object Detection
Accepted by CVPR-2023, and our code is available at https://github.com/PJLab-ADG/3DTrans
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current 3D object detection models follow a single dataset-specific training and testing paradigm, which often suffers a serious drop in detection accuracy when models are deployed directly on another dataset. In this paper, we study the task of training a unified 3D detector from multiple datasets. We observe that this is a challenging task, mainly because these datasets present substantial data-level differences and taxonomy-level variations caused by different LiDAR types and data acquisition standards. Inspired by this observation, we present Uni3D, which leverages a simple data-level correction operation and a designed semantic-level coupling-and-recoupling module to alleviate the unavoidable data-level and taxonomy-level differences, respectively. Our method is simple and easily combined with many 3D object detection baselines such as PV-RCNN and Voxel-RCNN, enabling them to effectively learn from multiple off-the-shelf 3D datasets and obtain more discriminative and generalizable representations. Experiments are conducted on many dataset consolidation settings, including Waymo-nuScenes, nuScenes-KITTI, Waymo-KITTI, and Waymo-nuScenes-KITTI. The results demonstrate that Uni3D exceeds a series of individual detectors trained on a single dataset, with only a 1.04x parameter increase over a selected baseline detector. We expect this work to inspire research on 3D generalization, as it pushes the limits of perceptual performance.
[ { "version": "v1", "created": "Mon, 13 Mar 2023 05:54:13 GMT" }, { "version": "v2", "created": "Fri, 28 Apr 2023 05:25:22 GMT" } ]
2023-05-01T00:00:00
[ [ "Zhang", "Bo", "" ], [ "Yuan", "Jiakang", "" ], [ "Shi", "Botian", "" ], [ "Chen", "Tao", "" ], [ "Li", "Yikang", "" ], [ "Qiao", "Yu", "" ] ]
new_dataset
0.999658
2303.15811
Florian M\"uller
Florian M\"uller (LMU Munich), Daniel Schmitt (TU Darmstadt), Andrii Matviienko (KTH Royal Institute of Technology), Dominik Sch\"on (TU Darmstadt), Sebastian G\"unther (TU Darmstadt), Thomas Kosch (HU Berlin), Martin Schmitz (Saarland University)
TicTacToes: Assessing Toe Movements as an Input Modality
To appear in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI 23), April 23-28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 17 pages
In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23). Association for Computing Machinery, New York, NY, USA, Article 520, 1-17
10.1145/3544548.3580954
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
From carrying grocery bags to holding onto handles on the bus, there are a variety of situations where one or both hands are busy, hindering the vision of ubiquitous interaction with technology. Voice commands, as a popular hands-free alternative, struggle with ambient noise and privacy issues. As an alternative approach, research explored movements of various body parts (e.g., head, arms) as input modalities, with foot-based techniques proving particularly suitable for hands-free interaction. Whereas previous research only considered the movement of the foot as a whole, in this work, we argue that our toes offer further degrees of freedom that can be leveraged for interaction. To explore the viability of toe-based interaction, we contribute the results of a controlled experiment with 18 participants assessing the impact of five factors on the accuracy, efficiency and user experience of such interfaces. Based on the findings, we provide design recommendations for future toe-based interfaces.
[ { "version": "v1", "created": "Tue, 28 Mar 2023 08:30:05 GMT" }, { "version": "v2", "created": "Thu, 6 Apr 2023 08:14:47 GMT" } ]
2023-05-01T00:00:00
[ [ "Müller", "Florian", "", "LMU Munich" ], [ "Schmitt", "Daniel", "", "TU Darmstadt" ], [ "Matviienko", "Andrii", "", "KTH Royal Institute of Technology" ], [ "Schön", "Dominik", "", "TU\n Darmstadt" ], [ "Günther", "Sebastian", "", "TU Darmstadt" ], [ "Kosch", "Thomas", "", "HU Berlin" ], [ "Schmitz", "Martin", "", "Saarland University" ] ]
new_dataset
0.998823
2304.05351
Qianqian Xie
Qianqian Xie, Weiguang Han, Yanzhao Lai, Min Peng, Jimin Huang
The Wall Street Neophyte: A Zero-Shot Analysis of ChatGPT Over MultiModal Stock Movement Prediction Challenges
13 pages
null
null
null
cs.CL cs.LG q-fin.ST
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, large language models (LLMs) like ChatGPT have demonstrated remarkable performance across a variety of natural language processing tasks. However, their effectiveness in the financial domain, specifically in predicting stock market movements, remains to be explored. In this paper, we conduct an extensive zero-shot analysis of ChatGPT's capabilities in multimodal stock movement prediction on three datasets of tweets and historical stock prices. Our findings indicate that ChatGPT is a "Wall Street Neophyte" with limited success in predicting stock movements, as it underperforms not only state-of-the-art methods but also traditional methods like linear regression using price features. Despite the potential of Chain-of-Thought prompting strategies and the inclusion of tweets, ChatGPT's performance remains subpar. Furthermore, we observe limitations in its explainability and stability, suggesting the need for more specialized training or fine-tuning. This research provides insights into ChatGPT's capabilities and serves as a foundation for future work aimed at improving financial market analysis and prediction by leveraging social media sentiment and historical stock data.
[ { "version": "v1", "created": "Mon, 10 Apr 2023 04:31:00 GMT" }, { "version": "v2", "created": "Fri, 28 Apr 2023 12:06:43 GMT" } ]
2023-05-01T00:00:00
[ [ "Xie", "Qianqian", "" ], [ "Han", "Weiguang", "" ], [ "Lai", "Yanzhao", "" ], [ "Peng", "Min", "" ], [ "Huang", "Jimin", "" ] ]
new_dataset
0.998923
2304.10637
Iker Garc\'ia-Ferrero
Iker Garc\'ia-Ferrero, Jon Ander Campos, Oscar Sainz, Ander Salaberria, Dan Roth
IXA/Cogcomp at SemEval-2023 Task 2: Context-enriched Multilingual Named Entity Recognition using Knowledge Bases
SemEval 2023
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Named Entity Recognition (NER) is a core natural language processing task in which pre-trained language models have shown remarkable performance. However, standard benchmarks like CoNLL 2003 do not address many of the challenges that deployed NER systems face, such as having to classify emerging or complex entities in a fine-grained way. In this paper, we present a novel NER cascade approach comprising three steps: first, identifying candidate entities in the input sentence; second, linking each candidate to an existing knowledge base; third, predicting the fine-grained category for each entity candidate. We empirically demonstrate the significance of external knowledge bases in accurately classifying fine-grained and emerging entities. Our system exhibits robust performance in the MultiCoNER2 shared task, even in the low-resource language setting where we leverage knowledge bases of high-resource languages.
[ { "version": "v1", "created": "Thu, 20 Apr 2023 20:30:34 GMT" }, { "version": "v2", "created": "Mon, 24 Apr 2023 10:21:20 GMT" }, { "version": "v3", "created": "Thu, 27 Apr 2023 20:51:36 GMT" } ]
2023-05-01T00:00:00
[ [ "García-Ferrero", "Iker", "" ], [ "Campos", "Jon Ander", "" ], [ "Sainz", "Oscar", "" ], [ "Salaberria", "Ander", "" ], [ "Roth", "Dan", "" ] ]
new_dataset
0.991666
2304.11639
Guangji Chen
Guangji Chen, Qingqing Wu, Celimuge Wu, Mengnan Jian, Yijian Chen, Wen Chen
Static IRS Meets Distributed MIMO: A New Architecture for Dynamic Beamforming
Submitted to IEEE WCL for possible publication
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intelligent reflecting surface (IRS) has been considered as a revolutionary technology to enhance wireless communication performance. To cater for multiple mobile users, adjusting IRS beamforming patterns over time, i.e., dynamic IRS beamforming (DIBF), is generally needed for achieving satisfactory performance, which results in high controlling power consumption and overhead. To avoid such cost, we propose a new architecture based on the static regulated IRS for wireless coverage enhancement, where the principle of distributed multiple-input multiple-output (D-MIMO) is integrated into the system to exploit the diversity of spatial directions provided by multiple access points (APs). For this new D-MIMO empowered static IRS architecture, the total target area is partitioned into several subareas and each subarea is served by an assigned AP. We consider maximizing the worst-case received power over all locations in the target area by jointly optimizing a single set of IRS beamforming patterns and the AP-subarea association. Then, a two-step algorithm is proposed to obtain a high-quality solution. Theoretical analysis unveils that the fundamental squared power gain can still be achieved over all locations in the target area. The performance gap relative to the DIBF scheme is also analytically quantified. Numerical results validate our theoretical findings and demonstrate the effectiveness of our proposed design over benchmark schemes.
[ { "version": "v1", "created": "Sun, 23 Apr 2023 12:44:00 GMT" }, { "version": "v2", "created": "Fri, 28 Apr 2023 08:17:25 GMT" } ]
2023-05-01T00:00:00
[ [ "Chen", "Guangji", "" ], [ "Wu", "Qingqing", "" ], [ "Wu", "Celimuge", "" ], [ "Jian", "Mengnan", "" ], [ "Chen", "Yijian", "" ], [ "Chen", "Wen", "" ] ]
new_dataset
0.999149
2304.11968
Zhe Li
Jinyu Yang, Mingqi Gao, Zhe Li, Shang Gao, Fangjing Wang, Feng Zheng
Track Anything: Segment Anything Meets Videos
Tech-report
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, the Segment Anything Model (SAM) has rapidly gained significant attention due to its impressive segmentation performance on images. Despite its strong ability in image segmentation and its high interactivity with different prompts, we found that it performs poorly on consistent segmentation in videos. Therefore, in this report, we propose the Track Anything Model (TAM), which achieves high-performance interactive tracking and segmentation in videos. In detail, given a video sequence, with only very little human participation, i.e., several clicks, people can track anything they are interested in and get satisfactory results in one-pass inference. Without additional training, such an interactive design performs impressively on video object tracking and segmentation. All resources are available at https://github.com/gaomingqi/Track-Anything. We hope this work can facilitate related research.
[ { "version": "v1", "created": "Mon, 24 Apr 2023 10:04:06 GMT" }, { "version": "v2", "created": "Fri, 28 Apr 2023 03:21:27 GMT" } ]
2023-05-01T00:00:00
[ [ "Yang", "Jinyu", "" ], [ "Gao", "Mingqi", "" ], [ "Li", "Zhe", "" ], [ "Gao", "Shang", "" ], [ "Wang", "Fangjing", "" ], [ "Zheng", "Feng", "" ] ]
new_dataset
0.973078
2304.12041
Anand Agrawal
Anand Agrawal and Rajib Ranjan Maiti
iTieProbe: Is Your IoT Setup Secure against (Modern) Evil Twin?
To do the responsible vulnerability disclosure of our findings
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
The evil twin attack on Wi-Fi networks has been a challenging security problem, and several solutions have been proposed for it. In general, an evil twin attack aims to exfiltrate data, like Wi-Fi and service credentials, from client devices and is considered a serious threat at the MAC layer. IoT devices, together with their companion apps, provide different pairing methods for provisioning. The "SmartConfig Mode" proposed by Texas Instruments (TI) and the "Access Point pairing mode (AP mode)" are the most common pairing modes provided by the application developers and vendors of IoT devices. In particular, AP mode uses Wi-Fi connectivity to set up IoT devices: a device activates an access point to which the mobile device running the corresponding mobile application is required to connect. In this paper, we have used the evil twin attack as a weapon to test the security posture of IoT devices that use a Wi-Fi network to set them up. We have designed, implemented and applied a system, called iTieProbe, that can be used in ethical hacking for discovering certain vulnerabilities during such setup. AP mode completes successfully when the mobile device is able to communicate with the IoT device via a home router over a Wi-Fi network. Our proposed system, iTieProbe, is capable of discovering several serious vulnerabilities in commercial IoT devices that use AP mode or a similar approach. We evaluated iTieProbe's efficacy on 9 IoT devices, like IoT cameras, smart plugs, Echo Dot and smart bulbs, and discovered that several of these IoT devices face certain serious threats, like leaking the Wi-Fi credentials of the home router or allowing the creation of a fake IoT device, during the setup of the IoT devices.
[ { "version": "v1", "created": "Mon, 24 Apr 2023 12:38:06 GMT" }, { "version": "v2", "created": "Fri, 28 Apr 2023 06:42:20 GMT" } ]
2023-05-01T00:00:00
[ [ "Agrawal", "Anand", "" ], [ "Maiti", "Rajib Ranjan", "" ] ]
new_dataset
0.995234
2304.12412
Arya Rachman
Arya Rachman, J\"urgen Seiler, and Andr\'e Kaup
End-to-End Lidar-Camera Self-Calibration for Autonomous Vehicles
Accepted for The 35th IEEE Intelligent Vehicles Symposium (IV 2023)
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Autonomous vehicles are equipped with a multi-modal sensor setup to enable the car to drive safely. The initial calibration of such perception sensors is a highly mature topic and is routinely done in an automated factory environment. However, an intriguing question arises: how can the calibration quality be maintained throughout the vehicle's operating duration? Another challenge is to calibrate multiple sensors jointly to ensure no propagation of systemic errors. In this paper, we propose CaLiCa, an end-to-end deep self-calibration network that addresses the automatic calibration problem for a pinhole camera and Lidar. We jointly predict the camera intrinsic parameters (focal length and distortion) as well as Lidar-camera extrinsic parameters (rotation and translation) by regressing the feature correlation between the camera image and the Lidar point cloud. The network is arranged in a Siamese-twin structure to constrain the network feature learning to a mutually shared feature in both the point cloud and the camera (Lidar-camera constraint). Evaluation using KITTI datasets shows that we achieve 0.154 {\deg} and 0.059 m accuracy with a reprojection error of 0.028 pixels in a single-pass inference. We also provide an ablative study of how our end-to-end learning architecture offers lower terminal loss (a 21% decrease in rotation loss) compared to isolated calibration.
[ { "version": "v1", "created": "Mon, 24 Apr 2023 19:44:23 GMT" }, { "version": "v2", "created": "Fri, 28 Apr 2023 01:12:36 GMT" } ]
2023-05-01T00:00:00
[ [ "Rachman", "Arya", "" ], [ "Seiler", "Jürgen", "" ], [ "Kaup", "André", "" ] ]
new_dataset
0.995424
2304.14418
Madhusudhanan Balasubramanian
Fisseha Admasu Ferede, Madhusudhanan Balasubramanian
SSTM: Spatiotemporal Recurrent Transformers for Multi-frame Optical Flow Estimation
5 tables, 7 figures, MS thesis
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inaccurate optical flow estimates in and near occluded regions and in out-of-boundary regions are two of the current significant limitations of optical flow estimation algorithms. Recent state-of-the-art optical flow estimation algorithms are two-frame methods, where optical flow is estimated sequentially for each consecutive image pair in a sequence. While this approach gives good flow estimates, it fails to generalize optical flows in occluded regions, mainly due to limited local evidence regarding moving elements in a scene. In this work, we propose a learning-based multi-frame optical flow estimation method that estimates two or more consecutive optical flows in parallel from multi-frame image sequences. Our underlying hypothesis is that by understanding temporal scene dynamics from longer sequences with more than two frames, we can characterize pixel-wise dependencies in a larger spatiotemporal domain, generalize complex motion patterns, and thereby improve the accuracy of optical flow estimates in occluded regions. We present learning-based spatiotemporal recurrent transformers for multi-frame optical flow estimation (SSTMs). Our method utilizes 3D Convolutional Gated Recurrent Units (3D-ConvGRUs) and spatiotemporal transformers to learn recurrent space-time motion dynamics and global dependencies in the scene and provide a generalized optical flow estimation. When compared with recent state-of-the-art two-frame and multi-frame methods on real-world and synthetic datasets, the performance of the SSTMs was significantly higher in occluded and out-of-boundary regions. Among all published state-of-the-art multi-frame methods, SSTM achieved state-of-the-art results on the Sintel Final and KITTI2015 benchmark datasets.
[ { "version": "v1", "created": "Wed, 26 Apr 2023 23:39:40 GMT" } ]
2023-05-01T00:00:00
[ [ "Ferede", "Fisseha Admasu", "" ], [ "Balasubramanian", "Madhusudhanan", "" ] ]
new_dataset
0.982904
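The record above names 3D Convolutional Gated Recurrent Units (3D-ConvGRUs) as a core building block of SSTM. A minimal sketch of what such a cell could look like in PyTorch, assuming standard GRU gating implemented with Conv3d layers; the channel counts, kernel size, and the exact gating used by SSTM itself are assumptions.

```python
import torch
import torch.nn as nn

class ConvGRU3dCell(nn.Module):
    """A 3D convolutional GRU cell: gates are computed by Conv3d over the
    concatenated input and hidden state, as commonly done in video models."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        pad = k // 2
        self.zr = nn.Conv3d(in_ch + hid_ch, 2 * hid_ch, k, padding=pad)      # update/reset gates
        self.h_tilde = nn.Conv3d(in_ch + hid_ch, hid_ch, k, padding=pad)     # candidate state

    def forward(self, x, h):
        z, r = torch.sigmoid(self.zr(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_cand = torch.tanh(self.h_tilde(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_cand

cell = ConvGRU3dCell(in_ch=8, hid_ch=16)
x = torch.randn(1, 8, 4, 32, 32)   # (batch, channels, T, H, W)
h = torch.zeros(1, 16, 4, 32, 32)
h = cell(x, h)                      # one recurrent state update
```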
2304.14444
\v{S}imon Bil\'ik
Jakub Nevlacil, Simon Bilik, Karel Horak
Raspberry Pi Bee Health Monitoring Device
null
null
null
null
cs.CV cs.CY
http://creativecommons.org/licenses/by/4.0/
A declining honeybee population could pose a threat to the food resources of the whole world. One of the latest trends in beekeeping is the effort to monitor honeybee health using various sensors and devices. This paper contributes to the development of one of these devices. Its aim is to upgrade and improve an in-development bee health monitoring device and to propose a remote data logging solution for continual monitoring of a beehive.
[ { "version": "v1", "created": "Thu, 27 Apr 2023 18:05:52 GMT" } ]
2023-05-01T00:00:00
[ [ "Nevlacil", "Jakub", "" ], [ "Bilik", "Simon", "" ], [ "Horak", "Karel", "" ] ]
new_dataset
0.999039
2304.14466
Hamam Mokayed Dr
Hamam Mokayed and Amirhossein Nayebiastaneh and Kanjar De and Stergios Sozos and Olle Hagner and Bjorn Backe
Nordic Vehicle Dataset (NVD): Performance of vehicle detectors using newly captured NVD from UAV in different snowy weather conditions
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Vehicle detection and recognition in drone images is a complex problem that serves various safety purposes. Such images are captured at oblique angles and pose several challenges, such as non-uniform illumination, degradation, blur, occlusion, and loss of visibility. Additionally, weather conditions play a crucial role in causing safety concerns and add another high level of challenge to the collected data. Over the past few decades, various techniques have been employed to detect and track vehicles in different weather conditions. However, detecting vehicles in heavy snow is still in its early stages because of a lack of available data. Furthermore, there has been no research on detecting vehicles in snowy weather using real images captured by unmanned aerial vehicles (UAVs). This study aims to address this gap by providing the scientific community with data on vehicles captured by UAVs in different settings and under various snow cover conditions in the Nordic region. The data covers different adverse weather conditions, such as overcast skies with snowfall, low-light and low-contrast conditions with patchy snow cover, high brightness, sunlight, fresh snow, and temperatures far below 0 degrees Celsius. The study also evaluates the performance of commonly used object detection methods such as YOLOv8, YOLOv5, and Fast R-CNN. Additionally, data augmentation techniques are explored, and those that enhance the detectors' performance in such scenarios are proposed. The code and the dataset will be available at https://nvd.ltu-ai.dev
[ { "version": "v1", "created": "Thu, 27 Apr 2023 18:55:43 GMT" } ]
2023-05-01T00:00:00
[ [ "Mokayed", "Hamam", "" ], [ "Nayebiastaneh", "Amirhossein", "" ], [ "De", "Kanjar", "" ], [ "Sozos", "Stergios", "" ], [ "Hagner", "Olle", "" ], [ "Backe", "Bjorn", "" ] ]
new_dataset
0.999773
2304.14492
Mohammed Al-Rawi
Mohammed Al-Rawi
Ultra-Fast Zernike Moments using FFT and GPU
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Zernike moments can be used to generate invariant features that are applied in various machine vision applications. They, however, suffer from slow implementations and numerical stability problems. We propose a novel method for computing Zernike moments using the Fast Fourier Transform (FFT) and GPU computing. The method can be used to generate accurate moments up to high orders and can compute Zernike moments of 4K-resolution images in real time. The numerical accuracy of Zernike moments computed with the proposed FFT approach has been analyzed using the orthogonality property, and the results show that the method beats others in numerical stability. The proposed method is simple and fast and can make use of the huge GPU-FFT libraries that are available in several programming frameworks.
[ { "version": "v1", "created": "Thu, 6 Apr 2023 14:39:08 GMT" } ]
2023-05-01T00:00:00
[ [ "Al-Rawi", "Mohammed", "" ] ]
new_dataset
0.995465
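The record above proposes an FFT/GPU method for Zernike moments; that algorithm is not reproduced here. As a reference point, a sketch of the classical direct, definition-based computation that such methods accelerate, assuming a square grayscale image mapped onto the unit disk.

```python
import math
import numpy as np

def radial_poly(n, m, rho):
    """Classical Zernike radial polynomial R_n^|m|(rho); requires n - |m| even and |m| <= n."""
    m = abs(m)
    out = np.zeros_like(rho)
    for k in range((n - m) // 2 + 1):
        c = ((-1) ** k * math.factorial(n - k)
             / (math.factorial(k)
                * math.factorial((n + m) // 2 - k)
                * math.factorial((n - m) // 2 - k)))
        out = out + c * rho ** (n - 2 * k)
    return out

def zernike_moment(img, n, m):
    """Direct O(N^2)-per-moment Zernike moment of a square grayscale image."""
    N = img.shape[0]
    ys, xs = np.mgrid[0:N, 0:N]
    x = (2 * xs - N + 1) / (N - 1)      # map pixel centers to [-1, 1]
    y = (2 * ys - N + 1) / (N - 1)
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    mask = rho <= 1.0                    # keep only pixels inside the unit disk
    basis = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
    area = (2.0 / (N - 1)) ** 2          # area element of one pixel in disk coordinates
    return (n + 1) / np.pi * np.sum(img[mask] * basis[mask]) * area

img = np.random.rand(64, 64)
print(abs(zernike_moment(img, 4, 2)))    # the magnitude is rotation-invariant
```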
2304.14500
Fang Chen
Fang Chen, Heiko Balzter, Peng Ren and Huiyu Zhou
SRCNet: Seminal Representation Collaborative Network for Marine Oil Spill Segmentation
arXiv admin note: substantial text overlap with arXiv:2301.01202
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Effective oil spill segmentation in Synthetic Aperture Radar (SAR) images is critical for marine oil pollution cleanup, and proper image representation is helpful for accurate image segmentation. In this paper, we propose an effective oil spill image segmentation network named SRCNet that leverages SAR image representation and the training for oil spill segmentation simultaneously. Specifically, our proposed segmentation network is constructed with a pair of deep neural nets with the collaboration of a seminal representation that describes SAR images: one deep neural net is the generative net, which strives to produce oil spill segmentation maps, and the other is the discriminative net, which tries its best to distinguish between the produced and the true segmentations; the two nets thus play a two-player game. Particularly, the seminal representation exploited in our proposed SRCNet originates from SAR imagery, modelling the internal characteristics of SAR images. Thus, in the training process, the collaborating seminal representation empowers the mapped generative net to produce accurate oil spill segmentation maps efficiently with a small amount of training data, promoting the discriminative net to reach its optimal solution at a fast speed. Therefore, our proposed SRCNet performs effective oil spill segmentation in an economical and efficient manner. Additionally, to increase the segmentation capability of the proposed network in terms of accurately delineating oil spill details in SAR images, a regularisation term that penalises the segmentation loss is devised. This encourages our proposed SRCNet to accurately segment oil spill areas from SAR images. Empirical experimental evaluations with different metrics validate the effectiveness of our proposed SRCNet for oil spill image segmentation.
[ { "version": "v1", "created": "Mon, 17 Apr 2023 13:23:03 GMT" } ]
2023-05-01T00:00:00
[ [ "Chen", "Fang", "" ], [ "Balzter", "Heiko", "" ], [ "Ren", "Peng", "" ], [ "Zhou", "Huiyu", "" ] ]
new_dataset
0.994226
2304.14501
Jiafei Duan
Jiafei Duan, Samson Yu, Nicholas Tan, Yi Ru Wang, Cheston Tan
Read My Mind: A Multi-Modal Dataset for Human Belief Prediction
Accepted to ICRA 2023 Communicating Robot Learning Across Human-Robot Interaction Workshop
null
null
null
cs.CV cs.AI cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Understanding human intentions is key to enabling effective and efficient human-robot interaction (HRI) in collaborative settings. To enable the development and evaluation of the ability of artificial intelligence (AI) systems to infer human beliefs, we introduce a large-scale multi-modal video dataset for intent prediction based on object-context relations.
[ { "version": "v1", "created": "Tue, 7 Mar 2023 06:19:38 GMT" } ]
2023-05-01T00:00:00
[ [ "Duan", "Jiafei", "" ], [ "Yu", "Samson", "" ], [ "Tan", "Nicholas", "" ], [ "Wang", "Yi Ru", "" ], [ "Tan", "Cheston", "" ] ]
new_dataset
0.998736
2304.14507
Bala Murugan MS
Vrinda Agarwal, Aaron George Pichappa, Manideep Ramisetty, Bala Murugan MS, Manoj kumar Rajagopal
Suspicious Vehicle Detection Using Licence Plate Detection And Facial Feature Recognition
eight pages and three figures
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the increasing need to strengthen vehicle safety and detection, the pre-existing methods of catching criminals and identifying vehicles manually through the various traffic surveillance cameras are not only time-consuming but also inefficient. With the advancement of technology in every field, the use of real-time traffic surveillance models can facilitate a much easier approach. Keeping this in mind, the main focus of our paper is to develop a combined face recognition and number plate recognition model to ensure vehicle safety and the real-time tracking of fleeing criminals and stolen vehicles.
[ { "version": "v1", "created": "Tue, 18 Apr 2023 06:44:08 GMT" } ]
2023-05-01T00:00:00
[ [ "Agarwal", "Vrinda", "" ], [ "Pichappa", "Aaron George", "" ], [ "Ramisetty", "Manideep", "" ], [ "MS", "Bala Murugan", "" ], [ "Rajagopal", "Manoj kumar", "" ] ]
new_dataset
0.997272
2304.14510
Martina Paccini
Martina Paccini, Giuseppe Patan\`e, Michela Spagnuolo
3D Patient-specific Modelling and Characterisation of Muscle-Skeletal Districts
arXiv admin note: substantial text overlap with arXiv:2208.08983
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work addresses the patient-specific characterisation of the morphology and pathologies of muscle-skeletal districts (e.g., wrist, spine) to support diagnostic activities and follow-up exams through the integration of morphological and tissue information. We propose different methods for the integration of morphological information, retrieved from the geometrical analysis of 3D surface models, with tissue information extracted from volume images. For the qualitative and quantitative validation, we will discuss the localisation of bone erosion sites on the wrists to monitor rheumatic diseases and the characterisation of the three functional regions of the spinal vertebrae to study the presence of osteoporotic fractures. The proposed approach supports the quantitative and visual evaluation of possible damages, surgery planning, and early diagnosis or follow-up studies. Finally, our analysis is general enough to be applied to different districts.
[ { "version": "v1", "created": "Tue, 18 Apr 2023 21:46:42 GMT" } ]
2023-05-01T00:00:00
[ [ "Paccini", "Martina", "" ], [ "Patanè", "Giuseppe", "" ], [ "Spagnuolo", "Michela", "" ] ]
new_dataset
0.960758
2304.14516
Valdecy Pereira
Valdecy Pereira, Marcio Pereira Basilio, Carlos Henrique Tarjano Santos
pyBibX -- A Python Library for Bibliometric and Scientometric Analysis Powered with Artificial Intelligence Tools
30 pages, 12 figures, 6 tables
null
null
null
cs.DL cs.AI
http://creativecommons.org/licenses/by/4.0/
Bibliometric and Scientometric analyses offer invaluable perspectives on the complex research terrain and collaborative dynamics spanning diverse academic disciplines. This paper presents pyBibX, a Python library devised to conduct comprehensive bibliometric and scientometric analyses on raw data files sourced from Scopus, Web of Science, and PubMed, seamlessly integrating state-of-the-art AI capabilities into its core functionality. The library executes a comprehensive EDA, presenting outcomes via visually appealing graphical illustrations. Network capabilities have been deftly integrated, encompassing Citation, Collaboration, and Similarity Analysis. Furthermore, the library incorporates AI capabilities, including Embedding vectors, Topic Modeling, Text Summarization, and other general Natural Language Processing tasks, employing models such as Sentence-BERT, BERTopic, BERT, chatGPT, and PEGASUS. As a demonstration, we have analyzed 184 documents associated with multiple-criteria decision analysis published between 1984 and 2023. The EDA emphasized a growing fascination with decision-making and fuzzy logic methodologies. Next, Network Analysis further accentuated the significance of central authors and intra-continental collaboration, identifying Canada and China as crucial collaboration hubs. Finally, AI Analysis distinguished two primary topics and chatGPT's preeminence in Text Summarization. It also proved to be an indispensable instrument for interpreting results, as our library enables researchers to pose inquiries to chatGPT regarding bibliometric outcomes. Even so, data homogeneity remains a daunting challenge due to database inconsistencies. PyBibX is the first application integrating cutting-edge AI capabilities for analyzing scientific publications, enabling researchers to examine and interpret these outcomes more effectively.
[ { "version": "v1", "created": "Thu, 27 Apr 2023 20:06:07 GMT" } ]
2023-05-01T00:00:00
[ [ "Pereira", "Valdecy", "" ], [ "Basilio", "Marcio Pereira", "" ], [ "Santos", "Carlos Henrique Tarjano", "" ] ]
new_dataset
0.997869
2304.14539
Marco Peressotti
Lu\'is Cruz-Filipe, Eva Graversen, Fabrizio Montesi, Marco Peressotti
Reasoning about Choreographic Programs
null
null
null
null
cs.PL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Choreographic programming is a paradigm where a concurrent or distributed system is developed in a top-down fashion. Programs, called choreographies, detail the desired interactions between processes, and can be compiled to distributed implementations based on message passing. Choreographic languages usually guarantee deadlock-freedom and provide an operational correspondence between choreographies and their compiled implementations, but until now little work has been done on verifying other properties. This paper presents a Hoare-style logic for reasoning about the behaviour of choreographies, and illustrates its usage in representative examples. We show that this logic is sound and complete, and discuss decidability of its judgements. Using existing results from choreographic programming, we show that any functional correctness property proven for a choreography also holds for its compiled implementation.
[ { "version": "v1", "created": "Thu, 27 Apr 2023 21:37:29 GMT" } ]
2023-05-01T00:00:00
[ [ "Cruz-Filipe", "Luís", "" ], [ "Graversen", "Eva", "" ], [ "Montesi", "Fabrizio", "" ], [ "Peressotti", "Marco", "" ] ]
new_dataset
0.992137
2304.14571
Yousef Yeganeh
Yousef Yeganeh, Azade Farshad, Peter Weinberger, Seyed-Ahmad Ahmadi, Ehsan Adeli, Nassir Navab
DIAMANT: Dual Image-Attention Map Encoders For Medical Image Segmentation
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although purely transformer-based architectures have shown promising performance in many computer vision tasks, many hybrid models consisting of CNN and transformer blocks have been introduced to fit more specialized tasks. Nevertheless, despite the performance gain of both pure and hybrid transformer-based architectures over CNNs in medical image segmentation, their high training cost and complexity make it challenging to use them in real scenarios. In this work, we propose simple architectures based on purely convolutional layers, and show that by just taking advantage of the attention map visualizations obtained from a self-supervised pretrained vision transformer network (e.g., DINO), one can outperform complex transformer-based networks with much lower computation costs. The proposed architecture is composed of two encoder branches, with the original image as input in one branch and the attention map visualizations of the same image from multiple self-attention heads of a pre-trained DINO model (as multiple channels) in the other branch. The results of our experiments on two publicly available medical imaging datasets show that the proposed pipeline outperforms U-Net and the state-of-the-art medical image segmentation models.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 00:11:18 GMT" } ]
2023-05-01T00:00:00
[ [ "Yeganeh", "Yousef", "" ], [ "Farshad", "Azade", "" ], [ "Weinberger", "Peter", "" ], [ "Ahmadi", "Seyed-Ahmad", "" ], [ "Adeli", "Ehsan", "" ], [ "Navab", "Nassir", "" ] ]
new_dataset
0.998163
2304.14581
Mao Yang
Ze Liu, Bo Li, Mao Yang, ZhongJiang Yan
An Adaptive Channel Reservation MAC Protocol Based on Forwarding Traffic of Key Nodes
17 pages, 14 figures
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ad Hoc networks with multi-hop topology are widely used in military and civilian applications. One challenge for Ad Hoc networks is to design efficient Media Access Control (MAC) protocols to ensure quality of service (QoS). In Ad Hoc networks, there is a kind of node, called a key node, which undertakes more forwarding traffic than the surrounding nodes. The number of neighbor nodes around key nodes is often large, and the surrounding channel environment and interference are often more complex. Thus, key nodes can hardly get enough channel access opportunities, resulting in poor end-to-end performance. Therefore, we propose an adaptive channel reservation MAC protocol based on the forwarding traffic of key nodes, which is aimed at alleviating congestion for key nodes. Nodes initiate reservations for future transmission time according to their buffer status before sending packets and then calculate the Weight of Reservation Ability (WRA). A node adaptively adjusts its reservation opportunity by comparing its WRA with those of neighbor nodes, thus improving channel access efficiency and ensuring the transmission opportunities of key nodes. Extensive simulation confirms that our proposed FTKN-CRM provides significant improvements in end-to-end performance over the IEEE 802.11ax protocol and other reservation access protocols.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 00:50:48 GMT" } ]
2023-05-01T00:00:00
[ [ "Liu", "Ze", "" ], [ "Li", "Bo", "" ], [ "Yang", "Mao", "" ], [ "Yan", "ZhongJiang", "" ] ]
new_dataset
0.988395
2304.14599
Gunther Jikeli Jr.
Gunther Jikeli, Sameer Karali, Daniel Miehling, and Katharina Soemer
Antisemitic Messages? A Guide to High-Quality Annotation and a Labeled Dataset of Tweets
null
null
null
null
cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
One of the major challenges in automatic hate speech detection is the lack of datasets that cover a wide range of biased and unbiased messages and that are consistently labeled. We propose a labeling procedure that addresses some of the common weaknesses of labeled datasets. We focus on antisemitic speech on Twitter and create a labeled dataset of 6,941 tweets that cover a wide range of topics common in conversations about Jews, Israel, and antisemitism between January 2019 and December 2021 by drawing from representative samples with relevant keywords. Our annotation process aims to strictly apply a commonly used definition of antisemitism by forcing annotators to specify which part of the definition applies, and by giving them the option to personally disagree with the definition on a case-by-case basis. Labeling tweets that call out antisemitism, report antisemitism, or are otherwise related to antisemitism (such as the Holocaust) but are not actually antisemitic can help reduce false positives in automated detection. The dataset includes 1,250 tweets (18%) that are antisemitic according to the International Holocaust Remembrance Alliance (IHRA) definition of antisemitism. It is important to note, however, that the dataset is not comprehensive. Many topics are still not covered, and it only includes tweets collected from Twitter between January 2019 and December 2021. Additionally, the dataset only includes tweets that were written in English. Despite these limitations, we hope that this is a meaningful contribution to improving the automated detection of antisemitic speech.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 02:52:38 GMT" } ]
2023-05-01T00:00:00
[ [ "Jikeli", "Gunther", "" ], [ "Karali", "Sameer", "" ], [ "Miehling", "Daniel", "" ], [ "Soemer", "Katharina", "" ] ]
new_dataset
0.998768
2304.14622
Babar Shahzaad
Babar Shahzaad, Balsam Alkouz, Jermaine Janszen, Athman Bouguettaya
Optimizing Drone Delivery in Smart Cities
8 pages, 3 figures. This is an accepted paper and it is going to appear in IEEE Internet Computing magazine
null
null
null
cs.RO cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel context-aware drone delivery framework for optimizing package delivery through skyway networks in smart cities. We reformulate the problem of finding an optimal drone service delivery pathway as a more congruent and elegant drone delivery service composition problem. In this respect, we propose a novel line-of-sight heuristic-based context-aware composition algorithm that selects and composes near-optimal drone delivery services. We conducted an extensive experiment using a real dataset to show the robustness of our proposed approach.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 04:32:26 GMT" } ]
2023-05-01T00:00:00
[ [ "Shahzaad", "Babar", "" ], [ "Alkouz", "Balsam", "" ], [ "Janszen", "Jermaine", "" ], [ "Bouguettaya", "Athman", "" ] ]
new_dataset
0.99347
2304.14653
Murugeshwari B
B. Murugeshwari, D. Saral Jeeva Jothi, B. Hemalatha, S. Neelavathy Pari
Trust Aware Privacy Preserving Routing Protocol for Wireless Adhoc Network
null
null
10.14445/22315381/IJETT-V70I9P236
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Wireless Ad-Hoc Networks are especially helpful and quite suitable for essential circumstances such as defense, public safety, and disaster recovery. MANETs require communication privacy and security, notably in core routing protocols, when functioning in hostile or suspicious environments. The Trust Aware Privacy-Preserving Protocol (TAP3) is a mechanism for supporting the origin in proactively selecting a trustworthy target and performing privacy-preserving route verification. We propose TAP3 using the fellow recommendation model for MANETs in this work. Nodes use their features to discover fellow nodes and use trust to create strong connections with a random node via a multi-hop trust chain by identifying a secure location. The verification duties are then spread among the nodes, which validate the log updates without exposing the nodes' details. Unlike previous models that uncover node vulnerabilities or misconduct after an attack, TAP3 can help the origin node prevent data from being transferred through malicious nodes from the beginning and perform verification without needing a third party. Our results show that this approach can locate problematic nodes with lower overhead than the conventional routing protocol.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 06:49:53 GMT" } ]
2023-05-01T00:00:00
[ [ "Murugeshwari", "B.", "" ], [ "Jothi", "D. Saral Jeeva", "" ], [ "Hemalatha", "B.", "" ], [ "Pari", "S. Neelavathy", "" ] ]
new_dataset
0.992332
2304.14657
Jieting Chen
Jieting Chen, Junkai Ding, Wenping Chen, Qin Jin
Knowledge Enhanced Model for Live Video Comment Generation
null
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by/4.0/
Live video commenting is popular on video media platforms, as it can create a chatting atmosphere and provide supplementary information for users while watching videos. Automatically generating live video comments can improve user experience and enable human-like generation for bot chatting. Existing works mostly focus on short video datasets while ignoring other important video types such as long videos like movies. In this work, we collect a new Movie Live Comments (MovieLC) dataset to support research on live video comment generation for long videos. We also propose a knowledge enhanced generation model inspired by the divergent and informative nature of live video comments. Our model adopts a pre-training encoder-decoder framework and incorporates external knowledge. Extensive experiments show that both objective metrics and human evaluation demonstrate the effectiveness of our proposed model. The MovieLC dataset and our code will be released.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 07:03:50 GMT" } ]
2023-05-01T00:00:00
[ [ "Chen", "Jieting", "" ], [ "Ding", "Junkai", "" ], [ "Chen", "Wenping", "" ], [ "Jin", "Qin", "" ] ]
new_dataset
0.998123
2304.14659
Alexandre Quemy
Alexandre Quemy, Marc Schoenauer, Johann Dreo
MultiZenoTravel: a Tunable Benchmark for Multi-Objective Planning with Known Pareto Front
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Multi-objective AI planning suffers from a lack of benchmarks exhibiting known Pareto Fronts. In this work, we propose a tunable benchmark generator, together with a dedicated solver that provably computes the true Pareto front of the resulting instances. First, we prove a proposition allowing us to characterize the optimal plans for a constrained version of the problem, and then show how to reduce the general problem to the constrained one. Second, we provide a constructive way to find all the Pareto-optimal plans and discuss the complexity of the algorithm. We provide an implementation that allows the solver to handle realistic instances in a reasonable time. Finally, as a practical demonstration, we used this solver to find all Pareto-optimal plans between the two largest airports in the world, considering the routes between the 50 largest airports, spherical distances between airports and a made-up risk.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 07:09:23 GMT" } ]
2023-05-01T00:00:00
[ [ "Quemy", "Alexandre", "" ], [ "Schoenauer", "Marc", "" ], [ "Dreo", "Johann", "" ] ]
new_dataset
0.998683
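The record above centres on benchmarks with known Pareto fronts. As a point of reference for the terminology, a small sketch of Pareto dominance and front extraction for minimization objectives; this is illustrative only and is not the paper's dedicated solver.

```python
from typing import List, Tuple

def dominates(a: Tuple[float, ...], b: Tuple[float, ...]) -> bool:
    """a dominates b if it is no worse in every objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points: List[Tuple[float, ...]]) -> List[Tuple[float, ...]]:
    """Keep exactly the non-dominated points (the Pareto front)."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# e.g. candidate plans scored by (duration, risk):
plans = [(10.0, 0.9), (12.0, 0.4), (10.0, 0.4), (15.0, 0.2)]
print(pareto_front(plans))  # [(10.0, 0.4), (15.0, 0.2)]
```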
2304.14662
Tong Zhu
Tong Zhu, Guoliang Zhang, Zechang Li, Zijian Yu, Junfei Ren, Mengsong Wu, Zhefeng Wang, Baoxing Huai, Pingfu Chao, Wenliang Chen
CED: Catalog Extraction from Documents
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Sentence-by-sentence information extraction from long documents is an exhausting and error-prone task. As indicators of document skeleton, catalogs naturally chunk documents into segments and provide informative cascade semantics, which can help to reduce the search space. Despite their usefulness, catalogs are hard to extract without assistance from external knowledge. For documents that adhere to a specific template, regular expressions are practical for extracting catalogs. However, handcrafted heuristics are not applicable when processing documents from different sources with diverse formats. To address this problem, we build a large manually annotated corpus, which is the first dataset for the Catalog Extraction from Documents (CED) task. Based on this corpus, we propose a transition-based framework for parsing documents into catalog trees. The experimental results demonstrate that our proposed method outperforms baseline systems and shows a good ability to transfer. We believe the CED task could fill the gap between raw text segments and information extraction tasks on extremely long documents. Data and code are available at \url{https://github.com/Spico197/CatalogExtraction}
[ { "version": "v1", "created": "Fri, 28 Apr 2023 07:32:00 GMT" } ]
2023-05-01T00:00:00
[ [ "Zhu", "Tong", "" ], [ "Zhang", "Guoliang", "" ], [ "Li", "Zechang", "" ], [ "Yu", "Zijian", "" ], [ "Ren", "Junfei", "" ], [ "Wu", "Mengsong", "" ], [ "Wang", "Zhefeng", "" ], [ "Huai", "Baoxing", "" ], [ "Chao", "Pingfu", "" ], [ "Chen", "Wenliang", "" ] ]
new_dataset
0.999419
2304.14678
Wen Zhang
Wen Zhang, Zhen Yao, Mingyang Chen, Zhiwei Huang and Huajun Chen
NeuralKG-ind: A Python Library for Inductive Knowledge Graph Representation Learning
Accepted by SIGIR2023 Demonstration Track
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Owing to the dynamic characteristics of knowledge graphs, many inductive knowledge graph representation learning (KGRL) works have been proposed in recent years, focusing on enabling prediction over new entities. NeuralKG-ind is the first library for inductive KGRL and an important update of the NeuralKG library. It includes standardized processes, rich existing methods, decoupled modules, and comprehensive evaluation metrics. With NeuralKG-ind, it is easy for researchers and engineers to reproduce, redevelop, and compare inductive KGRL methods. The library, experimental methodologies, and model re-implementation results of NeuralKG-ind are all publicly released at https://github.com/zjukg/NeuralKG/tree/ind .
[ { "version": "v1", "created": "Fri, 28 Apr 2023 08:09:08 GMT" } ]
2023-05-01T00:00:00
[ [ "Zhang", "Wen", "" ], [ "Yao", "Zhen", "" ], [ "Chen", "Mingyang", "" ], [ "Huang", "Zhiwei", "" ], [ "Chen", "Huajun", "" ] ]
new_dataset
0.987897
2304.14714
Binqiang Wang
Binqiang Wang and Gang Dong and Yaqian Zhao and Rengang Li and Lu Cao and Lihua Lu
SGED: A Benchmark dataset for Performance Evaluation of Spiking Gesture Emotion Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the field of affective computing, researchers in the community have promoted the performance of models and algorithms by exploiting the complementarity of multimodal information. However, with the emergence of more and more modalities, the development of datasets is unable to keep up with the progress of existing modal sensing equipment. Collecting and studying multimodal data is complex and significant work. To help fill the partial gaps in community data, we collected and labeled a new homogeneous multimodal gesture emotion recognition dataset based on an analysis of existing datasets. This dataset remedies the lack of homogeneous multimodal data and provides a new research direction for emotion recognition. Moreover, we propose a pseudo dual-flow network based on this dataset and verify the application potential of this dataset in the affective computing community. The experimental results demonstrate that it is feasible to use traditional visual information and spiking visual information based on homogeneous multimodal data for visual emotion recognition. The dataset is available at \url{https://github.com/201528014227051/SGED}
[ { "version": "v1", "created": "Fri, 28 Apr 2023 09:32:09 GMT" } ]
2023-05-01T00:00:00
[ [ "Wang", "Binqiang", "" ], [ "Dong", "Gang", "" ], [ "Zhao", "Yaqian", "" ], [ "Li", "Rengang", "" ], [ "Cao", "Lu", "" ], [ "Lu", "Lihua", "" ] ]
new_dataset
0.999576
2304.14791
Naif Mehanna
Naif Mehanna (CRIStAL, CNRS, SPIRALS), Walter Rudametkin (UR, IUF, CNRS, IRISA, DiverSe)
Caught in the Game: On the History and Evolution of Web Browser Gaming
null
TheWebConference 2023, Apr 2023, Austin (TX), United States
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Web browsers have come a long way since their inception, evolving from a simple means of displaying text documents over the network to complex software stacks with advanced graphics and network capabilities. As personal computers grew in popularity, developers jumped at the opportunity to deploy cross-platform games with centralized management and a low barrier to entry. Simply going to the right address is now enough to start a game. From text-based to GPU-powered 3D games, browser gaming has evolved to become a strong alternative to traditional console and mobile-based gaming, targeting both casual and advanced gamers. Browser technology has also evolved to accommodate more demanding applications, sometimes even supplanting functions typically left to the operating system. Today, websites display rich, computationally intensive, hardware-accelerated graphics, allowing developers to build ever-more impressive applications and games. In this paper, we present the evolution of browser gaming and the technologies that enabled it, from the release of the first text-based games in the early 1990s to current open-world and game-engine-powered browser games. We discuss the societal impact of browser gaming and how it has allowed a new target audience to access digital gaming. Finally, we review the potential future evolution of the browser gaming industry.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 12:02:16 GMT" } ]
2023-05-01T00:00:00
[ [ "Mehanna", "Naif", "", "CRIStAL, CNRS, SPIRALS" ], [ "Rudametkin", "Walter", "", "UR, IUF,\n CNRS, IRISA, DiverSe" ] ]
new_dataset
0.99557
2304.14803
Elisa Leonardelli
Elisa Leonardelli, Alexandra Uma, Gavin Abercrombie, Dina Almanea, Valerio Basile, Tommaso Fornaciari, Barbara Plank, Verena Rieser, Massimo Poesio
SemEval-2023 Task 11: Learning With Disagreements (LeWiDi)
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
NLP datasets annotated with human judgments are rife with disagreements between the judges. This is especially true for tasks depending on subjective judgments such as sentiment analysis or offensive language detection. Particularly in these latter cases, the NLP community has come to realize that the approach of 'reconciling' these different subjective interpretations is inappropriate. Many NLP researchers have therefore concluded that rather than eliminating disagreements from annotated corpora, we should preserve them; indeed, some argue that corpora should aim to preserve all annotator judgments. But this approach to corpus creation for NLP has not yet been widely accepted. The objective of the LeWiDi series of shared tasks is to promote this approach to developing NLP models by providing a unified framework for training and evaluating with such datasets. We report on the second LeWiDi shared task, which differs from the first edition in three crucial respects: (i) it focuses entirely on NLP, instead of both NLP and computer vision tasks as in its first edition; (ii) it focuses on subjective tasks, instead of covering different types of disagreements, as training with aggregated labels for subjective NLP tasks is a particularly obvious misrepresentation of the data; and (iii) for the evaluation, we concentrate on soft approaches to evaluation. This second edition of LeWiDi attracted a wide array of participants, resulting in 13 shared task submission papers.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 12:20:35 GMT" } ]
2023-05-01T00:00:00
[ [ "Leonardelli", "Elisa", "" ], [ "Uma", "Alexandra", "" ], [ "Abercrombie", "Gavin", "" ], [ "Almanea", "Dina", "" ], [ "Basile", "Valerio", "" ], [ "Fornaciari", "Tommaso", "" ], [ "Plank", "Barbara", "" ], [ "Rieser", "Verena", "" ], [ "Poesio", "Massimo", "" ] ]
new_dataset
0.986403
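The record above concentrates on soft approaches to evaluation. One common instantiation scores a model's predicted label distribution against the normalized distribution of annotator judgments with cross-entropy; a hedged sketch follows (the shared task's official scorer may differ in detail).

```python
import numpy as np

def soft_cross_entropy(pred_probs: np.ndarray, annotator_counts: np.ndarray,
                       eps: float = 1e-12) -> float:
    """Cross-entropy between predicted label distributions and the normalized
    distribution of annotator judgments, averaged over items (rows)."""
    soft_labels = annotator_counts / annotator_counts.sum(axis=1, keepdims=True)
    return float(-(soft_labels * np.log(pred_probs + eps)).sum(axis=1).mean())

# Two items, binary labels; rows = items, columns = label counts from annotators.
counts = np.array([[3, 1], [2, 2]], dtype=float)
preds = np.array([[0.7, 0.3], [0.5, 0.5]])
print(soft_cross_entropy(preds, counts))
```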
2304.14811
Junge Zhang
Junge Zhang, Feihu Zhang, Shaochen Kuang, Li Zhang
NeRF-LiDAR: Generating Realistic LiDAR Point Clouds with Neural Radiance Fields
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Labeling LiDAR point clouds for training autonomous driving is extremely expensive and difficult. LiDAR simulation aims at generating realistic LiDAR data with labels for training and verifying self-driving algorithms more efficiently. Recently, Neural Radiance Fields (NeRF) have been proposed for novel view synthesis using implicit reconstruction of 3D scenes. Inspired by this, we present NeRF-LiDAR, a novel LiDAR simulation method that leverages real-world information to generate realistic LiDAR point clouds. Different from existing LiDAR simulators, we use real images and point cloud data collected by self-driving cars to learn the 3D scene representation, point cloud generation and label rendering. We verify the effectiveness of our NeRF-LiDAR by training different 3D segmentation models on the generated LiDAR point clouds. It reveals that the trained models are able to achieve similar accuracy when compared with the same model trained on the real LiDAR data. Besides, the generated data is capable of boosting the accuracy through pre-training which helps reduce the requirements of the real labeled data.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 12:41:28 GMT" } ]
2023-05-01T00:00:00
[ [ "Zhang", "Junge", "" ], [ "Zhang", "Feihu", "" ], [ "Kuang", "Shaochen", "" ], [ "Zhang", "Li", "" ] ]
new_dataset
0.998836
2304.14918
Johannes Czech
Johannes Czech, Jannis Bl\"uml, Kristian Kersting
Representation Matters: The Game of Chess Poses a Challenge to Vision Transformers
11 pages, 5 figures, 8 tables
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
While transformers have gained the reputation as the "Swiss army knife of AI", no one has challenged them to master the game of chess, one of the classical AI benchmarks. Simply using vision transformers (ViTs) within AlphaZero does not master the game of chess, mainly because ViTs are too slow. Even making them more efficient using a combination of MobileNet and NextViT does not beat what actually matters: a simple change of the input representation and value loss, resulting in a greater boost of up to 180 Elo points over AlphaZero.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 15:33:39 GMT" } ]
2023-05-01T00:00:00
[ [ "Czech", "Johannes", "" ], [ "Blüml", "Jannis", "" ], [ "Kersting", "Kristian", "" ] ]
new_dataset
0.995215
2304.14937
David Wong
James Bungay, Osasenaga Emokpae, Samuel D. Relton, Jane Alty, Stefan Williams, Hui Fang, David C. Wong
Contactless hand tremor amplitude measurement using smartphones: development and pilot evaluation
Accepted to IEEE EMBC 2023, Sydney (pre-refereed version)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Background: Physiological tremor is defined as an involuntary and rhythmic shaking. Tremor of the hand is a key symptom of multiple neurological diseases, and its frequency and amplitude differs according to both disease type and disease progression. In routine clinical practice, tremor frequency and amplitude are assessed by expert rating using a 0 to 4 integer scale. Such ratings are subjective and have poor inter-rater reliability. There is thus a clinical need for a practical and accurate method for objectively assessing hand tremor. Objective: to develop a proof of principle method to measure hand tremor amplitude from smartphone videos. Methods: We created a computer vision pipeline that automatically extracts salient points on the hand and produces a 1-D time series of movement due to tremor, in pixels. Using the smartphones' depth measurement, we convert this measure into real distance units. We assessed the accuracy of the method using 60 videos of simulated tremor of different amplitudes from two healthy adults. Videos were taken at distances of 50, 75 and 100 cm between hand and camera. The participants had skin tone II and VI on the Fitzpatrick scale. We compared our method to a gold-standard measurement from a slide rule. Bland-Altman methods agreement analysis indicated a bias of 0.04 cm and 95% limits of agreement from -1.27 to 1.20 cm. Furthermore, we qualitatively observed that the method was robust to differences in skin tone and limited occlusion, such as a band-aid affixed to the participant's hand. Clinical relevance: We have demonstrated how tremor amplitude can be measured from smartphone videos. In conjunction with tremor frequency, this approach could be used to help diagnose and monitor neurological diseases
[ { "version": "v1", "created": "Fri, 28 Apr 2023 15:48:49 GMT" } ]
2023-05-01T00:00:00
[ [ "Bungay", "James", "" ], [ "Emokpae", "Osasenaga", "" ], [ "Relton", "Samuel D.", "" ], [ "Alty", "Jane", "" ], [ "Williams", "Stefan", "" ], [ "Fang", "Hui", "" ], [ "Wong", "David C.", "" ] ]
new_dataset
0.999174
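The record above converts tremor displacement measured in pixels into real distance units using the smartphone's depth measurement. A minimal sketch of how such a conversion could work under a standard pinhole-camera model; the formula and the example focal length are assumptions for illustration, not the paper's exact procedure.

```python
def tremor_amplitude_cm(pixel_amplitude: float, depth_cm: float,
                        focal_length_px: float) -> float:
    """Pinhole-camera similar-triangles conversion: an object of size S at depth Z
    projects to S * f / Z pixels, so S = pixels * Z / f."""
    return pixel_amplitude * depth_cm / focal_length_px

# e.g. a 40 px peak-to-peak oscillation, hand 75 cm from the camera, focal length 1500 px:
print(tremor_amplitude_cm(40, 75, 1500))  # 2.0 (cm)
```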
2304.14947
Mariya Kilina
Mariya Kilina, Tommaso Elia, Syed Yusha Kareem, Alessandro Carfi, Fulvio Mastrogiovanni
Embodiment perception of a smart home assistant
Published at International Conference on Social Robotics 2022
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Demographic growth and the rise in the average age of the population are increasing the demand for elderly assistance. Health-care-oriented ambient intelligence technologies are fundamental to supporting elderly people's autonomy. In this paper, we present a smart home system that is able to recognize human activities and is integrated with a proactive vocal assistant. We chose one possible user scenario to show the performance of this smart home system and to perform a preliminary comparison between users' experience while watching videos of a volunteer interacting with an embodied versus a non-embodied assistant. The scenario is recorded from the user's point of view, while the user interacts with a robot assistant or a simple vocal assistant. The results of the User Experience Questionnaire show that participants found the robot assistant considerably more attractive, innovative and stimulating in comparison to the vocal assistant.
[ { "version": "v1", "created": "Fri, 28 Apr 2023 16:06:14 GMT" } ]
2023-05-01T00:00:00
[ [ "Kilina", "Mariya", "" ], [ "Elia", "Tommaso", "" ], [ "Kareem", "Syed Yusha", "" ], [ "Carfi", "Alessandro", "" ], [ "Mastrogiovanni", "Fulvio", "" ] ]
new_dataset
0.998539
2203.11667
Duc A. Hoang
Duc A. Hoang
TS-Reconfiguration of $k$-Path Vertex Covers in Caterpillars for $k \geq 4$
12 pages, 3 figures, minor revision, update title and abstract
Theory and Applications of Graphs: Vol. 10: Iss. 1, Article 8 (2023)
10.20429/tag.2023.10108
null
cs.DS cs.DM math.CO
http://creativecommons.org/licenses/by-sa/4.0/
A $k$-path vertex cover ($k$-PVC) of a graph $G$ is a vertex subset $I$ such that each path on $k$ vertices in $G$ contains at least one member of $I$. Imagine that a token is placed on each vertex of a $k$-PVC. Given two $k$-PVCs $I, J$ of a graph $G$, the $k$-Path Vertex Cover Reconfiguration ($k$-PVCR) under Token Sliding ($\mathsf{TS}$) problem asks if there is a sequence of $k$-PVCs between $I$ and $J$ where each intermediate member is obtained from its predecessor by sliding a token from some vertex to one of its unoccupied neighbors. This problem is known to be $\mathtt{PSPACE}$-complete even for planar graphs of maximum degree $3$ and bounded treewidth and can be solved in polynomial time for paths and cycles. Its complexity for trees remains unknown. In this paper, as a first step toward answering this question, for $k \geq 4$, we present a polynomial-time algorithm that solves $k$-PVCR under $\mathsf{TS}$ for caterpillars (i.e., trees formed by attaching leaves to a path).
[ { "version": "v1", "created": "Tue, 22 Mar 2022 12:41:14 GMT" }, { "version": "v2", "created": "Mon, 23 May 2022 01:39:46 GMT" }, { "version": "v3", "created": "Mon, 8 Aug 2022 14:27:15 GMT" } ]
2023-04-28T00:00:00
[ [ "Hoang", "Duc A.", "" ] ]
new_dataset
0.995191
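The record above concerns $k$-path vertex covers and their reconfiguration under token sliding. A small brute-force sketch of the defining property for tiny instances, using the equivalence that a set $I$ is a $k$-PVC exactly when the subgraph induced on the uncovered vertices contains no simple path on $k$ vertices.

```python
def is_k_pvc(adj, cover, k):
    # `adj` maps each vertex to its neighbor list; `cover` is a set of vertices.
    # I is a k-PVC iff the subgraph induced on V \ I has no simple path on k vertices.
    free = set(adj) - set(cover)

    def grows_to_k(path):
        # Can this simple path of uncovered vertices be extended to k vertices?
        if len(path) == k:
            return True
        return any(grows_to_k(path + [w])
                   for w in adj[path[-1]] if w in free and w not in path)

    return not any(grows_to_k([v]) for v in free)

# Path graph 1-2-3-4-5 (a trivial caterpillar): {3} is a 4-PVC, the empty set is not.
P5 = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
print(is_k_pvc(P5, {3}, 4), is_k_pvc(P5, set(), 4))  # True False
```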