Column schema (field name, type, and value statistics as reported by the dataset viewer):

id: string, 9 to 10 characters
submitter: string, 2 to 52 characters
authors: string, 4 to 6.51k characters
title: string, 4 to 246 characters
comments: string, 1 to 523 characters
journal-ref: string, 4 to 345 characters
doi: string, 11 to 120 characters
report-no: string, 2 to 243 characters
categories: string, 5 to 98 characters
license: string, 9 distinct values
abstract: string, 33 to 3.33k characters
versions: list
update_date: timestamp[s]
authors_parsed: list
prediction: string, 1 distinct value
probability: float64, 0.95 to 1
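Each record below lists these sixteen fields in schema order, with null marking an empty field. As a minimal sketch of how such a listing could be consumed programmatically (assuming the rows are exported as JSON Lines with exactly these field names; the file name arxiv_new_dataset_predictions.jsonl is a placeholder, not part of the dataset), one might iterate and filter the records like this:

```python
import json

# Field names in the order given by the schema above.
FIELDS = [
    "id", "submitter", "authors", "title", "comments", "journal-ref",
    "doi", "report-no", "categories", "license", "abstract",
    "versions", "update_date", "authors_parsed", "prediction", "probability",
]

def load_records(path):
    """Yield one dict per row of a JSON Lines export of the dataset."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            row = json.loads(line)
            # Fields that are null in the listing (e.g. journal-ref, doi) map to None.
            yield {name: row.get(name) for name in FIELDS}

if __name__ == "__main__":
    # Keep only rows classified as introducing a new dataset with high confidence.
    for rec in load_records("arxiv_new_dataset_predictions.jsonl"):
        if rec["prediction"] == "new_dataset" and rec["probability"] >= 0.99:
            print(rec["id"], rec["title"])
```

The 0.99 threshold is only illustrative; per the schema, every probability in this listing already lies between 0.95 and 1.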

id: 2206.00717
submitter: Yue Qi
authors: Yue Qi, Mojtaba Vaezi, H. Vincent Poor
title: K-Receiver Wiretap Channel: Optimal Encoding Order and Signaling Design
comments: arXiv admin note: substantial text overlap with arXiv:2205.06412. The paper will appear in TWC
journal-ref: null
doi: null
report-no: null
categories: cs.IT eess.SP math.IT
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
The K-receiver wiretap channel is a channel model where a transmitter broadcasts K independent messages to K intended receivers while keeping them secret from an eavesdropper. The capacity region of the K-receiver multiple-input multiple-output (MIMO) wiretap channel has been characterized by using dirty-paper coding and stochastic encoding. However, K factorial encoding orders may need to be enumerated to evaluate the capacity region, which makes the problem intractable. In addition, even though the capacity region is known, the optimal signaling to achieve the capacity region is unknown. In this paper, we determine one optimal encoding order to achieve every point on the capacity region, and thus reduce the encoding complexity K factorial times. We prove that the optimal decoding order for the K-receiver MIMO wiretap channel is the same as that for the MIMO broadcast channel without secrecy. To be specific, the descending weight ordering in the weighted sum-rate (WSR) maximization problem determines the optimal encoding order. Next, to reach the border of the secrecy capacity region, we form a WSR maximization problem and apply the block successive maximization method to solve this nonconvex problem and find the input covariance matrices corresponding to each message. Numerical results are used to verify the optimality of the encoding order and to demonstrate the efficacy of the proposed signaling design.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 18:58:08 GMT" }, { "version": "v2", "created": "Sun, 2 Apr 2023 21:36:58 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Qi", "Yue", "" ], [ "Vaezi", "Mojtaba", "" ], [ "Poor", "H. Vincent", "" ] ]
prediction: new_dataset
probability: 0.977845

id: 2206.15476
submitter: Marius Dragoi
authors: Marius Dragoi, Elena Burceanu, Emanuela Haller, Andrei Manolache and Florin Brad
title: AnoShift: A Distribution Shift Benchmark for Unsupervised Anomaly Detection
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Analyzing the distribution shift of data is a growing research direction in nowadays Machine Learning (ML), leading to emerging new benchmarks that focus on providing a suitable scenario for studying the generalization properties of ML models. The existing benchmarks are focused on supervised learning, and to the best of our knowledge, there is none for unsupervised learning. Therefore, we introduce an unsupervised anomaly detection benchmark with data that shifts over time, built over Kyoto-2006+, a traffic dataset for network intrusion detection. This type of data meets the premise of shifting the input distribution: it covers a large time span ($10$ years), with naturally occurring changes over time (eg users modifying their behavior patterns, and software updates). We first highlight the non-stationary nature of the data, using a basic per-feature analysis, t-SNE, and an Optimal Transport approach for measuring the overall distribution distances between years. Next, we propose AnoShift, a protocol splitting the data in IID, NEAR, and FAR testing splits. We validate the performance degradation over time with diverse models, ranging from classical approaches to deep learning. Finally, we show that by acknowledging the distribution shift problem and properly addressing it, the performance can be improved compared to the classical training which assumes independent and identically distributed data (on average, by up to $3\%$ for our approach). Dataset and code are available at https://github.com/bit-ml/AnoShift/.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 17:59:22 GMT" }, { "version": "v2", "created": "Mon, 10 Oct 2022 18:23:23 GMT" }, { "version": "v3", "created": "Mon, 17 Oct 2022 16:14:03 GMT" }, { "version": "v4", "created": "Mon, 3 Apr 2023 16:00:22 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Dragoi", "Marius", "" ], [ "Burceanu", "Elena", "" ], [ "Haller", "Emanuela", "" ], [ "Manolache", "Andrei", "" ], [ "Brad", "Florin", "" ] ]
prediction: new_dataset
probability: 0.999674

id: 2207.08562
submitter: Haoran Luo
authors: Haoran Luo, Haihong E, Ling Tan, Gengxian Zhou, Tianyu Yao, Kaiyang Wan
title: DHGE: Dual-View Hyper-Relational Knowledge Graph Embedding for Link Prediction and Entity Typing
comments: Accepted by AAAI 2023
journal-ref: null
doi: null
report-no: null
categories: cs.AI cs.CL cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
In the field of representation learning on knowledge graphs (KGs), a hyper-relational fact consists of a main triple and several auxiliary attribute-value descriptions, which is considered more comprehensive and specific than a triple-based fact. However, currently available hyper-relational KG embedding methods in a single view are limited in application because they weaken the hierarchical structure that represents the affiliation between entities. To overcome this limitation, we propose a dual-view hyper-relational KG structure (DH-KG) that contains a hyper-relational instance view for entities and a hyper-relational ontology view for concepts that are abstracted hierarchically from the entities. This paper defines link prediction and entity typing tasks on DH-KG for the first time and constructs two DH-KG datasets, JW44K-6K, extracted from Wikidata, and HTDM based on medical data. Furthermore, we propose DHGE, a DH-KG embedding model based on GRAN encoders, HGNNs, and joint learning. DHGE outperforms baseline models on DH-KG, according to experimental results. Finally, we provide an example of how this technology can be used to treat hypertension. Our model and new datasets are publicly available.
[ { "version": "v1", "created": "Mon, 18 Jul 2022 12:44:59 GMT" }, { "version": "v2", "created": "Thu, 24 Nov 2022 08:24:38 GMT" }, { "version": "v3", "created": "Fri, 24 Feb 2023 15:57:49 GMT" }, { "version": "v4", "created": "Fri, 31 Mar 2023 21:56:20 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Luo", "Haoran", "" ], [ "E", "Haihong", "" ], [ "Tan", "Ling", "" ], [ "Zhou", "Gengxian", "" ], [ "Yao", "Tianyu", "" ], [ "Wan", "Kaiyang", "" ] ]
prediction: new_dataset
probability: 0.952082

id: 2210.03117
submitter: Hanoona Bangalath Rasheed Ms
authors: Muhammad Uzair Khattak, Hanoona Rasheed, Muhammad Maaz, Salman Khan, Fahad Shahbaz Khan
title: MaPLe: Multi-modal Prompt Learning
comments: Accepted at CVPR2023
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Pre-trained vision-language (V-L) models such as CLIP have shown excellent generalization ability to downstream tasks. However, they are sensitive to the choice of input text prompts and require careful selection of prompt templates to perform well. Inspired by the Natural Language Processing (NLP) literature, recent CLIP adaptation approaches learn prompts as the textual inputs to fine-tune CLIP for downstream tasks. We note that using prompting to adapt representations in a single branch of CLIP (language or vision) is sub-optimal since it does not allow the flexibility to dynamically adjust both representation spaces on a downstream task. In this work, we propose Multi-modal Prompt Learning (MaPLe) for both vision and language branches to improve alignment between the vision and language representations. Our design promotes strong coupling between the vision-language prompts to ensure mutual synergy and discourages learning independent uni-modal solutions. Further, we learn separate prompts across different early stages to progressively model the stage-wise feature relationships to allow rich context learning. We evaluate the effectiveness of our approach on three representative tasks of generalization to novel classes, new target datasets and unseen domain shifts. Compared with the state-of-the-art method Co-CoOp, MaPLe exhibits favorable performance and achieves an absolute gain of 3.45% on novel classes and 2.72% on overall harmonic-mean, averaged over 11 diverse image recognition datasets. Our code and pre-trained models are available at https://github.com/muzairkhattak/multimodal-prompt-learning.
[ { "version": "v1", "created": "Thu, 6 Oct 2022 17:59:56 GMT" }, { "version": "v2", "created": "Sat, 25 Mar 2023 22:10:13 GMT" }, { "version": "v3", "created": "Sat, 1 Apr 2023 06:47:44 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Khattak", "Muhammad Uzair", "" ], [ "Rasheed", "Hanoona", "" ], [ "Maaz", "Muhammad", "" ], [ "Khan", "Salman", "" ], [ "Khan", "Fahad Shahbaz", "" ] ]
prediction: new_dataset
probability: 0.998073

id: 2210.16579
submitter: Aditya Agarwal
authors: Bipasha Sen, Aditya Agarwal, Vinay P Namboodiri, C. V. Jawahar
title: INR-V: A Continuous Representation Space for Video-based Generative Tasks
comments: Published in Transactions on Machine Learning Research (10/2022); https://openreview.net/forum?id=aIoEkwc2oB
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Generating videos is a complex task that is accomplished by generating a set of temporally coherent images frame-by-frame. This limits the expressivity of videos to only image-based operations on the individual video frames needing network designs to obtain temporally coherent trajectories in the underlying image space. We propose INR-V, a video representation network that learns a continuous space for video-based generative tasks. INR-V parameterizes videos using implicit neural representations (INRs), a multi-layered perceptron that predicts an RGB value for each input pixel location of the video. The INR is predicted using a meta-network which is a hypernetwork trained on neural representations of multiple video instances. Later, the meta-network can be sampled to generate diverse novel videos enabling many downstream video-based generative tasks. Interestingly, we find that conditional regularization and progressive weight initialization play a crucial role in obtaining INR-V. The representation space learned by INR-V is more expressive than an image space showcasing many interesting properties not possible with the existing works. For instance, INR-V can smoothly interpolate intermediate videos between known video instances (such as intermediate identities, expressions, and poses in face videos). It can also in-paint missing portions in videos to recover temporally coherent full videos. In this work, we evaluate the space learned by INR-V on diverse generative tasks such as video interpolation, novel video generation, video inversion, and video inpainting against the existing baselines. INR-V significantly outperforms the baselines on several of these demonstrated tasks, clearly showcasing the potential of the proposed representation space.
[ { "version": "v1", "created": "Sat, 29 Oct 2022 11:54:58 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2023 02:58:58 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Sen", "Bipasha", "" ], [ "Agarwal", "Aditya", "" ], [ "Namboodiri", "Vinay P", "" ], [ "Jawahar", "C. V.", "" ] ]
prediction: new_dataset
probability: 0.974979

id: 2211.00895
submitter: Jongho Choi
authors: Jongho Choi, Kyogu Lee
title: Pop2Piano : Pop Audio-based Piano Cover Generation
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.SD cs.LG eess.AS
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Piano covers of pop music are enjoyed by many people. However, the task of automatically generating piano covers of pop music is still understudied. This is partly due to the lack of synchronized {Pop, Piano Cover} data pairs, which made it challenging to apply the latest data-intensive deep learning-based methods. To leverage the power of the data-driven approach, we make a large amount of paired and synchronized {Pop, Piano Cover} data using an automated pipeline. In this paper, we present Pop2Piano, a Transformer network that generates piano covers given waveforms of pop music. To the best of our knowledge, this is the first model to generate a piano cover directly from pop audio without using melody and chord extraction modules. We show that Pop2Piano, trained with our dataset, is capable of producing plausible piano covers.
[ { "version": "v1", "created": "Wed, 2 Nov 2022 05:42:22 GMT" }, { "version": "v2", "created": "Sat, 1 Apr 2023 06:02:16 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Choi", "Jongho", "" ], [ "Lee", "Kyogu", "" ] ]
prediction: new_dataset
probability: 0.998866

id: 2211.05776
submitter: Lu Qi
authors: Lu Qi, Jason Kuen, Weidong Guo, Tiancheng Shen, Jiuxiang Gu, Jiaya Jia, Zhe Lin, Ming-Hsuan Yang
title: High-Quality Entity Segmentation
comments: The project website: http://luqi.info/entityv2.github.io/
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract:
Dense image segmentation tasks e.g., semantic, panoptic) are useful for image editing, but existing methods can hardly generalize well in an in-the-wild setting where there are unrestricted image domains, classes, and image resolution and quality variations. Motivated by these observations, we construct a new entity segmentation dataset, with a strong focus on high-quality dense segmentation in the wild. The dataset contains images spanning diverse image domains and entities, along with plentiful high-resolution images and high-quality mask annotations for training and testing. Given the high-quality and -resolution nature of the dataset, we propose CropFormer which is designed to tackle the intractability of instance-level segmentation on high-resolution images. It improves mask prediction by fusing high-res image crops that provide more fine-grained image details and the full image. CropFormer is the first query-based Transformer architecture that can effectively fuse mask predictions from multiple image views, by learning queries that effectively associate the same entities across the full image and its crop. With CropFormer, we achieve a significant AP gain of $1.9$ on the challenging entity segmentation task. Furthermore, CropFormer consistently improves the accuracy of traditional segmentation tasks and datasets. The dataset and code will be released at http://luqi.info/entityv2.github.io/.
[ { "version": "v1", "created": "Thu, 10 Nov 2022 18:58:22 GMT" }, { "version": "v2", "created": "Sat, 12 Nov 2022 04:10:32 GMT" }, { "version": "v3", "created": "Sun, 2 Apr 2023 22:01:17 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Qi", "Lu", "" ], [ "Kuen", "Jason", "" ], [ "Guo", "Weidong", "" ], [ "Shen", "Tiancheng", "" ], [ "Gu", "Jiuxiang", "" ], [ "Jia", "Jiaya", "" ], [ "Lin", "Zhe", "" ], [ "Yang", "Ming-Hsuan", "" ] ]
prediction: new_dataset
probability: 0.966317

id: 2211.10624
submitter: Jiaxin Deng
authors: Jiaxin Deng, Dong Shen, Haojie Pan, Xiangyu Wu, Ximan Liu, Gaofeng Meng, Fan Yang, Size Li, Ruiji Fu, Zhongyuan Wang
title: A Unified Model for Video Understanding and Knowledge Embedding with Heterogeneous Knowledge Graph Dataset
comments: Accepted by ICMR 2023
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Video understanding is an important task in short video business platforms and it has a wide application in video recommendation and classification. Most of the existing video understanding works only focus on the information that appeared within the video content, including the video frames, audio and text. However, introducing common sense knowledge from the external Knowledge Graph (KG) dataset is essential for video understanding when referring to the content which is less relevant to the video. Owing to the lack of video knowledge graph dataset, the work which integrates video understanding and KG is rare. In this paper, we propose a heterogeneous dataset that contains the multi-modal video entity and fruitful common sense relations. This dataset also provides multiple novel video inference tasks like the Video-Relation-Tag (VRT) and Video-Relation-Video (VRV) tasks. Furthermore, based on this dataset, we propose an end-to-end model that jointly optimizes the video understanding objective with knowledge graph embedding, which can not only better inject factual knowledge into video understanding but also generate effective multi-modal entity embedding for KG. Comprehensive experiments indicate that combining video understanding embedding with factual knowledge benefits the content-based video retrieval performance. Moreover, it also helps the model generate better knowledge graph embedding which outperforms traditional KGE-based methods on VRT and VRV tasks with at least 42.36% and 17.73% improvement in HITS@10.
[ { "version": "v1", "created": "Sat, 19 Nov 2022 09:00:45 GMT" }, { "version": "v2", "created": "Sun, 2 Apr 2023 03:10:21 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Deng", "Jiaxin", "" ], [ "Shen", "Dong", "" ], [ "Pan", "Haojie", "" ], [ "Wu", "Xiangyu", "" ], [ "Liu", "Ximan", "" ], [ "Meng", "Gaofeng", "" ], [ "Yang", "Fan", "" ], [ "Li", "Size", "" ], [ "Fu", "Ruiji", "" ], [ "Wang", "Zhongyuan", "" ] ]
prediction: new_dataset
probability: 0.995514

id: 2211.12886
submitter: Haim Sawdayee
authors: Haim Sawdayee, Amir Vaxman, Amit H. Bermano
title: OReX: Object Reconstruction from Planar Cross-sections Using Neural Fields
comments: CVPR 2023
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.GR cs.LG
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
Reconstructing 3D shapes from planar cross-sections is a challenge inspired by downstream applications like medical imaging and geographic informatics. The input is an in/out indicator function fully defined on a sparse collection of planes in space, and the output is an interpolation of the indicator function to the entire volume. Previous works addressing this sparse and ill-posed problem either produce low quality results, or rely on additional priors such as target topology, appearance information, or input normal directions. In this paper, we present OReX, a method for 3D shape reconstruction from slices alone, featuring a Neural Field as the interpolation prior. A modest neural network is trained on the input planes to return an inside/outside estimate for a given 3D coordinate, yielding a powerful prior that induces smoothness and self-similarities. The main challenge for this approach is high-frequency details, as the neural prior is overly smoothing. To alleviate this, we offer an iterative estimation architecture and a hierarchical input sampling scheme that encourage coarse-to-fine training, allowing the training process to focus on high frequencies at later stages. In addition, we identify and analyze a ripple-like effect stemming from the mesh extraction step. We mitigate it by regularizing the spatial gradients of the indicator function around input in/out boundaries during network training, tackling the problem at the root. Through extensive qualitative and quantitative experimentation, we demonstrate our method is robust, accurate, and scales well with the size of the input. We report state-of-the-art results compared to previous approaches and recent potential solutions, and demonstrate the benefit of our individual contributions through analysis and ablation studies.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 11:44:35 GMT" }, { "version": "v2", "created": "Wed, 1 Mar 2023 08:18:42 GMT" }, { "version": "v3", "created": "Sun, 2 Apr 2023 09:31:02 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Sawdayee", "Haim", "" ], [ "Vaxman", "Amir", "" ], [ "Bermano", "Amit H.", "" ] ]
prediction: new_dataset
probability: 0.998791

id: 2211.17260
submitter: Minjung Son
authors: Minjung Son, Jeong Joon Park, Leonidas Guibas, Gordon Wetzstein
title: SinGRAF: Learning a 3D Generative Radiance Field for a Single Scene
comments: CVPR 2023. Project page: https://www.computationalimaging.org/publications/singraf/
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI cs.GR cs.LG
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Generative models have shown great promise in synthesizing photorealistic 3D objects, but they require large amounts of training data. We introduce SinGRAF, a 3D-aware generative model that is trained with a few input images of a single scene. Once trained, SinGRAF generates different realizations of this 3D scene that preserve the appearance of the input while varying scene layout. For this purpose, we build on recent progress in 3D GAN architectures and introduce a novel progressive-scale patch discrimination approach during training. With several experiments, we demonstrate that the results produced by SinGRAF outperform the closest related works in both quality and diversity by a large margin.
[ { "version": "v1", "created": "Wed, 30 Nov 2022 18:55:27 GMT" }, { "version": "v2", "created": "Sun, 2 Apr 2023 14:26:57 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Son", "Minjung", "" ], [ "Park", "Jeong Joon", "" ], [ "Guibas", "Leonidas", "" ], [ "Wetzstein", "Gordon", "" ] ]
prediction: new_dataset
probability: 0.988767

id: 2212.00776
submitter: Rui Tian
authors: Rui Tian, Zuxuan Wu, Qi Dai, Han Hu, Yu Qiao, Yu-Gang Jiang
title: ResFormer: Scaling ViTs with Multi-Resolution Training
comments: CVPR 2023
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Vision Transformers (ViTs) have achieved overwhelming success, yet they suffer from vulnerable resolution scalability, i.e., the performance drops drastically when presented with input resolutions that are unseen during training. We introduce, ResFormer, a framework that is built upon the seminal idea of multi-resolution training for improved performance on a wide spectrum of, mostly unseen, testing resolutions. In particular, ResFormer operates on replicated images of different resolutions and enforces a scale consistency loss to engage interactive information across different scales. More importantly, to alternate among varying resolutions effectively, especially novel ones in testing, we propose a global-local positional embedding strategy that changes smoothly conditioned on input sizes. We conduct extensive experiments for image classification on ImageNet. The results provide strong quantitative evidence that ResFormer has promising scaling abilities towards a wide range of resolutions. For instance, ResFormer-B-MR achieves a Top-1 accuracy of 75.86% and 81.72% when evaluated on relatively low and high resolutions respectively (i.e., 96 and 640), which are 48% and 7.49% better than DeiT-B. We also demonstrate, moreover, ResFormer is flexible and can be easily extended to semantic segmentation, object detection and video action recognition. Code is available at https://github.com/ruitian12/resformer.
[ { "version": "v1", "created": "Thu, 1 Dec 2022 18:57:20 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2023 06:55:09 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Tian", "Rui", "" ], [ "Wu", "Zuxuan", "" ], [ "Dai", "Qi", "" ], [ "Hu", "Han", "" ], [ "Qiao", "Yu", "" ], [ "Jiang", "Yu-Gang", "" ] ]
prediction: new_dataset
probability: 0.986199

id: 2212.04808
submitter: Muhammad Anwaar Khalid
authors: Muhammad Anwaar Khalid, Kanwal Zulfiqar, Ulfat Bashir, Areeba Shaheen, Rida Iqbal, Zarnab Rizwan, Ghina Rizwan, Muhammad Moazam Fraz
title: CEPHA29: Automatic Cephalometric Landmark Detection Challenge 2023
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by-nc-nd/4.0/
abstract:
Quantitative cephalometric analysis is the most widely used clinical and research tool in modern orthodontics. Accurate localization of cephalometric landmarks enables the quantification and classification of anatomical abnormalities, however, the traditional manual way of marking these landmarks is a very tedious job. Endeavours have constantly been made to develop automated cephalometric landmark detection systems but they are inadequate for orthodontic applications. The fundamental reason for this is that the amount of publicly available datasets as well as the images provided for training in these datasets are insufficient for an AI model to perform well. To facilitate the development of robust AI solutions for morphometric analysis, we organise the CEPHA29 Automatic Cephalometric Landmark Detection Challenge in conjunction with IEEE International Symposium on Biomedical Imaging (ISBI 2023). In this context, we provide the largest known publicly available dataset, consisting of 1000 cephalometric X-ray images. We hope that our challenge will not only derive forward research and innovation in automatic cephalometric landmark identification but will also signal the beginning of a new era in the discipline.
[ { "version": "v1", "created": "Fri, 9 Dec 2022 12:25:58 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2023 10:27:21 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Khalid", "Muhammad Anwaar", "" ], [ "Zulfiqar", "Kanwal", "" ], [ "Bashir", "Ulfat", "" ], [ "Shaheen", "Areeba", "" ], [ "Iqbal", "Rida", "" ], [ "Rizwan", "Zarnab", "" ], [ "Rizwan", "Ghina", "" ], [ "Fraz", "Muhammad Moazam", "" ] ]
prediction: new_dataset
probability: 0.990823

id: 2212.04843
submitter: Bruno Rossi
authors: Martin Macak, Matus Stovcik, Tomas Rebok, Mouzhi Ge, Bruno Rossi, Barbora Buhnova
title: CopAS: A Big Data Forensic Analytics System
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CR cs.SE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
With the advancing digitization of our society, network security has become one of the critical concerns for most organizations. In this paper, we present CopAS, a system targeted at Big Data forensics analysis, allowing network operators to comfortably analyze and correlate large amounts of network data to get insights about potentially malicious and suspicious events. We demonstrate the practical usage of CopAS for insider attack detection on a publicly available PCAP dataset and show how the system can be used to detect insiders hiding their malicious activity in the large amounts of data streams generated during the operations of an organization within the network.
[ { "version": "v1", "created": "Fri, 9 Dec 2022 13:22:41 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2023 08:51:18 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Macak", "Martin", "" ], [ "Stovcik", "Matus", "" ], [ "Rebok", "Tomas", "" ], [ "Ge", "Mouzhi", "" ], [ "Rossi", "Bruno", "" ], [ "Buhnova", "Barbora", "" ] ]
prediction: new_dataset
probability: 0.997923

id: 2212.05923
submitter: So Yeon Min
authors: So Yeon Min, Yao-Hung Hubert Tsai, Wei Ding, Ali Farhadi, Ruslan Salakhutdinov, Yonatan Bisk, Jian Zhang
title: Self-Supervised Object Goal Navigation with In-Situ Finetuning
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.RO cs.LG
license: http://creativecommons.org/licenses/by/4.0/
abstract:
A household robot should be able to navigate to target objects without requiring users to first annotate everything in their home. Most current approaches to object navigation do not test on real robots and rely solely on reconstructed scans of houses and their expensively labeled semantic 3D meshes. In this work, our goal is to build an agent that builds self-supervised models of the world via exploration, the same as a child might - thus we (1) eschew the expense of labeled 3D mesh and (2) enable self-supervised in-situ finetuning in the real world. We identify a strong source of self-supervision (Location Consistency - LocCon) that can train all components of an ObjectNav agent, using unannotated simulated houses. Our key insight is that embodied agents can leverage location consistency as a self-supervision signal - collecting images from different views/angles and applying contrastive learning. We show that our agent can perform competitively in the real world and simulation. Our results also indicate that supervised training with 3D mesh annotations causes models to learn simulation artifacts, which are not transferrable to the real world. In contrast, our LocCon shows the most robust transfer in the real world among the set of models we compare to, and that the real-world performance of all models can be further improved with self-supervised LocCon in-situ training.
[ { "version": "v1", "created": "Fri, 9 Dec 2022 03:41:40 GMT" }, { "version": "v2", "created": "Sun, 2 Apr 2023 01:39:47 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Min", "So Yeon", "" ], [ "Tsai", "Yao-Hung Hubert", "" ], [ "Ding", "Wei", "" ], [ "Farhadi", "Ali", "" ], [ "Salakhutdinov", "Ruslan", "" ], [ "Bisk", "Yonatan", "" ], [ "Zhang", "Jian", "" ] ]
prediction: new_dataset
probability: 0.977203

id: 2212.06250
submitter: Ahmed Abdelreheem Mr.
authors: Ahmed Abdelreheem, Kyle Olszewski, Hsin-Ying Lee, Peter Wonka, Panos Achlioptas
title: ScanEnts3D: Exploiting Phrase-to-3D-Object Correspondences for Improved Visio-Linguistic Models in 3D Scenes
comments: The project's webpage is https://scanents3d.github.io/
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
The two popular datasets ScanRefer [16] and ReferIt3D [3] connect natural language to real-world 3D data. In this paper, we curate a large-scale and complementary dataset extending both the aforementioned ones by associating all objects mentioned in a referential sentence to their underlying instances inside a 3D scene. Specifically, our Scan Entities in 3D (ScanEnts3D) dataset provides explicit correspondences between 369k objects across 84k natural referential sentences, covering 705 real-world scenes. Crucially, we show that by incorporating intuitive losses that enable learning from this novel dataset, we can significantly improve the performance of several recently introduced neural listening architectures, including improving the SoTA in both the Nr3D and ScanRefer benchmarks by 4.3% and 5.0%, respectively. Moreover, we experiment with competitive baselines and recent methods for the task of language generation and show that, as with neural listeners, 3D neural speakers can also noticeably benefit by training with ScanEnts3D, including improving the SoTA by 13.2 CIDEr points on the Nr3D benchmark. Overall, our carefully conducted experimental studies strongly support the conclusion that, by learning on ScanEnts3D, commonly used visio-linguistic 3D architectures can become more efficient and interpretable in their generalization without needing to provide these newly collected annotations at test time. The project's webpage is https://scanents3d.github.io/ .
[ { "version": "v1", "created": "Mon, 12 Dec 2022 21:25:58 GMT" }, { "version": "v2", "created": "Sat, 1 Apr 2023 12:13:27 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Abdelreheem", "Ahmed", "" ], [ "Olszewski", "Kyle", "" ], [ "Lee", "Hsin-Ying", "" ], [ "Wonka", "Peter", "" ], [ "Achlioptas", "Panos", "" ] ]
prediction: new_dataset
probability: 0.997187

id: 2212.08045
submitter: Michael Tschannen
authors: Michael Tschannen, Basil Mustafa, Neil Houlsby
title: CLIPPO: Image-and-Language Understanding from Pixels Only
comments: CVPR 2023. Code and pretrained models are available at https://github.com/google-research/big_vision/blob/main/big_vision/configs/proj/clippo/README.md
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Multimodal models are becoming increasingly effective, in part due to unified components, such as the Transformer architecture. However, multimodal models still often consist of many task- and modality-specific pieces and training procedures. For example, CLIP (Radford et al., 2021) trains independent text and image towers via a contrastive loss. We explore an additional unification: the use of a pure pixel-based model to perform image, text, and multimodal tasks. Our model is trained with contrastive loss alone, so we call it CLIP-Pixels Only (CLIPPO). CLIPPO uses a single encoder that processes both regular images and text rendered as images. CLIPPO performs image-based tasks such as retrieval and zero-shot image classification almost as well as CLIP-style models, with half the number of parameters and no text-specific tower or embedding. When trained jointly via image-text contrastive learning and next-sentence contrastive learning, CLIPPO can perform well on natural language understanding tasks, without any word-level loss (language modelling or masked language modelling), outperforming pixel-based prior work. Surprisingly, CLIPPO can obtain good accuracy in visual question answering, simply by rendering the question and image together. Finally, we exploit the fact that CLIPPO does not require a tokenizer to show that it can achieve strong performance on multilingual multimodal retrieval without modifications.
[ { "version": "v1", "created": "Thu, 15 Dec 2022 18:52:08 GMT" }, { "version": "v2", "created": "Sat, 1 Apr 2023 21:01:36 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Tschannen", "Michael", "" ], [ "Mustafa", "Basil", "" ], [ "Houlsby", "Neil", "" ] ]
prediction: new_dataset
probability: 0.999113

id: 2212.08067
submitter: Yufan Ren
authors: Yufan Ren, Fangjinhua Wang, Tong Zhang, Marc Pollefeys and Sabine S\"usstrunk
title: VolRecon: Volume Rendering of Signed Ray Distance Functions for Generalizable Multi-View Reconstruction
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
The success of the Neural Radiance Fields (NeRF) in novel view synthesis has inspired researchers to propose neural implicit scene reconstruction. However, most existing neural implicit reconstruction methods optimize per-scene parameters and therefore lack generalizability to new scenes. We introduce VolRecon, a novel generalizable implicit reconstruction method with Signed Ray Distance Function (SRDF). To reconstruct the scene with fine details and little noise, VolRecon combines projection features aggregated from multi-view features, and volume features interpolated from a coarse global feature volume. Using a ray transformer, we compute SRDF values of sampled points on a ray and then render color and depth. On DTU dataset, VolRecon outperforms SparseNeuS by about 30% in sparse view reconstruction and achieves comparable accuracy as MVSNet in full view reconstruction. Furthermore, our approach exhibits good generalization performance on the large-scale ETH3D benchmark.
[ { "version": "v1", "created": "Thu, 15 Dec 2022 18:59:54 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2023 06:54:50 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Ren", "Yufan", "" ], [ "Wang", "Fangjinhua", "" ], [ "Zhang", "Tong", "" ], [ "Pollefeys", "Marc", "" ], [ "Süsstrunk", "Sabine", "" ] ]
prediction: new_dataset
probability: 0.97656

id: 2212.14704
submitter: Jiale Xu
authors: Jiale Xu, Xintao Wang, Weihao Cheng, Yan-Pei Cao, Ying Shan, Xiaohu Qie, Shenghua Gao
title: Dream3D: Zero-Shot Text-to-3D Synthesis Using 3D Shape Prior and Text-to-Image Diffusion Models
comments: Accepted by CVPR 2023. Project page: https://bluestyle97.github.io/dream3d/
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Recent CLIP-guided 3D optimization methods, such as DreamFields and PureCLIPNeRF, have achieved impressive results in zero-shot text-to-3D synthesis. However, due to scratch training and random initialization without prior knowledge, these methods often fail to generate accurate and faithful 3D structures that conform to the input text. In this paper, we make the first attempt to introduce explicit 3D shape priors into the CLIP-guided 3D optimization process. Specifically, we first generate a high-quality 3D shape from the input text in the text-to-shape stage as a 3D shape prior. We then use it as the initialization of a neural radiance field and optimize it with the full prompt. To address the challenging text-to-shape generation task, we present a simple yet effective approach that directly bridges the text and image modalities with a powerful text-to-image diffusion model. To narrow the style domain gap between the images synthesized by the text-to-image diffusion model and shape renderings used to train the image-to-shape generator, we further propose to jointly optimize a learnable text prompt and fine-tune the text-to-image diffusion model for rendering-style image generation. Our method, Dream3D, is capable of generating imaginative 3D content with superior visual quality and shape accuracy compared to state-of-the-art methods.
[ { "version": "v1", "created": "Wed, 28 Dec 2022 18:23:47 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2023 15:55:40 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Xu", "Jiale", "" ], [ "Wang", "Xintao", "" ], [ "Cheng", "Weihao", "" ], [ "Cao", "Yan-Pei", "" ], [ "Shan", "Ying", "" ], [ "Qie", "Xiaohu", "" ], [ "Gao", "Shenghua", "" ] ]
prediction: new_dataset
probability: 0.998853

id: 2301.02379
submitter: Jinbo Xing
authors: Jinbo Xing, Menghan Xia, Yuechen Zhang, Xiaodong Cun, Jue Wang, Tien-Tsin Wong
title: CodeTalker: Speech-Driven 3D Facial Animation with Discrete Motion Prior
comments: CVPR2023 Camera-Ready. Project Page: https://doubiiu.github.io/projects/codetalker/, Code: https://github.com/Doubiiu/CodeTalker
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Speech-driven 3D facial animation has been widely studied, yet there is still a gap to achieving realism and vividness due to the highly ill-posed nature and scarcity of audio-visual data. Existing works typically formulate the cross-modal mapping into a regression task, which suffers from the regression-to-mean problem leading to over-smoothed facial motions. In this paper, we propose to cast speech-driven facial animation as a code query task in a finite proxy space of the learned codebook, which effectively promotes the vividness of the generated motions by reducing the cross-modal mapping uncertainty. The codebook is learned by self-reconstruction over real facial motions and thus embedded with realistic facial motion priors. Over the discrete motion space, a temporal autoregressive model is employed to sequentially synthesize facial motions from the input speech signal, which guarantees lip-sync as well as plausible facial expressions. We demonstrate that our approach outperforms current state-of-the-art methods both qualitatively and quantitatively. Also, a user study further justifies our superiority in perceptual quality.
[ { "version": "v1", "created": "Fri, 6 Jan 2023 05:04:32 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2023 15:58:43 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Xing", "Jinbo", "" ], [ "Xia", "Menghan", "" ], [ "Zhang", "Yuechen", "" ], [ "Cun", "Xiaodong", "" ], [ "Wang", "Jue", "" ], [ "Wong", "Tien-Tsin", "" ] ]
prediction: new_dataset
probability: 0.99868

id: 2301.02778
submitter: Gongyang Li
authors: Gongyang Li, Zhi Liu, Xinpeng Zhang, Weisi Lin
title: Lightweight Salient Object Detection in Optical Remote-Sensing Images via Semantic Matching and Edge Alignment
comments: 11 pages, 4 figures, Accepted by IEEE Transactions on Geoscience and Remote Sensing 2023
journal-ref: null
doi: 10.1109/TGRS.2023.3235717
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by-nc-sa/4.0/
abstract:
Recently, relying on convolutional neural networks (CNNs), many methods for salient object detection in optical remote sensing images (ORSI-SOD) are proposed. However, most methods ignore the huge parameters and computational cost brought by CNNs, and only a few pay attention to the portability and mobility. To facilitate practical applications, in this paper, we propose a novel lightweight network for ORSI-SOD based on semantic matching and edge alignment, termed SeaNet. Specifically, SeaNet includes a lightweight MobileNet-V2 for feature extraction, a dynamic semantic matching module (DSMM) for high-level features, an edge self-alignment module (ESAM) for low-level features, and a portable decoder for inference. First, the high-level features are compressed into semantic kernels. Then, semantic kernels are used to activate salient object locations in two groups of high-level features through dynamic convolution operations in DSMM. Meanwhile, in ESAM, cross-scale edge information extracted from two groups of low-level features is self-aligned through L2 loss and used for detail enhancement. Finally, starting from the highest-level features, the decoder infers salient objects based on the accurate locations and fine details contained in the outputs of the two modules. Extensive experiments on two public datasets demonstrate that our lightweight SeaNet not only outperforms most state-of-the-art lightweight methods but also yields comparable accuracy with state-of-the-art conventional methods, while having only 2.76M parameters and running with 1.7G FLOPs for 288x288 inputs. Our code and results are available at https://github.com/MathLee/SeaNet.
[ { "version": "v1", "created": "Sat, 7 Jan 2023 04:33:51 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2023 05:02:47 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Li", "Gongyang", "" ], [ "Liu", "Zhi", "" ], [ "Zhang", "Xinpeng", "" ], [ "Lin", "Weisi", "" ] ]
prediction: new_dataset
probability: 0.998414

id: 2303.07945
submitter: Heeseung Kim
authors: Chaehun Shin, Heeseung Kim, Che Hyun Lee, Sang-gil Lee, Sungroh Yoon
title: Edit-A-Video: Single Video Editing with Object-Aware Consistency
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Despite the fact that text-to-video (TTV) model has recently achieved remarkable success, there have been few approaches on TTV for its extension to video editing. Motivated by approaches on TTV models adapting from diffusion-based text-to-image (TTI) models, we suggest the video editing framework given only a pretrained TTI model and a single <text, video> pair, which we term Edit-A-Video. The framework consists of two stages: (1) inflating the 2D model into the 3D model by appending temporal modules and tuning on the source video (2) inverting the source video into the noise and editing with target text prompt and attention map injection. Each stage enables the temporal modeling and preservation of semantic attributes of the source video. One of the key challenges for video editing include a background inconsistency problem, where the regions not included for the edit suffer from undesirable and inconsistent temporal alterations. To mitigate this issue, we also introduce a novel mask blending method, termed as sparse-causal blending (SC Blending). We improve previous mask blending methods to reflect the temporal consistency so that the area where the editing is applied exhibits smooth transition while also achieving spatio-temporal consistency of the unedited regions. We present extensive experimental results over various types of text and videos, and demonstrate the superiority of the proposed method compared to baselines in terms of background consistency, text alignment, and video editing quality.
[ { "version": "v1", "created": "Tue, 14 Mar 2023 14:35:59 GMT" }, { "version": "v2", "created": "Thu, 23 Mar 2023 03:04:45 GMT" }, { "version": "v3", "created": "Sat, 1 Apr 2023 01:45:15 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Shin", "Chaehun", "" ], [ "Kim", "Heeseung", "" ], [ "Lee", "Che Hyun", "" ], [ "Lee", "Sang-gil", "" ], [ "Yoon", "Sungroh", "" ] ]
prediction: new_dataset
probability: 0.994755

id: 2303.08594
submitter: Junjie He
authors: Junjie He, Pengyu Li, Yifeng Geng, Xuansong Xie
title: FastInst: A Simple Query-Based Model for Real-Time Instance Segmentation
comments: CVPR 2023
journal-ref: null
doi: null
report-no: null
categories: cs.CV
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Recent attention in instance segmentation has focused on query-based models. Despite being non-maximum suppression (NMS)-free and end-to-end, the superiority of these models on high-accuracy real-time benchmarks has not been well demonstrated. In this paper, we show the strong potential of query-based models on efficient instance segmentation algorithm designs. We present FastInst, a simple, effective query-based framework for real-time instance segmentation. FastInst can execute at a real-time speed (i.e., 32.5 FPS) while yielding an AP of more than 40 (i.e., 40.5 AP) on COCO test-dev without bells and whistles. Specifically, FastInst follows the meta-architecture of recently introduced Mask2Former. Its key designs include instance activation-guided queries, dual-path update strategy, and ground truth mask-guided learning, which enable us to use lighter pixel decoders, fewer Transformer decoder layers, while achieving better performance. The experiments show that FastInst outperforms most state-of-the-art real-time counterparts, including strong fully convolutional baselines, in both speed and accuracy. Code can be found at https://github.com/junjiehe96/FastInst .
[ { "version": "v1", "created": "Wed, 15 Mar 2023 13:06:30 GMT" }, { "version": "v2", "created": "Sat, 1 Apr 2023 17:55:21 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "He", "Junjie", "" ], [ "Li", "Pengyu", "" ], [ "Geng", "Yifeng", "" ], [ "Xie", "Xuansong", "" ] ]
prediction: new_dataset
probability: 0.995837

id: 2303.11240
submitter: Patrick Gerard
authors: Patrick Gerard, Nicholas Botzer, Tim Weninger
title: Truth Social Dataset
comments: 7 pages, 5 figures, ICWSM 2023
journal-ref: null
doi: null
report-no: null
categories: cs.SI
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Formally announced to the public following former President Donald Trump's bans and suspensions from mainstream social networks in early 2022 after his role in the January 6 Capitol Riots, Truth Social was launched as an "alternative" social media platform that claims to be a refuge for free speech, offering a platform for those disaffected by the content moderation policies of the existing, mainstream social networks. The subsequent rise of Truth Social has been driven largely by hard-line supporters of the former president as well as those affected by the content moderation of other social networks. These distinct qualities combined with its status as the main mouthpiece of the former president positions Truth Social as a particularly influential social media platform and give rise to several research questions. However, outside of a handful of news reports, little is known about the new social media platform partially due to a lack of well-curated data. In the current work, we describe a dataset of over 823,000 posts to Truth Social and and social network with over 454,000 distinct users. In addition to the dataset itself, we also present some basic analysis of its content, certain temporal features, and its network.
[ { "version": "v1", "created": "Mon, 20 Mar 2023 16:26:24 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Gerard", "Patrick", "" ], [ "Botzer", "Nicholas", "" ], [ "Weninger", "Tim", "" ] ]
prediction: new_dataset
probability: 0.999891

id: 2303.12570
submitter: Fengji Zhang
authors: Fengji Zhang, Bei Chen, Yue Zhang, Jin Liu, Daoguang Zan, Yi Mao, Jian-Guang Lou, Weizhu Chen
title: RepoCoder: Repository-Level Code Completion Through Iterative Retrieval and Generation
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL cs.AI cs.PL cs.SE
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
The task of repository-level code completion is to continue writing the unfinished code based on a broader context of the repository. While for automated code completion tools, it is difficult to utilize the useful information scattered in different files. We propose RepoCoder, a simple, generic, and effective framework to address the challenge. It streamlines the repository-level code completion process by incorporating a similarity-based retriever and a pre-trained code language model, which allows for the effective utilization of repository-level information for code completion and grants the ability to generate code at various levels of granularity. Furthermore, RepoCoder utilizes a novel iterative retrieval-generation paradigm that bridges the gap between retrieval context and the intended completion target. We also propose a new benchmark RepoEval, which consists of the latest and high-quality real-world repositories covering line, API invocation, and function body completion scenarios. We test the performance of RepoCoder by using various combinations of code retrievers and generators. Experimental results indicate that RepoCoder significantly improves the zero-shot code completion baseline by over 10% in all settings and consistently outperforms the vanilla retrieval-augmented code completion approach. Furthermore, we validate the effectiveness of RepoCoder through comprehensive analysis, providing valuable insights for future research.
[ { "version": "v1", "created": "Wed, 22 Mar 2023 13:54:46 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2023 08:07:16 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Zhang", "Fengji", "" ], [ "Chen", "Bei", "" ], [ "Zhang", "Yue", "" ], [ "Liu", "Jin", "" ], [ "Zan", "Daoguang", "" ], [ "Mao", "Yi", "" ], [ "Lou", "Jian-Guang", "" ], [ "Chen", "Weizhu", "" ] ]
prediction: new_dataset
probability: 0.985557

id: 2303.13962
submitter: Jianzhu Huai
authors: Yuan Zhuang, Binliang Wang, Jianzhu Huai, Miao Li
title: 4D iRIOM: 4D Imaging Radar Inertial Odometry and Mapping
comments: 8 pages, 8 figures, 4 tables, the proofread version will appear on RA-L soon
journal-ref: null
doi: null
report-no: null
categories: cs.RO
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Millimeter wave radar can measure distances, directions, and Doppler velocity for objects in harsh conditions such as fog. The 4D imaging radar with both vertical and horizontal data resembling an image can also measure objects' height. Previous studies have used 3D radars for ego-motion estimation. But few methods leveraged the rich data of imaging radars, and they usually omitted the mapping aspect, thus leading to inferior odometry accuracy. This paper presents a real-time imaging radar inertial odometry and mapping method, iRIOM, based on the submap concept. To deal with moving objects and multipath reflections, we use the graduated non-convexity method to robustly and efficiently estimate ego-velocity from a single scan. To measure the agreement between sparse non-repetitive radar scan points and submap points, the distribution-to-multi-distribution distance for matches is adopted. The ego-velocity, scan-to-submap matches are fused with the 6D inertial data by an iterative extended Kalman filter to get the platform's 3D position and orientation. A loop closure module is also developed to curb the odometry module's drift. To our knowledge, iRIOM based on the two modules is the first 4D radar inertial SLAM system. On our and third-party data, we show iRIOM's favorable odometry accuracy and mapping consistency against the FastLIO-SLAM and the EKFRIO. Also, the ablation study reveal the benefit of inertial data versus the constant velocity model, and scan-to-submap matching versus scan-to-scan matching.
[ { "version": "v1", "created": "Fri, 24 Mar 2023 12:36:26 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2023 03:53:59 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Zhuang", "Yuan", "" ], [ "Wang", "Binliang", "" ], [ "Huai", "Jianzhu", "" ], [ "Li", "Miao", "" ] ]
prediction: new_dataset
probability: 0.997827

id: 2303.17774
submitter: Gengxin Liu
authors: Gengxin Liu, Qian Sun, Haibin Huang, Chongyang Ma, Yulan Guo, Li Yi, Hui Huang, Ruizhen Hu
title: Semi-Weakly Supervised Object Kinematic Motion Prediction
comments: CVPR 2023
journal-ref: null
doi: null
report-no: null
categories: cs.CV cs.AI cs.GR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Given a 3D object, kinematic motion prediction aims to identify the mobile parts as well as the corresponding motion parameters. Due to the large variations in both topological structure and geometric details of 3D objects, this remains a challenging task and the lack of large scale labeled data also constrain the performance of deep learning based approaches. In this paper, we tackle the task of object kinematic motion prediction problem in a semi-weakly supervised manner. Our key observations are two-fold. First, although 3D dataset with fully annotated motion labels is limited, there are existing datasets and methods for object part semantic segmentation at large scale. Second, semantic part segmentation and mobile part segmentation is not always consistent but it is possible to detect the mobile parts from the underlying 3D structure. Towards this end, we propose a graph neural network to learn the map between hierarchical part-level segmentation and mobile parts parameters, which are further refined based on geometric alignment. This network can be first trained on PartNet-Mobility dataset with fully labeled mobility information and then applied on PartNet dataset with fine-grained and hierarchical part-level segmentation. The network predictions yield a large scale of 3D objects with pseudo labeled mobility information and can further be used for weakly-supervised learning with pre-existing segmentation. Our experiments show there are significant performance boosts with the augmented data for previous method designed for kinematic motion prediction on 3D partial scans.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 02:37:36 GMT" }, { "version": "v2", "created": "Mon, 3 Apr 2023 02:36:17 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Liu", "Gengxin", "" ], [ "Sun", "Qian", "" ], [ "Huang", "Haibin", "" ], [ "Ma", "Chongyang", "" ], [ "Guo", "Yulan", "" ], [ "Yi", "Li", "" ], [ "Huang", "Hui", "" ], [ "Hu", "Ruizhen", "" ] ]
prediction: new_dataset
probability: 0.995991

id: 2304.00111
submitter: Yonghui Wu
authors: Aokun Chen, Daniel Paredes, Zehao Yu, Xiwei Lou, Roberta Brunson, Jamie N. Thomas, Kimberly A. Martinez, Robert J. Lucero, Tanja Magoc, Laurence M. Solberg, Urszula A. Snigurska, Sarah E. Ser, Mattia Prosperi, Jiang Bian, Ragnhildur I. Bjarnadottir, Yonghui Wu
title: Identifying Symptoms of Delirium from Clinical Narratives Using Natural Language Processing
comments: null
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
Delirium is an acute decline or fluctuation in attention, awareness, or other cognitive function that can lead to serious adverse outcomes. Despite the severe outcomes, delirium is frequently unrecognized and uncoded in patients' electronic health records (EHRs) due to its transient and diverse nature. Natural language processing (NLP), a key technology that extracts medical concepts from clinical narratives, has shown great potential in studies of delirium outcomes and symptoms. To assist in the diagnosis and phenotyping of delirium, we formed an expert panel to categorize diverse delirium symptoms, composed annotation guidelines, created a delirium corpus with diverse delirium symptoms, and developed NLP methods to extract delirium symptoms from clinical notes. We compared 5 state-of-the-art transformer models including 2 models (BERT and RoBERTa) from the general domain and 3 models (BERT_MIMIC, RoBERTa_MIMIC, and GatorTron) from the clinical domain. GatorTron achieved the best strict and lenient F1 scores of 0.8055 and 0.8759, respectively. We conducted an error analysis to identify challenges in annotating delirium symptoms and developing NLP systems. To the best of our knowledge, this is the first large language model-based delirium symptom extraction system. Our study lays the foundation for the future development of computable phenotypes and diagnosis methods for delirium.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 20:16:44 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Chen", "Aokun", "" ], [ "Paredes", "Daniel", "" ], [ "Yu", "Zehao", "" ], [ "Lou", "Xiwei", "" ], [ "Brunson", "Roberta", "" ], [ "Thomas", "Jamie N.", "" ], [ "Martinez", "Kimberly A.", "" ], [ "Lucero", "Robert J.", "" ], [ "Magoc", "Tanja", "" ], [ "Solberg", "Laurence M.", "" ], [ "Snigurska", "Urszula A.", "" ], [ "Ser", "Sarah E.", "" ], [ "Prosperi", "Mattia", "" ], [ "Bian", "Jiang", "" ], [ "Bjarnadottir", "Ragnhildur I.", "" ], [ "Wu", "Yonghui", "" ] ]
prediction: new_dataset
probability: 0.99305

id: 2304.00122
submitter: Harish Karunakaran
authors: Harish Karunakaran, Gopeshh Raaj Subbaraj
title: Trajectory Control for Differential Drive Mobile Manipulators
comments: 9 pages
journal-ref: null
doi: null
report-no: null
categories: cs.RO
license: http://creativecommons.org/licenses/by/4.0/
abstract:
Mobile manipulator systems are comprised of a mobile platform with one or more manipulators and are of great interest in a number of applications such as indoor warehouses, mining, construction, forestry etc. We present an approach for computing actuator commands for such systems so that they can follow desired end-effector and platform trajectories without the violation of the nonholonomic constraints of the system in an indoor warehouse environment. We work with the Fetch robot which consists of a 7-DOF manipulator with a differential drive mobile base to validate our method. The major contributions of our project are, writing the dynamics of the system, Trajectory planning for the manipulator and the mobile base, state machine for the pick and place task and the inverse kinematics of the manipulator. Our results indicate that we are able to successfully implement trajectory control on the mobile base and the manipulator of the Fetch robot.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 20:47:32 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Karunakaran", "Harish", "" ], [ "Subbaraj", "Gopeshh Raaj", "" ] ]
prediction: new_dataset
probability: 0.996611

id: 2304.00235
submitter: Suman Adhya
authors: Suman Adhya, Debarshi Kumar Sanyal
title: What Does the Indian Parliament Discuss? An Exploratory Analysis of the Question Hour in the Lok Sabha
comments: Accepted at the workshop PoliticalNLP co-located with the conference LREC 2022
journal-ref: null
doi: null
report-no: null
categories: cs.CL
license: http://creativecommons.org/licenses/by/4.0/
abstract:
The TCPD-IPD dataset is a collection of questions and answers discussed in the Lower House of the Parliament of India during the Question Hour between 1999 and 2019. Although it is difficult to analyze such a huge collection manually, modern text analysis tools can provide a powerful means to navigate it. In this paper, we perform an exploratory analysis of the dataset. In particular, we present insightful corpus-level statistics and a detailed analysis of three subsets of the dataset. In the latter analysis, the focus is on understanding the temporal evolution of topics using a dynamic topic model. We observe that the parliamentary conversation indeed mirrors the political and socio-economic tensions of each period.
[ { "version": "v1", "created": "Sat, 1 Apr 2023 05:43:22 GMT" } ]
update_date: 2023-04-04T00:00:00
[ [ "Adhya", "Suman", "" ], [ "Sanyal", "Debarshi Kumar", "" ] ]
prediction: new_dataset
probability: 0.998921

id: 2304.00265
submitter: Masayuki Tezuka
authors: Masayuki Tezuka, Keisuke Tanaka
title: Pointcheval-Sanders Signature-Based Synchronized Aggregate Signature
comments: null
journal-ref: ICISC 2022
doi: 10.1007/978-3-031-29371-9_16
report-no: null
categories: cs.CR
license: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
abstract:
A synchronized aggregate signature is a special type of signature in which all signers share a synchronized time period and signatures generated in the same period can be aggregated. This type of signature has a wide range of applications for systems with a natural reporting period, such as log and sensor data, or blockchain protocols. In CT-RSA 2016, Pointcheval and Sanders proposed a new randomizable signature scheme. Since this signature scheme is based on type-3 pairing, it achieves a short signature size and efficient signature verification. In this paper, we design a Pointcheval-Sanders signature-based synchronized aggregate signature scheme and prove its security under the generalized Pointcheval-Sanders assumption in the random oracle model. Our scheme offers the most efficient aggregate signature verification among synchronized aggregate signature schemes based on bilinear groups.
[ { "version": "v1", "created": "Sat, 1 Apr 2023 09:12:41 GMT" } ]
2023-04-04T00:00:00
[ [ "Tezuka", "Masayuki", "" ], [ "Tanaka", "Keisuke", "" ] ]
new_dataset
0.992955
2304.00350
Won Ik Cho
Won Ik Cho, Yoon Kyung Lee, Seoyeon Bae, Jihwan Kim, Sangah Park, Moosung Kim, Sowon Hahn, Nam Soo Kim
When Crowd Meets Persona: Creating a Large-Scale Open-Domain Persona Dialogue Corpus
Presented at HCOMP 2022 as Works-in-Progress
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Building a natural language dataset requires caution since word semantics is vulnerable to subtle text change or the definition of the annotated concept. Such a tendency can be seen in generative tasks like question-answering and dialogue generation and also in tasks that create a categorization-based corpus, like topic classification or sentiment analysis. Open-domain conversations involve two or more crowdworkers freely conversing about any topic, and collecting such data is particularly difficult for two reasons: 1) the dataset should be "crafted" rather than "obtained" due to privacy concerns, and 2) paid creation of such dialogues may differ from how crowdworkers behave in real-world settings. In this study, we tackle these issues when creating a large-scale open-domain persona dialogue corpus, where persona implies that the conversation is performed by several actors with a fixed persona and user-side workers from an unspecified crowd.
[ { "version": "v1", "created": "Sat, 1 Apr 2023 16:10:36 GMT" } ]
2023-04-04T00:00:00
[ [ "Cho", "Won Ik", "" ], [ "Lee", "Yoon Kyung", "" ], [ "Bae", "Seoyeon", "" ], [ "Kim", "Jihwan", "" ], [ "Park", "Sangah", "" ], [ "Kim", "Moosung", "" ], [ "Hahn", "Sowon", "" ], [ "Kim", "Nam Soo", "" ] ]
new_dataset
0.99978
2304.00358
Steven Obua
Steven Obua
Logic is Algebra
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Logic really is just algebra, given one uses the right kind of algebra, and the right kind of logic. The right kind of algebra is abstraction algebra, and the right kind of logic is abstraction logic.
[ { "version": "v1", "created": "Sat, 1 Apr 2023 16:51:57 GMT" } ]
2023-04-04T00:00:00
[ [ "Obua", "Steven", "" ] ]
new_dataset
0.999882
2304.00359
Yukang Cao
Yukang Cao, Kai Han, Kwan-Yee K. Wong
SeSDF: Self-evolved Signed Distance Field for Implicit 3D Clothed Human Reconstruction
25 pages, 21 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We address the problem of clothed human reconstruction from a single image or uncalibrated multi-view images. Existing methods struggle with reconstructing detailed geometry of a clothed human and often require a calibrated setting for multi-view reconstruction. We propose a flexible framework which, by leveraging the parametric SMPL-X model, can take an arbitrary number of input images to reconstruct a clothed human model under an uncalibrated setting. At the core of our framework is our novel self-evolved signed distance field (SeSDF) module which allows the framework to learn to deform the signed distance field (SDF) derived from the fitted SMPL-X model, such that detailed geometry reflecting the actual clothed human can be encoded for better reconstruction. Besides, we propose a simple method for self-calibration of multi-view images via the fitted SMPL-X parameters. This lifts the requirement of tedious manual calibration and largely increases the flexibility of our method. Further, we introduce an effective occlusion-aware feature fusion strategy to account for the most useful features to reconstruct the human model. We thoroughly evaluate our framework on public benchmarks, demonstrating significant superiority over the state-of-the-arts both qualitatively and quantitatively.
[ { "version": "v1", "created": "Sat, 1 Apr 2023 16:58:19 GMT" } ]
2023-04-04T00:00:00
[ [ "Cao", "Yukang", "" ], [ "Han", "Kai", "" ], [ "Wong", "Kwan-Yee K.", "" ] ]
new_dataset
0.988144
2304.00378
Xiou Ge
Xiou Ge, Yun-Cheng Wang, Bin Wang, C.-C. Jay Kuo
Knowledge Graph Embedding with 3D Compound Geometric Transformations
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
The cascade of 2D geometric transformations was exploited to model relations between entities in a knowledge graph (KG), leading to an effective KG embedding (KGE) model, CompoundE. Furthermore, rotation in 3D space was proposed as a new KGE model, Rotate3D, by leveraging its non-commutative property. Inspired by CompoundE and Rotate3D, we leverage 3D compound geometric transformations, including translation, rotation, scaling, reflection, and shear, and propose a family of KGE models, named CompoundE3D, in this work. CompoundE3D allows multiple design variants to match the rich underlying characteristics of a KG. Since each variant has its own advantages on a subset of relations, an ensemble of multiple variants can yield superior performance. The effectiveness and flexibility of CompoundE3D are experimentally verified on four popular link prediction datasets.
[ { "version": "v1", "created": "Sat, 1 Apr 2023 19:56:51 GMT" } ]
2023-04-04T00:00:00
[ [ "Ge", "Xiou", "" ], [ "Wang", "Yun-Cheng", "" ], [ "Wang", "Bin", "" ], [ "Kuo", "C. -C. Jay", "" ] ]
new_dataset
0.973438
2304.00411
Tomoya Sasaki
Tomoya Sasaki, Narin Okazaki, Takatoshi Yoshida, Alfonso Balandra, Zendai Kashino and Masahiko Inami
SolefulTap: Augmenting Tap Dancing Experience using a Floor-Type Impact Display
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
We propose SolefulTap for a novel tap dancing experience. It allows users to feel as if they are tap dancing or appreciate a tap dancing performance using the sensations of their own feet. SolefulTap uses a method called Step Augmentation that provides audio-haptic feedback to users, generating impacts in response to users' simple step motions. Our prototype uses a floor-type impact display consisting of pressure sensors, which detect users' steps, and solenoids, which generate feedback through impact. Through a preliminary user study, we confirmed that the system can provide untrained users with the experience of tap dancing. This study serves as a case study that provides insight into how a reactive environment can affect the human capabilities of physical expression and the sensation experienced.
[ { "version": "v1", "created": "Sat, 1 Apr 2023 23:53:42 GMT" } ]
2023-04-04T00:00:00
[ [ "Sasaki", "Tomoya", "" ], [ "Okazaki", "Narin", "" ], [ "Yoshida", "Takatoshi", "" ], [ "Balandra", "Alfonso", "" ], [ "Kashino", "Zendai", "" ], [ "Inami", "Masahiko", "" ] ]
new_dataset
0.998316
2304.00460
Yibo Yan
Yibo Yan, Seth Frey, Amy Zhang, Vladimir Filkov, Likang Yin
GitHub OSS Governance File Dataset
5 pages, 1 figure, 1 table, to be published in MSR 2023 Data and Tool Showcase Track
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Open-source Software (OSS) has become a valuable resource in both industry and academia over the last few decades. Despite the innovative structures they develop to support the projects, OSS projects and their communities have complex needs and face risks such as getting abandoned. To manage the internal social dynamics and community evolution, OSS developer communities have started relying on written governance documents that assign roles and responsibilities to different community actors. To facilitate the study of the impact and effectiveness of formal governance documents on OSS projects and communities, we present a longitudinal dataset of 710 GitHub-hosted OSS projects with GOVERNANCE.MD governance files. This dataset includes all commits made to the repository, all issues and comments created on GitHub, and all revisions made to the governance file. We hope its availability will foster more research interest in studying how OSS communities govern their projects and the impact of governance files on communities.
[ { "version": "v1", "created": "Sun, 2 Apr 2023 06:07:00 GMT" } ]
2023-04-04T00:00:00
[ [ "Yan", "Yibo", "" ], [ "Frey", "Seth", "" ], [ "Zhang", "Amy", "" ], [ "Filkov", "Vladimir", "" ], [ "Yin", "Likang", "" ] ]
new_dataset
0.999417
2304.00467
Haiping Wang
Haiping Wang, Yuan Liu, Zhen Dong, Yulan Guo, Yu-Shen Liu, Wenping Wang, Bisheng Yang
Robust Multiview Point Cloud Registration with Reliable Pose Graph Initialization and History Reweighting
Accepted by CVPR 2023; Code at https://github.com/WHU-USI3DV/SGHR
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we present a new method for the multiview registration of point clouds. Previous multiview registration methods rely on exhaustive pairwise registration to construct a densely-connected pose graph and apply Iteratively Reweighted Least Square (IRLS) on the pose graph to compute the scan poses. However, constructing a densely-connected graph is time-consuming, and the resulting graph contains many outlier edges, which makes the subsequent IRLS struggle to find correct poses. To address the above problems, we first propose to use a neural network to estimate the overlap between scan pairs, which enables us to construct a sparse but reliable pose graph. Then, we design a novel history reweighting function in the IRLS scheme, which has strong robustness to outlier edges on the graph. In comparison with existing multiview registration methods, our method achieves 11% higher registration recall on the 3DMatch dataset and ~13% lower registration errors on the ScanNet dataset while reducing the required pairwise registrations by ~70%. Comprehensive ablation studies are conducted to demonstrate the effectiveness of our designs.
[ { "version": "v1", "created": "Sun, 2 Apr 2023 06:43:40 GMT" } ]
2023-04-04T00:00:00
[ [ "Wang", "Haiping", "" ], [ "Liu", "Yuan", "" ], [ "Dong", "Zhen", "" ], [ "Guo", "Yulan", "" ], [ "Liu", "Yu-Shen", "" ], [ "Wang", "Wenping", "" ], [ "Yang", "Bisheng", "" ] ]
new_dataset
0.995995
2304.00592
Cheng Deng
Cheng Deng, Bo Tong, Luoyi Fu, Jiaxin Ding, Dexing Cao, Xinbing Wang, Chenghu Zhou
PK-Chat: Pointer Network Guided Knowledge Driven Generative Dialogue Model
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the research of end-to-end dialogue systems, using real-world knowledge to generate natural, fluent, and human-like utterances with correct answers is crucial. However, domain-specific conversational dialogue systems may be incoherent and introduce erroneous external information to answer questions due to the out-of-vocabulary issue or the wrong knowledge from the parameters of the neural network. In this work, we propose PK-Chat, a Pointer network guided Knowledge-driven generative dialogue model, incorporating a unified pretrained language model and a pointer network over knowledge graphs. The words generated by PK-Chat in the dialogue are derived from the prediction of word lists and the direct prediction of the external knowledge graph knowledge. Moreover, based on the PK-Chat, a dialogue system is built for academic scenarios in the case of geosciences. Finally, an academic dialogue benchmark is constructed to evaluate the quality of dialogue systems in academic scenarios and the source code is available online.
[ { "version": "v1", "created": "Sun, 2 Apr 2023 18:23:13 GMT" } ]
2023-04-04T00:00:00
[ [ "Deng", "Cheng", "" ], [ "Tong", "Bo", "" ], [ "Fu", "Luoyi", "" ], [ "Ding", "Jiaxin", "" ], [ "Cao", "Dexing", "" ], [ "Wang", "Xinbing", "" ], [ "Zhou", "Chenghu", "" ] ]
new_dataset
0.958816
2304.00634
Dwip Dalal
Dwip Dalal, Vivek Srivastava, Mayank Singh
MMT: A Multilingual and Multi-Topic Indian Social Media Dataset
null
EACL Workshop C3NLP 2023
null
null
cs.CL cs.LG cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Social media plays a significant role in cross-cultural communication. A vast amount of this occurs in code-mixed and multilingual form, posing a significant challenge to Natural Language Processing (NLP) tools for processing such information, like language identification, topic modeling, and named-entity recognition. To address this, we introduce a large-scale multilingual and multi-topic dataset (MMT) collected from Twitter (1.7 million Tweets), encompassing 13 coarse-grained and 63 fine-grained topics in the Indian context. We further annotate a subset of 5,346 tweets from the MMT dataset with various Indian languages and their code-mixed counterparts. Also, we demonstrate that currently existing tools fail to capture the linguistic diversity in MMT on two downstream tasks, i.e., topic modeling and language identification. To facilitate future research, we will make the anonymized and annotated dataset available in the public domain.
[ { "version": "v1", "created": "Sun, 2 Apr 2023 21:39:00 GMT" } ]
2023-04-04T00:00:00
[ [ "Dalal", "Dwip", "" ], [ "Srivastava", "Vivek", "" ], [ "Singh", "Mayank", "" ] ]
new_dataset
0.999892
2304.00676
Zilin Huang
Zilin Huang, Sikai Chen, Yuzhuang Pian, Zihao Sheng, Soyoung Ahn, and David A. Noyce
CV2X-LOCA: Roadside Unit-Enabled Cooperative Localization Framework for Autonomous Vehicles
null
null
null
null
cs.RO cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An accurate and robust localization system is crucial for autonomous vehicles (AVs) to enable safe driving in urban scenes. While existing global navigation satellite system (GNSS)-based methods are effective at locating vehicles in open-sky regions, achieving high-accuracy positioning in urban canyons such as lower layers of multi-layer bridges, streets beside tall buildings, tunnels, etc., remains a challenge. In this paper, we investigate the potential of cellular-vehicle-to-everything (C-V2X) wireless communications in improving the localization performance of AVs under GNSS-denied environments. Specifically, we propose the first roadside unit (RSU)-enabled cooperative localization framework, namely CV2X-LOCA, that only uses C-V2X channel state information to achieve lane-level positioning accuracy. CV2X-LOCA consists of four key parts: data processing module, coarse positioning module, environment parameter correcting module, and vehicle trajectory filtering module. These modules jointly handle challenges present in dynamic C-V2X networks. Extensive simulation and field experiments show that CV2X-LOCA achieves state-of-the-art performance for vehicle localization even under noisy conditions with high-speed movement and sparse RSUs coverage environments. The study results also provide insights into future investment decisions for transportation agencies regarding deploying RSUs cost-effectively.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 01:35:54 GMT" } ]
2023-04-04T00:00:00
[ [ "Huang", "Zilin", "" ], [ "Chen", "Sikai", "" ], [ "Pian", "Yuzhuang", "" ], [ "Sheng", "Zihao", "" ], [ "Ahn", "Soyoung", "" ], [ "Noyce", "David A.", "" ] ]
new_dataset
0.999678
2304.00717
Ziqing Yang
Xin Yao, Ziqing Yang, Yiming Cui, Shijin Wang
MiniRBT: A Two-stage Distilled Small Chinese Pre-trained Model
4 pages
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In natural language processing, pre-trained language models have become essential infrastructures. However, these models often suffer from issues such as large size, long inference time, and challenging deployment. Moreover, most mainstream pre-trained models focus on English, and there are insufficient studies on small Chinese pre-trained models. In this paper, we introduce MiniRBT, a small Chinese pre-trained model that aims to advance research in Chinese natural language processing. MiniRBT employs a narrow and deep student model and incorporates whole word masking and two-stage distillation during pre-training to make it well-suited for most downstream tasks. Our experiments on machine reading comprehension and text classification tasks reveal that MiniRBT achieves 94% performance relative to RoBERTa, while providing a 6.8x speedup, demonstrating its effectiveness and efficiency.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 04:45:57 GMT" } ]
2023-04-04T00:00:00
[ [ "Yao", "Xin", "" ], [ "Yang", "Ziqing", "" ], [ "Cui", "Yiming", "" ], [ "Wang", "Shijin", "" ] ]
new_dataset
0.998692
2304.00736
Linhan Yang
Linhan Yang, Bidan Huang, Qingbiao Li, Ya-Yen Tsai, Wang Wei Lee, Chaoyang Song, Jia Pan
TacGNN:Learning Tactile-based In-hand Manipulation with a Blind Robot
8 pages, 4 figures, accepted by RAL
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a novel framework for tactile-based dexterous manipulation learning with a blind anthropomorphic robotic hand, i.e., without visual sensing. First, object-related states were extracted from the raw tactile signals by a graph-based perception model - TacGNN. The resulting tactile features were then utilized in the policy learning of an in-hand manipulation task in the second stage. This method was examined by a Baoding ball task - simultaneously manipulating two spheres around each other by 180 degrees in hand. We conducted experiments on object state prediction and in-hand manipulation using a reinforcement learning algorithm (PPO). Results show that TacGNN is effective in predicting object-related states during manipulation, decreasing the prediction RMSE to 0.096 cm compared to other methods, such as MLP, CNN, and GCN. Finally, the robot hand could finish an in-hand manipulation task relying solely on the robot's own perception - tactile sensing and proprioception. In addition, our methods are tested on three tasks with different difficulty levels and transferred to the real robot without further training.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 06:15:46 GMT" } ]
2023-04-04T00:00:00
[ [ "Yang", "Linhan", "" ], [ "Huang", "Bidan", "" ], [ "Li", "Qingbiao", "" ], [ "Tsai", "Ya-Yen", "" ], [ "Lee", "Wang Wei", "" ], [ "Song", "Chaoyang", "" ], [ "Pan", "Jia", "" ] ]
new_dataset
0.978344
2304.00757
Khalid Alnujaidi
Khalid Alnujaidi, Ghada Alhabib, Abdulaziz Alodhieb
Spot-the-Camel: Computer Vision for Safer Roads
arXiv admin note: text overlap with arXiv:2301.09339
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
As the population grows and more land is being used for urbanization, ecosystems are disrupted by our roads and cars. This expansion of infrastructure cuts through wildlife territories, leading to many instances of Wildlife-Vehicle Collision (WVC). These instances of WVC are a global issue with a significant socio-economic impact, resulting in billions of dollars in property damage and, at times, fatalities for vehicle occupants. In Saudi Arabia, this issue is similar, with instances of Camel-Vehicle Collision (CVC) being particularly deadly due to the large size of camels, which results in a 25% fatality rate [1]. The focus of this work is to test different object detection models on the task of detecting camels on the road. The Deep Learning (DL) object detection models used in the experiments are: CenterNet, EfficientDet, Faster R-CNN, SSD, and YOLOv8. Results of the experiments show that YOLOv8 performed the best in terms of accuracy and was the most efficient in training. In the future, the plan is to expand on this work by developing a system to make countryside roads safer.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 07:16:14 GMT" } ]
2023-04-04T00:00:00
[ [ "Alnujaidi", "Khalid", "" ], [ "Alhabib", "Ghada", "" ], [ "Alodhieb", "Abdulaziz", "" ] ]
new_dataset
0.999609
2304.00763
Jerome White
Jerome White, Chandan Agrawal, Anmol Ojha, Apoorv Agnihotri, Makkunda Sharma, Jigar Doshi
BOLLWM: A real-world dataset for bollworm pest monitoring from cotton fields in India
null
ICLR 2023 workshop on Practical Machine Learning for Developing Countries
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a dataset of agricultural pest images captured over five years by thousands of smallholder farmers and farming extension workers across India. The dataset has been used to support a mobile application that relies on artificial intelligence to assist farmers with pest management decisions. The dataset was created through a mix of organized data collection and less controlled mobile application usage. This makes the dataset unique within the pest detection community, exhibiting a number of characteristics that place it closer to other non-agricultural object detection datasets. This not only makes the dataset applicable to future pest management applications but also opens the door to a wide variety of other research agendas.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 07:31:30 GMT" } ]
2023-04-04T00:00:00
[ [ "White", "Jerome", "" ], [ "Agrawal", "Chandan", "" ], [ "Ojha", "Anmol", "" ], [ "Agnihotri", "Apoorv", "" ], [ "Sharma", "Makkunda", "" ], [ "Doshi", "Jigar", "" ] ]
new_dataset
0.9999
2304.00804
Michael Maravgakis
Despina-Ekaterini Argiropoulos, Dimitrios Papageorgiou, Michael Maravgakis, Drosakis Drosakis and Panos Trahanias
Two-layer adaptive trajectory tracking controller for quadruped robots on slippery terrains
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Task space trajectory tracking for quadruped robots plays a crucial role in achieving dexterous maneuvers in unstructured environments. To fulfill the control objective, the robot should apply forces through the contact of the legs with the supporting surface, while maintaining its stability and controllability. In order to ensure the operation of the robot under these conditions, one has to account for the possibility of unstable contact of the legs that arises when the robot operates on partially or globally slippery terrains. In this work, we propose an adaptive trajectory tracking controller for quadruped robots, which involves two prioritized layers of adaptation for avoiding possible slippage of one or multiple legs. The adaptive framework is evaluated through simulations and validated through experiments.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 08:53:35 GMT" } ]
2023-04-04T00:00:00
[ [ "Argiropoulos", "Despina-Ekaterini", "" ], [ "Papageorgiou", "Dimitrios", "" ], [ "Maravgakis", "Michael", "" ], [ "Drosakis", "Drosakis", "" ], [ "Trahanias", "Panos", "" ] ]
new_dataset
0.985737
2304.00827
Qichao Ying
Yangming Zhou, Yuzhou Yang, Qichao Ying, Zhenxing Qian and Xinpeng Zhang
Multi-modal Fake News Detection on Social Media via Multi-grained Information Fusion
Accepted by ICMR 2023
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by/4.0/
The easy sharing of multimedia content on social media has caused a rapid dissemination of fake news, which threatens society's stability and security. Therefore, fake news detection has garnered extensive research interest in the field of social forensics. Current methods primarily concentrate on the integration of textual and visual features but fail to effectively exploit multi-modal information at both fine-grained and coarse-grained levels. Furthermore, they suffer from an ambiguity problem due to a lack of correlation between modalities or a contradiction between the decisions made by each modality. To overcome these challenges, we present a Multi-grained Multi-modal Fusion Network (MMFN) for fake news detection. Inspired by the multi-grained process of human assessment of news authenticity, we respectively employ two Transformer-based pre-trained models to encode token-level features from text and images. The multi-modal module fuses fine-grained features, taking into account coarse-grained features encoded by the CLIP encoder. To address the ambiguity problem, we design uni-modal branches with similarity-based weighting to adaptively adjust the use of multi-modal features. Experimental results demonstrate that the proposed framework outperforms state-of-the-art methods on three prevalent datasets.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 09:13:59 GMT" } ]
2023-04-04T00:00:00
[ [ "Zhou", "Yangming", "" ], [ "Yang", "Yuzhou", "" ], [ "Ying", "Qichao", "" ], [ "Qian", "Zhenxing", "" ], [ "Zhang", "Xinpeng", "" ] ]
new_dataset
0.99107
2304.00869
Iakovos Evdaimon
Iakovos Evdaimon, Hadi Abdine, Christos Xypolopoulos, Stamatis Outsios, Michalis Vazirgiannis, Giorgos Stamou
GreekBART: The First Pretrained Greek Sequence-to-Sequence Model
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The era of transfer learning has revolutionized the fields of Computer Vision and Natural Language Processing, bringing powerful pretrained models with exceptional performance across a variety of tasks. Specifically, Natural Language Processing tasks have been dominated by transformer-based language models. In Natural Language Inference and Natural Language Generation tasks, the BERT model and its variants, as well as the GPT model and its successors, demonstrated exemplary performance. However, the majority of these models are pretrained and assessed primarily for the English language or on a multilingual corpus. In this paper, we introduce GreekBART, the first Seq2Seq model based on BART-base architecture and pretrained on a large-scale Greek corpus. We evaluate and compare GreekBART against BART-random, Greek-BERT, and XLM-R on a variety of discriminative tasks. In addition, we examine its performance on two NLG tasks from GreekSUM, a newly introduced summarization dataset for the Greek language. The model, the code, and the new summarization dataset will be publicly available.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 10:48:51 GMT" } ]
2023-04-04T00:00:00
[ [ "Evdaimon", "Iakovos", "" ], [ "Abdine", "Hadi", "" ], [ "Xypolopoulos", "Christos", "" ], [ "Outsios", "Stamatis", "" ], [ "Vazirgiannis", "Michalis", "" ], [ "Stamou", "Giorgos", "" ] ]
new_dataset
0.99702
2304.00892
Brahim Tamadazte
Maxime Adjigble and Brahim Tamadazte and Cristiana de Farias and Rustam Stolkin and Naresh Marturi
Asservissement visuel 3D direct dans le domaine spectral
8 pages, 5 figures
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This paper presents a direct 3D visual servo scheme for the automatic alignment of point clouds (respectively, objects) using visual information in the spectral domain. Specifically, we propose an alignment method for 3D models/point clouds that works by estimating the global transformation between a reference point cloud and a target point cloud using harmonic domain data analysis. A 3D discrete Fourier transform (DFT) in $\mathbb{R}^3$ is used for translation estimation and real spherical harmonics in $SO(3)$ are used for rotation estimation. This approach allows us to derive a decoupled visual servo controller with 6 degrees of freedom. We then show how this approach can be used as a controller for a robotic arm to perform a positioning task. Unlike existing 3D visual servo methods, our method works well with partial point clouds and in cases of large initial transformations between the initial and desired position. Additionally, using spectral data (instead of spatial data) for the transformation estimation makes our method robust to sensor-induced noise and partial occlusions. Our method has been successfully validated experimentally on point clouds obtained with a depth camera mounted on a robotic arm.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 11:28:02 GMT" } ]
2023-04-04T00:00:00
[ [ "Adjigble", "Maxime", "" ], [ "Tamadazte", "Brahim", "" ], [ "de Farias", "Cristiana", "" ], [ "Stolkin", "Rustam", "" ], [ "Marturi", "Naresh", "" ] ]
new_dataset
0.997978
2304.00906
Dan Saattrup Nielsen
Dan Saattrup Nielsen
ScandEval: A Benchmark for Scandinavian Natural Language Processing
17 pages, 11 figures, camera-ready NoDaLiDa 2023 submission
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper introduces a Scandinavian benchmarking platform, ScandEval, which can benchmark any pretrained model on four different tasks in the Scandinavian languages. The datasets used in two of the tasks, linguistic acceptability and question answering, are new. We develop and release a Python package and command-line interface, scandeval, which can benchmark any model that has been uploaded to the Hugging Face Hub, with reproducible results. Using this package, we benchmark more than 100 Scandinavian or multilingual models and present the results of these in an interactive online leaderboard, as well as provide an analysis of the results. The analysis shows that there is substantial cross-lingual transfer among the Mainland Scandinavian languages (Danish, Swedish and Norwegian), with limited cross-lingual transfer between the group of Mainland Scandinavian languages and the group of Insular Scandinavian languages (Icelandic and Faroese). The benchmarking results also show that the investment in language technology in Norway, Sweden and Denmark has led to language models that outperform massively multilingual models such as XLM-RoBERTa and mDeBERTaV3. We release the source code for both the package and leaderboard.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 11:51:46 GMT" } ]
2023-04-04T00:00:00
[ [ "Nielsen", "Dan Saattrup", "" ] ]
new_dataset
0.999833
2304.00913
Ankit Yadav
Ankit Yadav, Shubham Chandel, Sushant Chatufale and Anil Bandhakavi
LAHM : Large Annotated Dataset for Multi-Domain and Multilingual Hate Speech Identification
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Current research on hate speech analysis is typically oriented towards monolingual and single classification tasks. In this paper, we present a new multilingual hate speech analysis dataset for English, Hindi, Arabic, French, German and Spanish languages for multiple domains across hate speech - Abuse, Racism, Sexism, Religious Hate and Extremism. To the best of our knowledge, this paper is the first to address the problem of identifying various types of hate speech in these five wide domains in these six languages. In this work, we describe how we created the dataset, created annotations at high level and low level for different domains and how we use it to test the current state-of-the-art multilingual and multitask learning approaches. We evaluate our dataset in various monolingual, cross-lingual and machine translation classification settings and compare it against open source English datasets that we aggregated and merged for this task. Then we discuss how this approach can be used to create large scale hate-speech datasets and how to leverage our annotations in order to improve hate speech detection and classification in general.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 12:03:45 GMT" } ]
2023-04-04T00:00:00
[ [ "Yadav", "Ankit", "" ], [ "Chandel", "Shubham", "" ], [ "Chatufale", "Sushant", "" ], [ "Bandhakavi", "Anil", "" ] ]
new_dataset
0.999873
2304.00946
Xiang Wang
Xiang Wang, Shiwei Zhang, Zhiwu Qing, Changxin Gao, Yingya Zhang, Deli Zhao, Nong Sang
MoLo: Motion-augmented Long-short Contrastive Learning for Few-shot Action Recognition
Accepted by CVPR-2023. Code: https://github.com/alibaba-mmai-research/MoLo
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current state-of-the-art approaches for few-shot action recognition achieve promising performance by conducting frame-level matching on learned visual features. However, they generally suffer from two limitations: i) the matching procedure between local frames tends to be inaccurate due to the lack of guidance to force long-range temporal perception; ii) explicit motion learning is usually ignored, leading to partial information loss. To address these issues, we develop a Motion-augmented Long-short Contrastive Learning (MoLo) method that contains two crucial components, including a long-short contrastive objective and a motion autodecoder. Specifically, the long-short contrastive objective is to endow local frame features with long-form temporal awareness by maximizing their agreement with the global token of videos belonging to the same class. The motion autodecoder is a lightweight architecture to reconstruct pixel motions from the differential features, which explicitly embeds the network with motion dynamics. By this means, MoLo can simultaneously learn long-range temporal context and motion cues for comprehensive few-shot matching. To demonstrate the effectiveness, we evaluate MoLo on five standard benchmarks, and the results show that MoLo favorably outperforms recent advanced methods. The source code is available at https://github.com/alibaba-mmai-research/MoLo.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 13:09:39 GMT" } ]
2023-04-04T00:00:00
[ [ "Wang", "Xiang", "" ], [ "Zhang", "Shiwei", "" ], [ "Qing", "Zhiwu", "" ], [ "Gao", "Changxin", "" ], [ "Zhang", "Yingya", "" ], [ "Zhao", "Deli", "" ], [ "Sang", "Nong", "" ] ]
new_dataset
0.999335
2304.00954
Fnu Aryan
Aryan, Bowen Li, Sebastian Scherer, Yun-Jou Lin, Chen Wang
AirLoc: Object-based Indoor Relocalization
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Indoor relocalization is vital for both robotic tasks like autonomous exploration and civil applications such as navigation with a cell phone in a shopping mall. Some previous approaches adopt geometrical information such as key-point features or local textures to carry out indoor relocalization, but they either easily fail in an environment with visually similar scenes or require many database images. Inspired by the fact that humans often remember places by recognizing unique landmarks, we resort to objects, which are more informative than geometry elements. In this work, we propose a simple yet effective object-based indoor relocalization approach, dubbed AirLoc. To overcome the critical challenges of object reidentification and remembering object relationships, we extract object-wise appearance embedding and inter-object geometric relationships. The geometry and appearance features are integrated to generate cumulative scene features. This results in a robust, accurate, and portable indoor relocalization system, which outperforms the state-of-the-art methods in room-level relocalization by 9.5% of PR-AUC and 7% of accuracy. In addition to exhaustive evaluation, we also carry out real-world tests, where AirLoc shows robustness in challenges like severe occlusion, perceptual aliasing, viewpoint shift, and deformation.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 13:16:47 GMT" } ]
2023-04-04T00:00:00
[ [ "Aryan", "", "" ], [ "Li", "Bowen", "" ], [ "Scherer", "Sebastian", "" ], [ "Lin", "Yun-Jou", "" ], [ "Wang", "Chen", "" ] ]
new_dataset
0.998891
2304.00979
Xinwei Liu
Xinwei Liu, Kiran Raja, Renfang Wang, Hong Qiu, Hucheng Wu, Dechao Sun, Qiguang Zheng, Nian Liu, Xiaoxia Wang, Gehang Huang, Raghavendra Ramachandra, Christoph Busch
A Latent Fingerprint in the Wild Database
Submitted to IEEE Transactions on Information Forensics and Security (under review)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Latent fingerprints are among the most important and widely used evidence in crime scenes, digital forensics and law enforcement worldwide. Despite the number of advancements reported in recent works, we note that significant open issues such as independent benchmarking and lack of large-scale evaluation databases for improving the algorithms are inadequately addressed. The available databases are mostly of a semi-public nature and lack in-the-wild acquisition and post-processing pipelines. Moreover, they do not represent a realistic capture scenario similar to real crime scenes for benchmarking the robustness of the algorithms. Further, existing databases for latent fingerprint recognition do not have a large number of unique subjects/fingerprint instances or do not provide ground truth/reference fingerprint images to conduct a cross-comparison against the latent. In this paper, we introduce a new wild large-scale latent fingerprint database that includes five different acquisition scenarios: reference fingerprints from (1) optical and (2) capacitive sensors, (3) smartphone fingerprints, latent fingerprints captured from (4) wall surface, (5) iPad surface, and (6) aluminium foil surface. The new database consists of 1,318 unique fingerprint instances captured in all the above-mentioned settings. A total of 2,636 reference fingerprints from optical and capacitive sensors, 1,318 fingerphotos from smartphones, and 9,224 latent fingerprints from each of the 132 subjects were provided in this work. The dataset is constructed considering various age groups, equal representations of genders and backgrounds. In addition, we provide an extensive set of analyses of various subset evaluations to highlight open challenges for future directions in latent fingerprint recognition research.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 13:47:38 GMT" } ]
2023-04-04T00:00:00
[ [ "Liu", "Xinwei", "" ], [ "Raja", "Kiran", "" ], [ "Wang", "Renfang", "" ], [ "Qiu", "Hong", "" ], [ "Wu", "Hucheng", "" ], [ "Sun", "Dechao", "" ], [ "Zheng", "Qiguang", "" ], [ "Liu", "Nian", "" ], [ "Wang", "Xiaoxia", "" ], [ "Huang", "Gehang", "" ], [ "Ramachandra", "Raghavendra", "" ], [ "Busch", "Christoph", "" ] ]
new_dataset
0.995288
2304.01003
Ivano Lauriola
Stefano Campese, Ivano Lauriola, Alessandro Moschitti
QUADRo: Dataset and Models for QUestion-Answer Database Retrieval
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
An effective paradigm for building Automated Question Answering systems is the re-use of previously answered questions, e.g., for FAQs or forum applications. Given a database (DB) of question/answer (q/a) pairs, it is possible to answer a target question by scanning the DB for similar questions. In this paper, we scale this approach to open domain, making it competitive with other standard methods, e.g., unstructured document or graph based. For this purpose, we (i) build a large scale DB of 6.3M q/a pairs, using public questions, (ii) design a new system based on neural IR and a q/a pair reranker, and (iii) construct training and test data to perform comparative experiments with our models. We demonstrate that Transformer-based models using (q,a) pairs outperform models only based on question representation, for both neural search and reranking. Additionally, we show that our DB-based approach is competitive with Web-based methods, i.e., a QA system built on top of the BING search engine, demonstrating the challenge of finding relevant information. Finally, we make our data and models available for future research.
[ { "version": "v1", "created": "Thu, 30 Mar 2023 00:42:07 GMT" } ]
2023-04-04T00:00:00
[ [ "Campese", "Stefano", "" ], [ "Lauriola", "Ivano", "" ], [ "Moschitti", "Alessandro", "" ] ]
new_dataset
0.999435
2304.01073
Mona Wang
Watson Jia, Mona Wang, Liang Wang, and Prateek Mittal
QUICstep: Circumventing QUIC-based Censorship
null
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Governments around the world limit free and open communication on the Internet through censorship. To reliably identify and block access to certain web domains, censors inspect the plaintext TLS SNI field sent in TLS handshakes. With QUIC rapidly displacing TCP as the dominant transport-layer protocol on the web, censorship regimes have already begun prosecuting network traffic delivered over QUIC. With QUIC censorship poised to expand, censorship circumvention tools must similarly adapt. We present QUICstep, a censorship-resilient, application-agnostic, performant, and easy-to-implement approach to censorship circumvention in the QUIC era. QUICstep circumvents TLS SNI censorship by conducting a QUIC-TLS handshake over an encrypted tunnel to hide the SNI field from censors and performs connection migration to resume the QUIC session in plain sight of the censor. Our evaluation finds that QUICstep successfully establishes QUIC sessions in the presence of a proof-of-concept censor with minimal latency overhead.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 15:31:58 GMT" } ]
2023-04-04T00:00:00
[ [ "Jia", "Watson", "" ], [ "Wang", "Mona", "" ], [ "Wang", "Liang", "" ], [ "Mittal", "Prateek", "" ] ]
new_dataset
0.996455
2304.01080
Alejandro Linares-Barranco
Antonio Rios-Navarro, Enrique Pi\~nero-Fuentes, Salvador Canas-Moreno, Aqib Javed, Jin Harkin, Alejandro Linares-Barranco
LIPSFUS: A neuromorphic dataset for audio-visual sensory fusion of lip reading
Submitted to ISCAS2023, 4 pages, plus references, github link provided
null
null
null
cs.SD cs.RO eess.AS
http://creativecommons.org/licenses/by/4.0/
This paper presents a sensory fusion neuromorphic dataset collected with precise temporal synchronization using a set of Address-Event-Representation sensors and tools. The target application is the lip reading of several keywords for different machine learning applications, such as digits, robotic commands, and auxiliary rich phonetic short words. The dataset is enlarged with a spiking version of an audio-visual lip reading dataset collected with frame-based cameras. LIPSFUS is publicly available and it has been validated with a deep learning architecture for audio and visual classification. It is intended for sensory fusion architectures based on both artificial and spiking neural network algorithms.
[ { "version": "v1", "created": "Tue, 28 Mar 2023 12:27:43 GMT" } ]
2023-04-04T00:00:00
[ [ "Rios-Navarro", "Antonio", "" ], [ "Piñero-Fuentes", "Enrique", "" ], [ "Canas-Moreno", "Salvador", "" ], [ "Javed", "Aqib", "" ], [ "Harkin", "Jin", "" ], [ "Linares-Barranco", "Alejandro", "" ] ]
new_dataset
0.999793
2304.01102
Julian Aron Prenner
Julian Aron Prenner and Romain Robbes
RunBugRun -- An Executable Dataset for Automated Program Repair
null
null
null
null
cs.SE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, a transition to data-driven techniques, in particular deep neural networks, can be observed in Automated Program Repair (APR). This entails training on hundreds of thousands or even millions of non-executable code fragments. We would like to bring more attention to an aspect of code often neglected in Neural Program Repair (NPR), namely its execution. Code execution has several significant advantages. It allows for test-based evaluation of candidate fixes and can provide valuable information to aid repair. In this work, we present a fully executable dataset of 450,000 small buggy/fixed program pairs originally submitted to programming competition websites written in eight different programming languages. Along with the dataset, we provide infrastructure to compile, safely execute and test programs as well as fine-grained bug-type labels. To give a point of reference, we provide basic evaluation results for two baselines, one based on a generate-and-validate approach and one on deep learning. With this dataset we follow several goals: we want to lift Neural Program Repair beyond fully static code representations, foster the use of execution-based features and, by including several different languages, counterbalance the predominance of Java in the current landscape of APR datasets and benchmarks.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 16:02:00 GMT" } ]
2023-04-04T00:00:00
[ [ "Prenner", "Julian Aron", "" ], [ "Robbes", "Romain", "" ] ]
new_dataset
0.997328
2304.01179
Nadav Schneider
Nadav Schneider, Shimon Shouei, Saleem Ghantous, Elad Feldman
Hate Speech Targets Detection in Parler using BERT
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Online social networks have become a fundamental component of our everyday life. Unfortunately, these platforms are also a stage for hate speech. Popular social networks have established rules against hate speech. Consequently, social networks like Parler and Gab, advocating and claiming to be free speech platforms, have evolved. These platforms have become a district for hate speech against diverse targets. In this paper, we present a pipeline for detecting hate speech and its targets, and we use it to derive the distribution of hate speech targets in Parler. The pipeline consists of two models: one for hate speech detection and one for target classification, both based on BERT with Back-Translation and data pre-processing for improved results. The source code used in this work, as well as other relevant sources, are available at: https://github.com/NadavSc/HateRecognition.git
[ { "version": "v1", "created": "Mon, 3 Apr 2023 17:49:04 GMT" } ]
2023-04-04T00:00:00
[ [ "Schneider", "Nadav", "" ], [ "Shouei", "Shimon", "" ], [ "Ghantous", "Saleem", "" ], [ "Feldman", "Elad", "" ] ]
new_dataset
0.999262
2304.01186
Yue Ma
Yue Ma, Yingqing He, Xiaodong Cun, Xintao Wang, Ying Shan, Xiu Li, Qifeng Chen
Follow Your Pose: Pose-Guided Text-to-Video Generation using Pose-Free Videos
Project page: https://follow-your-pose.github.io/; Github repository: https://github.com/mayuelala/FollowYourPose
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generating text-editable and pose-controllable character videos is in high demand for creating various digital humans. Nevertheless, this task has been restricted by the absence of a comprehensive dataset featuring paired video-pose captions and of generative prior models for videos. In this work, we design a novel two-stage training scheme that can utilize easily obtained datasets (i.e., image-pose pairs and pose-free videos) and a pre-trained text-to-image (T2I) model to obtain pose-controllable character videos. Specifically, in the first stage, only the keypoint-image pairs are used for controllable text-to-image generation. We learn a zero-initialized convolutional encoder to encode the pose information. In the second stage, we finetune the motion of the above network via a pose-free video dataset by adding learnable temporal self-attention and reformed cross-frame self-attention blocks. Powered by our new designs, our method successfully generates continuously pose-controllable character videos while keeping the editing and concept composition ability of the pre-trained T2I model. The code and models will be made publicly available.
[ { "version": "v1", "created": "Mon, 3 Apr 2023 17:55:14 GMT" } ]
2023-04-04T00:00:00
[ [ "Ma", "Yue", "" ], [ "He", "Yingqing", "" ], [ "Cun", "Xiaodong", "" ], [ "Wang", "Xintao", "" ], [ "Shan", "Ying", "" ], [ "Li", "Xiu", "" ], [ "Chen", "Qifeng", "" ] ]
new_dataset
0.99892
2304.01194
Akshay Dudhane
Akshay Dudhane, Syed Waqas Zamir, Salman Khan, Fahad Shahbaz Khan, Ming-Hsuan Yang
Burstormer: Burst Image Restoration and Enhancement Transformer
Accepted at CVPR 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
On a shutter press, modern handheld cameras capture multiple images in rapid succession and merge them to generate a single image. However, individual frames in a burst are misaligned due to inevitable motions and contain multiple degradations. The challenge is to properly align the successive image shots and merge their complementary information to achieve high-quality outputs. Towards this direction, we propose Burstormer: a novel transformer-based architecture for burst image restoration and enhancement. In comparison to existing works, our approach exploits multi-scale local and non-local features to achieve improved alignment and feature fusion. Our key idea is to enable inter-frame communication in the burst neighborhoods for information aggregation and progressive fusion while modeling the burst-wide context. However, the input burst frames need to be properly aligned before fusing their information. Therefore, we propose an enhanced deformable alignment module for aligning burst features with regard to the reference frame. Unlike existing methods, the proposed alignment module not only aligns burst features but also exchanges feature information and maintains focused communication with the reference frame through the proposed reference-based feature enrichment mechanism, which facilitates handling complex motions. After multi-level alignment and enrichment, we re-emphasize inter-frame communication within the burst using a cyclic burst sampling module. Finally, the inter-frame information is aggregated using the proposed burst feature fusion module followed by progressive upsampling. Our Burstormer outperforms state-of-the-art methods on burst super-resolution, burst denoising and burst low-light enhancement. Our codes and pretrained models are available at https://github.com/akshaydudhane16/Burstormer
[ { "version": "v1", "created": "Mon, 3 Apr 2023 17:58:44 GMT" } ]
2023-04-04T00:00:00
[ [ "Dudhane", "Akshay", "" ], [ "Zamir", "Syed Waqas", "" ], [ "Khan", "Salman", "" ], [ "Khan", "Fahad Shahbaz", "" ], [ "Yang", "Ming-Hsuan", "" ] ]
new_dataset
0.99884
2101.00784
Zekun Wang
Zekun Wang, Pengwei Wang, Peter C. Louis, Lee E. Wheless, Yuankai Huo
WearMask: Fast In-browser Face Mask Detection with Serverless Edge Computing for COVID-19
null
Electronic Imaging, 2023, pp 229-1 - 229-6
10.2352/EI.2023.35.11.HPCI-229
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The COVID-19 epidemic has been a significant healthcare challenge in the United States. According to the Centers for Disease Control and Prevention (CDC), COVID-19 infection is transmitted predominantly by respiratory droplets generated when people breathe, talk, cough, or sneeze. Wearing a mask is the primary, effective, and convenient method of blocking 80% of all respiratory infections. Therefore, many face mask detection and monitoring systems have been developed to provide effective supervision for hospitals, airports, public transportation, sports venues, and retail locations. However, the current commercial face mask detection systems are typically bundled with specific software or hardware, impeding public accessibility. In this paper, we propose an in-browser serverless edge-computing based face mask detection solution, called Web-based efficient AI recognition of masks (WearMask), which can be deployed on any common devices (e.g., cell phones, tablets, computers) that have internet connections using web browsers, without installing any software. The serverless edge-computing design minimizes the extra hardware costs (e.g., specific devices or cloud computing servers). The contribution of the proposed method is to provide a holistic edge-computing framework integrating (1) deep learning models (YOLO), (2) a high-performance neural network inference computing framework (NCNN), and (3) a stack-based virtual machine (WebAssembly). For end-users, our web-based solution has the advantages of (1) serverless edge-computing design with minimal device limitation and privacy risk, (2) installation-free deployment, (3) low computing requirements, and (4) high detection speed. Our WearMask application has been launched with public access at facemask-detection.com.
[ { "version": "v1", "created": "Mon, 4 Jan 2021 05:50:48 GMT" } ]
2023-04-03T00:00:00
[ [ "Wang", "Zekun", "" ], [ "Wang", "Pengwei", "" ], [ "Louis", "Peter C.", "" ], [ "Wheless", "Lee E.", "" ], [ "Huo", "Yuankai", "" ] ]
new_dataset
0.999706
2203.00806
Simon Le Cleac'h
Taylor A. Howell and Simon Le Cleac'h and Jan Br\"udigam and J. Zico Kolter and Mac Schwager and Zachary Manchester
Dojo: A Differentiable Physics Engine for Robotics
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Dojo, a differentiable physics engine for robotics that prioritizes stable simulation, accurate contact physics, and differentiability with respect to states, actions, and system parameters. Dojo achieves stable simulation at low sample rates and conserves energy and momentum by employing a variational integrator. A nonlinear complementarity problem with second-order cones for friction models hard contact, and is reliably solved using a custom primal-dual interior-point method. Special properties of the interior-point method are exploited using implicit differentiation to efficiently compute smooth gradients that provide useful information through contact events. We demonstrate Dojo with a number of examples, including planning, policy optimization, and system identification, that showcase the engine's unique ability to simulate hard contact while providing smooth, analytic gradients.
[ { "version": "v1", "created": "Wed, 2 Mar 2022 00:56:23 GMT" }, { "version": "v2", "created": "Thu, 3 Mar 2022 06:12:42 GMT" }, { "version": "v3", "created": "Mon, 27 Jun 2022 18:09:13 GMT" }, { "version": "v4", "created": "Fri, 31 Mar 2023 01:31:26 GMT" } ]
2023-04-03T00:00:00
[ [ "Howell", "Taylor A.", "" ], [ "Cleac'h", "Simon Le", "" ], [ "Brüdigam", "Jan", "" ], [ "Kolter", "J. Zico", "" ], [ "Schwager", "Mac", "" ], [ "Manchester", "Zachary", "" ] ]
new_dataset
0.992743
2203.16799
Sreyan Ghosh
Sreyan Ghosh and S Ramaneswaran and Utkarsh Tyagi and Harshvardhan Srivastava and Samden Lepcha and S Sakshi and Dinesh Manocha
M-MELD: A Multilingual Multi-Party Dataset for Emotion Recognition in Conversations
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Expression of emotions is a crucial part of daily human communication. Emotion recognition in conversations (ERC) is an emerging field of study, where the primary task is to identify the emotion behind each utterance in a conversation. Though a lot of work has been done on ERC in the past, these works only focus on ERC in the English language, thereby ignoring any other languages. In this paper, we present Multilingual MELD (M-MELD), where we extend the Multimodal EmotionLines Dataset (MELD) (Poria et al., 2018) to 4 other languages beyond English, namely Greek, Polish, French, and Spanish. Beyond just establishing strong baselines for all of these 4 languages, we also propose a novel architecture, DiscLSTM, that uses both sequential and conversational discourse context in a conversational dialogue for ERC. Our proposed approach is computationally efficient, can transfer across languages using just a cross-lingual encoder, and achieves better performance than most uni-modal text approaches in the literature on both MELD and M-MELD. We make our data and code publicly available on GitHub.
[ { "version": "v1", "created": "Thu, 31 Mar 2022 05:07:16 GMT" }, { "version": "v2", "created": "Fri, 1 Apr 2022 04:38:19 GMT" }, { "version": "v3", "created": "Tue, 8 Nov 2022 21:07:06 GMT" }, { "version": "v4", "created": "Fri, 31 Mar 2023 13:25:05 GMT" } ]
2023-04-03T00:00:00
[ [ "Ghosh", "Sreyan", "" ], [ "Ramaneswaran", "S", "" ], [ "Tyagi", "Utkarsh", "" ], [ "Srivastava", "Harshvardhan", "" ], [ "Lepcha", "Samden", "" ], [ "Sakshi", "S", "" ], [ "Manocha", "Dinesh", "" ] ]
new_dataset
0.999784
2205.13489
Wang Zhihua
Zhihua Wang, Keshuo Xu, Yang Yang, Jianlei Dong, Shuhang Gu, Lihao Xu, Yuming Fang, and Kede Ma
Measuring Perceptual Color Differences of Smartphone Photographs
10 figures, 8 tables, 14 pages
null
null
null
cs.CV cs.GR eess.IV
http://creativecommons.org/licenses/by/4.0/
Measuring perceptual color differences (CDs) is of great importance in modern smartphone photography. Despite the long history, most CD measures have been constrained by psychophysical data of homogeneous color patches or a limited number of simplistic natural photographic images. It is thus questionable whether existing CD measures generalize in the age of smartphone photography characterized by greater content complexities and learning-based image signal processors. In this paper, we put together so far the largest image dataset for perceptual CD assessment, in which the photographic images are 1) captured by six flagship smartphones, 2) altered by Photoshop, 3) post-processed by built-in filters of the smartphones, and 4) reproduced with incorrect color profiles. We then conduct a large-scale psychophysical experiment to gather perceptual CDs of 30,000 image pairs in a carefully controlled laboratory environment. Based on the newly established dataset, we make one of the first attempts to construct an end-to-end learnable CD formula based on a lightweight neural network, as a generalization of several previous metrics. Extensive experiments demonstrate that the optimized formula outperforms 33 existing CD measures by a large margin, offers reasonable local CD maps without the use of dense supervision, generalizes well to homogeneous color patch data, and empirically behaves as a proper metric in the mathematical sense. Our dataset and code are publicly available at https://github.com/hellooks/CDNet.
[ { "version": "v1", "created": "Thu, 26 May 2022 16:57:04 GMT" }, { "version": "v2", "created": "Fri, 31 Mar 2023 15:07:28 GMT" } ]
2023-04-03T00:00:00
[ [ "Wang", "Zhihua", "" ], [ "Xu", "Keshuo", "" ], [ "Yang", "Yang", "" ], [ "Dong", "Jianlei", "" ], [ "Gu", "Shuhang", "" ], [ "Xu", "Lihao", "" ], [ "Fang", "Yuming", "" ], [ "Ma", "Kede", "" ] ]
new_dataset
0.998739
2206.02241
Fabian Peller-Konrad
Fabian Peller-Konrad, Rainer Kartmann, Christian R. G. Dreher, Andre Meixner, Fabian Reister, Markus Grotz, Tamim Asfour
A Memory System of a Robot Cognitive Architecture and its Implementation in ArmarX
35 pages, 19 figures, submitted to RAS
Robotics and Autonomous Systems (2023)
10.1016/j.robot.2023.104415
ROBOT: 104415
cs.AI cs.RO
http://creativecommons.org/licenses/by/4.0/
Cognitive agents such as humans and robots perceive their environment through an abundance of sensors producing streams of data that need to be processed to generate intelligent behavior. A key question of cognition-enabled and AI-driven robotics is how to organize and manage knowledge efficiently in a cognitive robot control architecture. We argue that memory is a central active component of such architectures that mediates between semantic and sensorimotor representations, orchestrates the flow of data streams and events between different processes, and provides the components of a cognitive architecture with data-driven services for the abstraction of semantics from sensorimotor data, the parametrization of symbolic plans for execution and prediction of action effects. Based on related work and the experience gained in developing our ARMAR humanoid robot systems, we identified conceptual and technical requirements of a memory system as a central component of a cognitive robot control architecture that facilitates the realization of high-level cognitive abilities such as explaining, reasoning, prospection, simulation and augmentation. Conceptually, a memory should be active, support multi-modal data representations, associate knowledge, be introspective, and have an inherently episodic structure. Technically, the memory should support a distributed design, be access-efficient and capable of long-term data storage. We introduce the memory system for our cognitive robot control architecture and its implementation in the robot software framework ArmarX. We evaluate the efficiency of the memory system with respect to transfer speeds, compression, reproduction and prediction capabilities.
[ { "version": "v1", "created": "Sun, 5 Jun 2022 19:15:29 GMT" }, { "version": "v2", "created": "Fri, 17 Jun 2022 09:42:05 GMT" }, { "version": "v3", "created": "Tue, 31 Jan 2023 12:33:13 GMT" } ]
2023-04-03T00:00:00
[ [ "Peller-Konrad", "Fabian", "" ], [ "Kartmann", "Rainer", "" ], [ "Dreher", "Christian R. G.", "" ], [ "Meixner", "Andre", "" ], [ "Reister", "Fabian", "" ], [ "Grotz", "Markus", "" ], [ "Asfour", "Tamim", "" ] ]
new_dataset
0.994781
2208.01765
Dianne O'Leary
Jennifer Head and Dianne P. O'Leary
Mary Kenneth Keller: First US PhD in Computer Science
This revision expands the abstract, adds a reference to a condensed version of this paper published in a journal, references Keller's work on ACM curricula, and notes an IEEE prize in her honor
IEEE Annals of the History of Computing 45(1):55--63, January-March 2023
10.1109/MAHC.2022.3231763
null
cs.GL
http://creativecommons.org/licenses/by-nc-nd/4.0/
In June 1965, Sister Mary Kenneth Keller, BVM, received the first US PhD in Computer Science, and this paper outlines her life and accomplishments. As a scholar, she has the distinction of being an early advocate of learning-by-example in artificial intelligence. Her main scholarly contribution was in shaping computer science education in high schools and small colleges. She was an evangelist for viewing the computer as a symbol manipulator, for providing computer literacy to everyone, and for the use of computers in service to humanity. She was far ahead of her time in working to ensure a place for women in technology and in eliminating barriers preventing their participation, such as poor access to education and daycare. She was a strong and spirited woman, a visionary in seeing how computers would revolutionize our lives. A condensation of this paper appeared as, ``The Legacy of Mary Kenneth Keller, First U.S. Ph.D. in Computer Science," Jennifer Head and Dianne P. O'Leary, IEEE Annals of the History of Computing 45(1):55--63, January-March 2023.
[ { "version": "v1", "created": "Tue, 2 Aug 2022 21:42:01 GMT" }, { "version": "v2", "created": "Thu, 30 Mar 2023 18:18:18 GMT" } ]
2023-04-03T00:00:00
[ [ "Head", "Jennifer", "" ], [ "O'Leary", "Dianne P.", "" ] ]
new_dataset
0.999177
2210.10094
Yasas Seneviratne
Yasas Seneviratne, Korakit Seemakhupt, Sihang Liu, Samira Khan
NearPM: A Near-Data Processing System for Storage-Class Applications
null
null
null
null
cs.CE cs.GT
http://creativecommons.org/publicdomain/zero/1.0/
Persistent Memory (PM) technologies enable program recovery to a consistent state in case of failure. To ensure this crash-consistent behavior, programs need to enforce persist ordering by employing mechanisms, such as logging and checkpointing, which introduce additional data movement. The emerging near-data processing (NDP) architectures can effectively reduce this data movement overhead. In this work we propose NearPM, a near-data processor that supports accelerable primitives in crash-consistent programs. Using these primitives, NearPM accelerates commonly used crash consistency mechanisms: logging, checkpointing, and shadow paging. NearPM further reduces the synchronization overheads between the NDP and the CPU to guarantee persist ordering by moving ordering handling near memory. It ensures a correct persist ordering between the CPU and NDP devices, as well as among multiple NDP devices, with Partitioned Persist Ordering (PPO). We prototype NearPM on an FPGA platform. NearPM executes data-intensive operations in crash consistency mechanisms with correct ordering guarantees while the rest of the program runs on the CPU. We evaluate nine PM workloads, where each workload supports three crash consistency mechanisms - logging, checkpointing, and shadow paging. Overall, NearPM achieves 4.3-9.8X speedup in the NDP-offloaded operations and 1.22-1.35X speedup in end-to-end execution.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 18:45:54 GMT" }, { "version": "v2", "created": "Fri, 31 Mar 2023 15:24:27 GMT" } ]
2023-04-03T00:00:00
[ [ "Seneviratne", "Yasas", "" ], [ "Seemakhupt", "Korakit", "" ], [ "Liu", "Sihang", "" ], [ "Khan", "Samira", "" ] ]
new_dataset
0.969305
2211.03726
Carl Doersch
Carl Doersch, Ankush Gupta, Larisa Markeeva, Adri\`a Recasens, Lucas Smaira, Yusuf Aytar, Jo\~ao Carreira, Andrew Zisserman, Yi Yang
TAP-Vid: A Benchmark for Tracking Any Point in a Video
Published in NeurIPS Datasets and Benchmarks track, 2022
null
null
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generic motion understanding from video involves not only tracking objects, but also perceiving how their surfaces deform and move. This information is useful to make inferences about 3D shape, physical properties and object interactions. While the problem of tracking arbitrary physical points on surfaces over longer video clips has received some attention, no dataset or benchmark for evaluation existed, until now. In this paper, we first formalize the problem, naming it tracking any point (TAP). We introduce a companion benchmark, TAP-Vid, which is composed of both real-world videos with accurate human annotations of point tracks, and synthetic videos with perfect ground-truth point tracks. Central to the construction of our benchmark is a novel semi-automatic crowdsourced pipeline which uses optical flow estimates to compensate for easier, short-term motion like camera shake, allowing annotators to focus on harder sections of video. We validate our pipeline on synthetic data and propose a simple end-to-end point tracking model TAP-Net, showing that it outperforms all prior methods on our benchmark when trained on synthetic data.
[ { "version": "v1", "created": "Mon, 7 Nov 2022 17:57:02 GMT" }, { "version": "v2", "created": "Fri, 31 Mar 2023 11:51:40 GMT" } ]
2023-04-03T00:00:00
[ [ "Doersch", "Carl", "" ], [ "Gupta", "Ankush", "" ], [ "Markeeva", "Larisa", "" ], [ "Recasens", "Adrià", "" ], [ "Smaira", "Lucas", "" ], [ "Aytar", "Yusuf", "" ], [ "Carreira", "João", "" ], [ "Zisserman", "Andrew", "" ], [ "Yang", "Yi", "" ] ]
new_dataset
0.999819
2211.07021
Eddie Bkheet
Eddie Bkheet, Anne-Lise D'Angelo, Adam Goldbraikh, Shlomi Laufer
Using Hand Pose Estimation To Automate Open Surgery Training Feedback
Accepted to IPCAI 2023, 12 pages, 5 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Purpose: This research aims to facilitate the use of state-of-the-art computer vision algorithms for the automated training of surgeons and the analysis of surgical footage. By estimating 2D hand poses, we model the movement of the practitioner's hands, and their interaction with surgical instruments, to study their potential benefit for surgical training. Methods: We leverage pre-trained models on a publicly-available hands dataset to create our own in-house dataset of 100 open surgery simulation videos with 2D hand poses. We also assess the ability of pose estimations to segment surgical videos into gestures and tool-usage segments and compare them to kinematic sensors and I3D features. Furthermore, we introduce 6 novel surgical dexterity proxies stemming from domain experts' training advice, all of which our framework can automatically detect given raw video footage. Results: State-of-the-art gesture segmentation accuracy of 88.35\% on the Open Surgery Simulation dataset is achieved with the fusion of 2D poses and I3D features from multiple angles. The introduced surgical skill proxies presented significant differences for novices compared to experts and produced actionable feedback for improvement. Conclusion: This research demonstrates the benefit of pose estimations for open surgery by analyzing their effectiveness in gesture segmentation and skill assessment. Gesture segmentation using pose estimations achieved comparable results to physical sensors while being remote and markerless. Surgical dexterity proxies that rely on pose estimation proved they can be used to work towards automated training feedback. We hope our findings encourage additional collaboration on novel skill proxies to make surgical training more efficient.
[ { "version": "v1", "created": "Sun, 13 Nov 2022 21:47:31 GMT" }, { "version": "v2", "created": "Thu, 30 Mar 2023 19:14:54 GMT" } ]
2023-04-03T00:00:00
[ [ "Bkheet", "Eddie", "" ], [ "D'Angelo", "Anne-Lise", "" ], [ "Goldbraikh", "Adam", "" ], [ "Laufer", "Shlomi", "" ] ]
new_dataset
0.999553
2211.11417
Ehsan Pajouheshgar
Ehsan Pajouheshgar, Yitao Xu, Tong Zhang, Sabine S\"usstrunk
DyNCA: Real-time Dynamic Texture Synthesis Using Neural Cellular Automata
Link to the demo: https://dynca.github.io/
null
null
null
cs.CV cs.GR cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Current Dynamic Texture Synthesis (DyTS) models can synthesize realistic videos. However, they require a slow iterative optimization process to synthesize a single fixed-size short video, and they do not offer any post-training control over the synthesis process. We propose Dynamic Neural Cellular Automata (DyNCA), a framework for real-time and controllable dynamic texture synthesis. Our method is built upon the recently introduced NCA models and can synthesize infinitely long and arbitrary-sized realistic video textures in real time. We quantitatively and qualitatively evaluate our model and show that our synthesized videos appear more realistic than the existing results. We improve the SOTA DyTS performance by $2\sim 4$ orders of magnitude. Moreover, our model offers several real-time video controls including motion speed, motion direction, and an editing brush tool. We exhibit our trained models in an online interactive demo that runs on local hardware and is accessible on personal computers and smartphones.
[ { "version": "v1", "created": "Mon, 21 Nov 2022 13:01:52 GMT" }, { "version": "v2", "created": "Thu, 30 Mar 2023 21:56:33 GMT" } ]
2023-04-03T00:00:00
[ [ "Pajouheshgar", "Ehsan", "" ], [ "Xu", "Yitao", "" ], [ "Zhang", "Tong", "" ], [ "Süsstrunk", "Sabine", "" ] ]
new_dataset
0.999639
2211.11525
Bruno Spilak
Raul Bag, Bruno Spilak, Julian Winkel, Wolfgang Karl H\"ardle
Quantinar: a blockchain p2p ecosystem for honest scientific research
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Living in the Information Age, the power of data and correct statistical analysis has never been more prevalent. Academics and practitioners nowadays require an accurate application of quantitative methods. Yet many branches are subject to a crisis of integrity, manifested in the improper use of statistical models, $p$-hacking, HARKing, or failure to replicate results. We propose the use of a Peer-to-Peer (P2P) ecosystem based on a blockchain network, Quantinar (quantinar.com), to support quantitative analytics knowledge paired with code in the form of Quantlets (quantlet.com) or software snippets. The integration of blockchain technology makes Quantinar a decentralized autonomous organization (DAO) that ensures fully transparent and reproducible scientific research.
[ { "version": "v1", "created": "Sun, 13 Nov 2022 11:28:04 GMT" }, { "version": "v2", "created": "Fri, 31 Mar 2023 14:29:58 GMT" } ]
2023-04-03T00:00:00
[ [ "Bag", "Raul", "" ], [ "Spilak", "Bruno", "" ], [ "Winkel", "Julian", "" ], [ "Härdle", "Wolfgang Karl", "" ] ]
new_dataset
0.99238
2301.05570
Mike Sharples PhD
Mike Sharples
John Clark's Latin Verse Machine: 19th Century Computational Creativity
13 pages, 5 figures, 1 table. Submitted to IEEE Annals of the History of Computing
IEEE Annals of the History of Computing, 45, 1, 31-42 (2023)
10.1109/MAHC.2023.3241258
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
John Clark was inventor of the Eureka machine to generate hexameter Latin verse. He labored for 13 years from 1832 to implement the device that could compose at random over 26 million different lines of well-formed verse. This paper proposes that Clark should be regarded as an early cognitive scientist. Clark described his machine as an illustration of a theory of "kaleidoscopic evolution" whereby the Latin verse is "conceived in the mind of the machine" then mechanically produced and displayed. We describe the background to automated generation of verse, the design and mechanics of Eureka, its reception in London in 1845 and its place in the history of language generation by machine. The article interprets Clark's theory of kaleidoscopic evolution in terms of modern cognitive science. It suggests that Clark has not been given the recognition he deserves as a pioneer of computational creativity.
[ { "version": "v1", "created": "Fri, 13 Jan 2023 14:20:04 GMT" }, { "version": "v2", "created": "Mon, 30 Jan 2023 15:21:58 GMT" } ]
2023-04-03T00:00:00
[ [ "Sharples", "Mike", "" ] ]
new_dataset
0.999569
2302.09665
Zirong Chen
Zirong Chen, Issa Li, Haoxiang Zhang, Sarah Preum, John A. Stankovic, Meiyi Ma
CitySpec with Shield: A Secure Intelligent Assistant for Requirement Formalization
arXiv admin note: substantial text overlap with arXiv:2206.03132
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
An increasing number of monitoring systems have been developed in smart cities to ensure that the real-time operations of a city satisfy safety and performance requirements. However, many existing city requirements are written in English with missing, inaccurate, or ambiguous information. There is a high demand for assisting city policymakers in converting human-specified requirements to machine-understandable formal specifications for monitoring systems. To tackle this limitation, we build CitySpec, the first intelligent assistant system for requirement specification in smart cities. To create CitySpec, we first collect over 1,500 real-world city requirements across different domains (e.g., transportation and energy) from over 100 cities and extract city-specific knowledge to generate a dataset of city vocabulary with 3,061 words. We also build a translation model and enhance it through requirement synthesis and develop a novel online learning framework with shielded validation. The evaluation results on real-world city requirements show that CitySpec increases the sentence-level accuracy of requirement specification from 59.02% to 86.64%, and has strong adaptability to a new city and a new domain (e.g., the F1 score for requirements in Seattle increases from 77.6% to 93.75% with online learning). After the enhancement from the shield function, CitySpec is now immune to most known textual adversarial inputs (e.g., the attack success rate of DeepWordBug after the shield function is reduced to 0% from 82.73%). We test the CitySpec with 18 participants from different domains. CitySpec shows its strong usability and adaptability to different domains, and also its robustness to malicious inputs.
[ { "version": "v1", "created": "Sun, 19 Feb 2023 20:11:06 GMT" }, { "version": "v2", "created": "Thu, 30 Mar 2023 23:25:57 GMT" } ]
2023-04-03T00:00:00
[ [ "Chen", "Zirong", "" ], [ "Li", "Issa", "" ], [ "Zhang", "Haoxiang", "" ], [ "Preum", "Sarah", "" ], [ "Stankovic", "John A.", "" ], [ "Ma", "Meiyi", "" ] ]
new_dataset
0.997534
2303.15616
Xuyang Shen
Xuyang Shen and Dong Li and Jinxing Zhou and Zhen Qin and Bowen He and Xiaodong Han and Aixuan Li and Yuchao Dai and Lingpeng Kong and Meng Wang and Yu Qiao and Yiran Zhong
Fine-grained Audible Video Description
accepted to CVPR 2023, Xuyang Shen, Dong Li and Jinxing Zhou contribute equally, code link: github.com/OpenNLPLab/FAVDBench, dataset link: www.avlbench.opennlplab.cn
null
null
17
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
We explore a new task for audio-visual-language modeling called fine-grained audible video description (FAVD). It aims to provide detailed textual descriptions for the given audible videos, including the appearance and spatial locations of each object, the actions of moving objects, and the sounds in videos. Existing visual-language modeling tasks often concentrate on visual cues in videos while undervaluing the language and audio modalities. On the other hand, FAVD requires not only audio-visual-language modeling skills but also paragraph-level language generation abilities. We construct the first fine-grained audible video description benchmark (FAVDBench) to facilitate this research. For each video clip, we first provide a one-sentence summary of the video, i.e., the caption, followed by 4-6 sentences describing the visual details and 1-2 audio-related descriptions at the end. The descriptions are provided in both English and Chinese. We create two new metrics for this task: an EntityScore to gauge the completeness of entities in the visual descriptions, and an AudioScore to assess the audio descriptions. As a preliminary approach to this task, we propose an audio-visual-language transformer that extends an existing video captioning model with an additional audio branch. We combine the masked language modeling and auto-regressive language modeling losses to optimize our model so that it can produce paragraph-level descriptions. We illustrate the efficiency of our model in audio-visual-language modeling by evaluating it against the proposed benchmark using both conventional captioning metrics and our proposed metrics. We further put our benchmark to the test in video generation models, demonstrating that employing fine-grained video descriptions can create more intricate videos than using captions.
[ { "version": "v1", "created": "Mon, 27 Mar 2023 22:03:48 GMT" } ]
2023-04-03T00:00:00
[ [ "Shen", "Xuyang", "" ], [ "Li", "Dong", "" ], [ "Zhou", "Jinxing", "" ], [ "Qin", "Zhen", "" ], [ "He", "Bowen", "" ], [ "Han", "Xiaodong", "" ], [ "Li", "Aixuan", "" ], [ "Dai", "Yuchao", "" ], [ "Kong", "Lingpeng", "" ], [ "Wang", "Meng", "" ], [ "Qiao", "Yu", "" ], [ "Zhong", "Yiran", "" ] ]
new_dataset
0.999774
2303.17582
Ahmad Amine
Ahmad Amine, Mostafa Aldilati, Hadi Hasan, Noel Maalouf, Imad H. Elhajj
Human-Robot Interaction using VAHR: Virtual Assistant, Human, and Robots in the Loop
7 pages, 7 figures
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Robots have become ubiquitous tools in various industries and households, highlighting the importance of human-robot interaction (HRI). This has increased the need for easy and accessible communication between humans and robots. Recent research has focused on the intersection of virtual assistant technology, such as Amazon's Alexa, with robots and its effect on HRI. This paper presents the Virtual Assistant, Human, and Robots in the loop (VAHR) system, which utilizes bidirectional communication to control multiple robots through Alexa. VAHR's performance was evaluated through a human-subjects experiment, comparing objective and subjective metrics of traditional keyboard and mouse interfaces to VAHR. The results showed that VAHR required 41% less Robot Attention Demand and ensured 91% more Fan-out time compared to the standard method. Additionally, VAHR led to a 62.5% improvement in multi-tasking, highlighting the potential for efficient human-robot interaction in physically- and mentally-demanding scenarios. However, subjective metrics revealed a need for human operators to build confidence and trust with this new method of operation.
[ { "version": "v1", "created": "Thu, 30 Mar 2023 17:49:55 GMT" }, { "version": "v2", "created": "Fri, 31 Mar 2023 15:53:06 GMT" } ]
2023-04-03T00:00:00
[ [ "Amine", "Ahmad", "" ], [ "Aldilati", "Mostafa", "" ], [ "Hasan", "Hadi", "" ], [ "Maalouf", "Noel", "" ], [ "Elhajj", "Imad H.", "" ] ]
new_dataset
0.98301
2303.17619
Pooja Prajod
Pooja Prajod, Matteo Lavit Nicora, Matteo Malosio, Elisabeth Andr\'e
Gaze-based Attention Recognition for Human-Robot Collaboration
Accepted to PETRA 2023
null
null
null
cs.HC cs.AI cs.CV cs.RO
http://creativecommons.org/licenses/by-sa/4.0/
Attention (and distraction) recognition is a key factor in improving human-robot collaboration. We present an assembly scenario where a human operator and a cobot collaborate equally to piece together a gearbox. The setup provides multiple opportunities for the cobot to adapt its behavior depending on the operator's attention, which can improve the collaboration experience and reduce psychological strain. As a first step, we recognize the areas in the workspace that the human operator is paying attention to, and consequently, detect when the operator is distracted. We propose a novel deep-learning approach to develop an attention recognition model. First, we train a convolutional neural network to estimate the gaze direction using a publicly available image dataset. Then, we use transfer learning with a small dataset to map the gaze direction onto pre-defined areas of interest. Models trained using this approach performed very well in leave-one-subject-out evaluation on the small dataset. We performed an additional validation of our models using the video snippets collected from participants working as an operator in the presented assembly scenario. Although the recall for the Distracted class was lower in this case, the models performed well in recognizing the areas the operator paid attention to. To the best of our knowledge, this is the first work that validated an attention recognition model using data from a setting that mimics industrial human-robot collaboration. Our findings highlight the need for validation of attention recognition solutions in such full-fledged, non-guided scenarios.
[ { "version": "v1", "created": "Thu, 30 Mar 2023 11:55:38 GMT" } ]
2023-04-03T00:00:00
[ [ "Prajod", "Pooja", "" ], [ "Nicora", "Matteo Lavit", "" ], [ "Malosio", "Matteo", "" ], [ "André", "Elisabeth", "" ] ]
new_dataset
0.996754
2303.17647
Danyang Liu
Danyang Liu, Frank Keller
Detecting and Grounding Important Characters in Visual Stories
AAAI 2023
null
null
null
cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
Characters are essential to the plot of any story. Establishing the characters before writing a story can improve the clarity of the plot and the overall flow of the narrative. However, previous work on visual storytelling tends to focus on detecting objects in images and discovering relationships between them. In this approach, characters are not distinguished from other objects when they are fed into the generation pipeline. The result is a coherent sequence of events rather than a character-centric story. In order to address this limitation, we introduce the VIST-Character dataset, which provides rich character-centric annotations, including visual and textual co-reference chains and importance ratings for characters. Based on this dataset, we propose two new tasks: important character detection and character grounding in visual stories. For both tasks, we develop simple, unsupervised models based on distributional similarity and pre-trained vision-and-language models. Our new dataset, together with these models, can serve as the foundation for subsequent work on analysing and generating stories from a character-centric perspective.
[ { "version": "v1", "created": "Thu, 30 Mar 2023 18:24:06 GMT" } ]
2023-04-03T00:00:00
[ [ "Liu", "Danyang", "" ], [ "Keller", "Frank", "" ] ]
new_dataset
0.977279
2303.17667
Nicholas Milikich
Nicholas Milikich and Joshua Johnson
Taureau: A Stock Market Movement Inference Framework Based on Twitter Sentiment Analysis
null
null
null
null
cs.CY cs.SI q-fin.CP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of fast-paced information dissemination and retrieval, it has become inherently important to resort to automated means of predicting stock market prices. In this paper, we propose Taureau, a framework that leverages Twitter sentiment analysis for predicting stock market movement. The aim of our research is to determine whether Twitter, which is assumed to be representative of the general public, can give insight into the public perception of a particular company and has any correlation to that company's stock price movement. We intend to utilize this correlation to predict stock price movement. We first utilize Tweepy and getOldTweets to obtain historical tweets indicating public opinions for a set of top companies during periods of major events. We filter and label the tweets using standard programming libraries. We then vectorize and generate word embedding from the obtained tweets. Afterward, we leverage TextBlob, a state-of-the-art sentiment analytics engine, to assess and quantify the users' moods based on the tweets. Next, we correlate the temporal dimensions of the obtained sentiment scores with monthly stock price movement data. Finally, we design and evaluate a predictive model to forecast stock price movement from lagged sentiment scores. We evaluate our framework using actual stock price movement data to assess its ability to predict movement direction.
[ { "version": "v1", "created": "Thu, 30 Mar 2023 19:12:08 GMT" } ]
2023-04-03T00:00:00
[ [ "Milikich", "Nicholas", "" ], [ "Johnson", "Joshua", "" ] ]
new_dataset
0.992597
2303.17717
Weimin Jin
Fengjiao Zou, Jennifer Ogle, Weimin Jin, Patrick Gerard, Daniel Petty, and Andrew Robb
Pedestrian Behavior Interacting with Autonomous Vehicles during Unmarked Midblock Multilane Crossings: Role of Infrastructure Design, AV Operations and Signaling
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
One of the main challenges autonomous vehicles (AVs) will face is interacting with pedestrians, especially at unmarked midblock locations where the right-of-way is unspecified. This study investigates pedestrian crossing behavior given different roadway centerline features (i.e., undivided, two-way left-turn lane (TWLTL), and median) and various AV operational schemes portrayed to pedestrians through on-vehicle signals (i.e., no signal, yellow negotiating indication, and yellow/blue negotiating/no-yield indications). This study employs virtual reality to simulate an urban unmarked midblock environment where pedestrians interact with AVs. Results demonstrate that both roadway centerline design features and AV operations and signaling significantly impact pedestrian unmarked midblock crossing behavior, including the waiting time at the curb, waiting time in the middle of the road, and the total crossing time. But only the roadway centerline features significantly impact the walking time. Participants in the undivided scene spent a longer time waiting at the curb and walking on the road than in the median and TWLTL scenes, but they spent a shorter time waiting in the middle. Compared to the AV without a signal, the design of yellow signal significantly reduced pedestrian waiting time at the curb and in the middle. But yellow/blue significantly increased the pedestrian waiting time. Interaction effects between roadway centerline design features and AV operations and signaling are significant only for waiting time in the middle. For middle waiting time, yellow/blue signals had the most impact on the median road type and the least on the undivided road. Demographics, past behaviors, and walking exposure are also explored. Older individuals tend to wait longer, and pedestrian past crossing behaviors and past walking exposures do not significantly impact pedestrian walking behavior.
[ { "version": "v1", "created": "Thu, 30 Mar 2023 21:36:51 GMT" } ]
2023-04-03T00:00:00
[ [ "Zou", "Fengjiao", "" ], [ "Ogle", "Jennifer", "" ], [ "Jin", "Weimin", "" ], [ "Gerard", "Patrick", "" ], [ "Petty", "Daniel", "" ], [ "Robb", "Andrew", "" ] ]
new_dataset
0.970586
2303.17845
Ayokunle Ige
Ayokunle Olalekan Ige, Mohd Halim Mohd Noor
WSense: A Robust Feature Learning Module for Lightweight Human Activity Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent times, various modules, such as squeeze-and-excitation, have been proposed to improve the quality of features learned from wearable sensor signals. However, these modules often cause the number of parameters to be large, which is not suitable for building lightweight human activity recognition models that can be easily deployed on end devices. In this research, we propose a feature learning module, termed WSense, which uses two 1D CNN and global max pooling layers to extract similar-quality features from wearable sensor data while ignoring the difference in activity recognition models caused by the size of the sliding window. Experiments were carried out using CNN and ConvLSTM feature learning pipelines on a dataset obtained with a single accelerometer (WISDM) and another obtained using the fusion of accelerometers, gyroscopes, and magnetometers (PAMAP2) under various sliding window sizes. A total of nine hundred sixty (960) experiments were conducted to validate the WSense module against baselines and existing methods on the two datasets. The results showed that the WSense module aided pipelines in learning similar-quality features and outperformed the baselines and existing models with a minimal and uniform model size across all sliding window segmentations. The code is available at https://github.com/AOige/WSense.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 07:12:58 GMT" } ]
2023-04-03T00:00:00
[ [ "Ige", "Ayokunle Olalekan", "" ], [ "Noor", "Mohd Halim Mohd", "" ] ]
new_dataset
0.997858
2303.17877
Kaihua Qin
Kaihua Qin, Stefanos Chaliasos, Liyi Zhou, Benjamin Livshits, Dawn Song, Arthur Gervais
The Blockchain Imitation Game
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of blockchains for automated and adversarial trading has become commonplace. However, due to the transparent nature of blockchains, an adversary is able to observe any pending, not-yet-mined transactions, along with their execution logic. This transparency further enables a new type of adversary, which copies and front-runs profitable pending transactions in real-time, yielding significant financial gains. Shedding light on such "copy-paste" malpractice, this paper introduces the Blockchain Imitation Game and proposes a generalized imitation attack methodology called Ape. Leveraging dynamic program analysis techniques, Ape supports the automatic synthesis of adversarial smart contracts. Over a timeframe of one year (1st of August, 2021 to 31st of July, 2022), Ape could have yielded 148.96M USD in profit on Ethereum, and 42.70M USD on BNB Smart Chain (BSC). Not only as a malicious attack, we further show the potential of transaction and contract imitation as a defensive strategy. Within one year, we find that Ape could have successfully imitated 13 and 22 known Decentralized Finance (DeFi) attacks on Ethereum and BSC, respectively. Our findings suggest that blockchain validators can imitate attacks in real-time to prevent intrusions in DeFi.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 08:21:43 GMT" } ]
2023-04-03T00:00:00
[ [ "Qin", "Kaihua", "" ], [ "Chaliasos", "Stefanos", "" ], [ "Zhou", "Liyi", "" ], [ "Livshits", "Benjamin", "" ], [ "Song", "Dawn", "" ], [ "Gervais", "Arthur", "" ] ]
new_dataset
0.998176
2303.17881
Colin Drewes
Colin Drewes, Olivia Weng, Andres Meza, Alric Althoff, David Kohlbrenner, Ryan Kastner, Dustin Richmond
Pentimento: Data Remanence in Cloud FPGAs
17 Pages, 8 Figures
null
null
null
cs.CR cs.AR
http://creativecommons.org/licenses/by/4.0/
Cloud FPGAs strike an alluring balance between computational efficiency, energy efficiency, and cost. It is the flexibility of the FPGA architecture that enables these benefits, but that very same flexibility that exposes new security vulnerabilities. We show that a remote attacker can recover "FPGA pentimenti" - long-removed secret data belonging to a prior user of a cloud FPGA. The sensitive data constituting an FPGA pentimento is an analog imprint from bias temperature instability (BTI) effects on the underlying transistors. We demonstrate how this slight degradation can be measured using a time-to-digital (TDC) converter when an adversary programs one into the target cloud FPGA. This technique allows an attacker to ascertain previously safe information on cloud FPGAs, even after it is no longer explicitly present. Notably, it can allow an attacker who knows a non-secret "skeleton" (the physical structure, but not the contents) of the victim's design to (1) extract proprietary details from an encrypted FPGA design image available on the AWS marketplace and (2) recover data loaded at runtime by a previous user of a cloud FPGA using a known design. Our experiments show that BTI degradation (burn-in) and recovery are measurable and constitute a security threat to commercial cloud FPGAs.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 08:32:40 GMT" } ]
2023-04-03T00:00:00
[ [ "Drewes", "Colin", "" ], [ "Weng", "Olivia", "" ], [ "Meza", "Andres", "" ], [ "Althoff", "Alric", "" ], [ "Kohlbrenner", "David", "" ], [ "Kastner", "Ryan", "" ], [ "Richmond", "Dustin", "" ] ]
new_dataset
0.999632
2303.17892
Samy Badreddine
Samy Badreddine and Gianluca Apriceno and Andrea Passerini and Luciano Serafini
Interval Logic Tensor Networks
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we introduce Interval Real Logic (IRL), a two-sorted logic that interprets knowledge such as sequential properties (traces) and event properties using sequences of real-featured data. We interpret connectives using fuzzy logic, event durations using trapezoidal fuzzy intervals, and fuzzy temporal relations using relationships between the intervals' areas. We propose Interval Logic Tensor Networks (ILTN), a neuro-symbolic system that learns by propagating gradients through IRL. In order to support effective learning, ILTN defines smoothened versions of the fuzzy intervals and temporal relations of IRL using softplus activations. We show that ILTN can successfully leverage knowledge expressed in IRL in synthetic tasks that require reasoning about events to predict their fuzzy durations. Our results show that the system is capable of making events compliant with background temporal knowledge.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 08:51:44 GMT" } ]
2023-04-03T00:00:00
[ [ "Badreddine", "Samy", "" ], [ "Apriceno", "Gianluca", "" ], [ "Passerini", "Andrea", "" ], [ "Serafini", "Luciano", "" ] ]
new_dataset
0.984405
2303.17912
Jo\~ao Pedro Ara\'ujo
Joao Pedro Araujo, Jiaman Li, Karthik Vetrivel, Rishi Agarwal, Deepak Gopinath, Jiajun Wu, Alexander Clegg, C. Karen Liu
CIRCLE: Capture In Rich Contextual Environments
null
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synthesizing 3D human motion in a contextual, ecological environment is important for simulating realistic activities people perform in the real world. However, conventional optics-based motion capture systems are not suited for simultaneously capturing human movements and complex scenes. The lack of rich contextual 3D human motion datasets presents a roadblock to creating high-quality generative human motion models. We propose a novel motion acquisition system in which the actor perceives and operates in a highly contextual virtual world while being motion captured in the real world. Our system enables rapid collection of high-quality human motion in highly diverse scenes, without the concern of occlusion or the need for physical scene construction in the real world. We present CIRCLE, a dataset containing 10 hours of full-body reaching motion from 5 subjects across nine scenes, paired with ego-centric information of the environment represented in various forms, such as RGBD videos. We use this dataset to train a model that generates human motion conditioned on scene information. Leveraging our dataset, the model learns to use ego-centric scene information to achieve nontrivial reaching tasks in the context of complex 3D scenes. To download the data please visit https://stanford-tml.github.io/circle_dataset/.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 09:18:12 GMT" } ]
2023-04-03T00:00:00
[ [ "Araujo", "Joao Pedro", "" ], [ "Li", "Jiaman", "" ], [ "Vetrivel", "Karthik", "" ], [ "Agarwal", "Rishi", "" ], [ "Gopinath", "Deepak", "" ], [ "Wu", "Jiajun", "" ], [ "Clegg", "Alexander", "" ], [ "Liu", "C. Karen", "" ] ]
new_dataset
0.999448
2303.17930
Shiyao Wu
Shiyao Wu
JobHam-place with smart recommend job options and candidate filtering options
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Due to the increasing number of graduates, many applicants struggle to find a job, and employers experience difficulty filtering job applicants, which might negatively impact their effectiveness. However, most job-hunting websites lack job recommendation and CV filtering or ranking functionality, and these features are not integrated into the system. Thus, a smart job hunter combining the above functionality is developed in this project, comprising job recommendation, CV ranking and even a job dashboard for skills and job applicant functionality. Job recommendation and CV ranking start with automatic keyword extraction and end with the job/CV ranking algorithm. Automatic keyword extraction is implemented by the Job2Skill and CV2Skill models based on Bert. Job2Skill consists of two components, a text encoder and Gru-based layers, while CV2Skill is mainly based on Bert and fine-tunes the pre-trained model on the Resume-Entity dataset. Besides, to match skills from CVs and job descriptions and to rank lists of jobs and candidates, job/CV ranking algorithms have been provided to compute the occurrence ratio of skill words based on the TFIDF score and the match ratio of the total skill numbers. Some advanced features have been integrated into the website to improve the user experience, such as the calendar and sweetalert2 plugins, along with basic features for going through the job application process, such as job application tracking and interview arrangement.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 09:54:47 GMT" } ]
2023-04-03T00:00:00
[ [ "Wu", "Shiyao", "" ] ]
new_dataset
0.985381
2303.17935
Sandra Liu
Sandra Q. Liu, Leonardo Zamora Ya\~nez, Edward H. Adelson
GelSight EndoFlex: A Soft Endoskeleton Hand with Continuous High-Resolution Tactile Sensing
Accepted to IEEE Conference on Soft Robotics (RoboSoft) 2023
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
We describe a novel three-finger robot hand that has high resolution tactile sensing along the entire length of each finger. The fingers are compliant, constructed with a soft shell supported with a flexible endoskeleton. Each finger contains two cameras, allowing tactile data to be gathered along the front and side surfaces of the fingers. The gripper can perform an enveloping grasp of an object and extract a large amount of rich tactile data in a single grasp. By capturing data from many parts of the grasped object at once, we can do object recognition with a single grasp rather than requiring multiple touches. We describe our novel design and construction techniques which allow us to simultaneously satisfy the requirements of compliance and strength, and high resolution tactile sensing over large areas. The supplementary video can be found here: https://youtu.be/H1OYADtgj9k
[ { "version": "v1", "created": "Fri, 31 Mar 2023 10:00:40 GMT" } ]
2023-04-03T00:00:00
[ [ "Liu", "Sandra Q.", "" ], [ "Yañez", "Leonardo Zamora", "" ], [ "Adelson", "Edward H.", "" ] ]
new_dataset
0.999402
2303.17946
Luca Pajola
Sara Bardi, Mauro Conti, Luca Pajola, Pier Paolo Tricomi
Social Honeypot for Humans: Luring People through Self-managed Instagram Pages
Accepted at ACNS2023
null
null
null
cs.SI cs.AI cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Social Honeypots are tools deployed in Online Social Networks (OSN) to attract malevolent activities performed by spammers and bots. To this end, their content is designed to be of maximum interest to malicious users. However, by choosing an appropriate content topic, this attractive mechanism could be extended to any OSN users, rather than only luring malicious actors. As a result, honeypots can be used to attract individuals interested in a wide range of topics, from sports and hobbies to more sensitive subjects like political views and conspiracies. With all these individuals gathered in one place, honeypot owners can conduct many analyses, from social to marketing studies. In this work, we introduce a novel concept of social honeypot for attracting OSN users interested in a generic target topic. We propose a framework based on fully-automated content generation strategies and engagement plans to mimic legit Instagram pages. To validate our framework, we created 21 self-managed social honeypots (i.e., pages) on Instagram, covering three topics, four content generation strategies, and three engaging plans. In nine weeks, our honeypots gathered a total of 753 followers, 5387 comments, and 15739 likes. These results demonstrate the validity of our approach, and through statistical analysis, we examine the characteristics of effective social honeypots.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 10:20:24 GMT" } ]
2023-04-03T00:00:00
[ [ "Bardi", "Sara", "" ], [ "Conti", "Mauro", "" ], [ "Pajola", "Luca", "" ], [ "Tricomi", "Pier Paolo", "" ] ]
new_dataset
0.981957
2303.17948
Ming Yan
Ming Yan, Xin Wang, Yudi Dai, Siqi Shen, Chenglu Wen, Lan Xu, Yuexin Ma, Cheng Wang
CIMI4D: A Large Multimodal Climbing Motion Dataset under Human-scene Interactions
CVPR 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Motion capture is a long-standing research problem. Although it has been studied for decades, the majority of research focuses on ground-based movements such as walking, sitting, dancing, etc. Off-ground actions such as climbing are largely overlooked. As an important type of action in sports and the firefighting field, climbing movements are challenging to capture because of their complex back poses, intricate human-scene interactions, and difficult global localization. The research community does not have an in-depth understanding of the climbing action due to the lack of specific datasets. To address this limitation, we collect CIMI4D, a large rock \textbf{C}l\textbf{I}mbing \textbf{M}ot\textbf{I}on dataset from 12 persons climbing 13 different climbing walls. The dataset consists of around 180,000 frames of pose inertial measurements, LiDAR point clouds, RGB videos, high-precision static point cloud scenes, and reconstructed scene meshes. Moreover, we annotate touched rock holds frame by frame to facilitate a detailed exploration of human-scene interaction. The core of this dataset is a blending optimization process, which corrects the pose as it drifts and is affected by the magnetic conditions. To evaluate the merit of CIMI4D, we perform four tasks which include human pose estimation (with/without scene constraints), pose prediction, and pose generation. The experimental results demonstrate that CIMI4D presents great challenges to existing methods and enables extensive research opportunities. We share the dataset with the research community at http://www.lidarhumanmotion.net/cimi4d/.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 10:26:47 GMT" } ]
2023-04-03T00:00:00
[ [ "Yan", "Ming", "" ], [ "Wang", "Xin", "" ], [ "Dai", "Yudi", "" ], [ "Shen", "Siqi", "" ], [ "Wen", "Chenglu", "" ], [ "Xu", "Lan", "" ], [ "Ma", "Yuexin", "" ], [ "Wang", "Cheng", "" ] ]
new_dataset
0.99986
2303.17974
An Mo
Nayan Man Singh Pradhan, Patrick Frank, An Mo, Alexander Badri-Spr\"owitz
Upside down: affordable high-performance motion platform
For associated videos, see https://youtu.be/thXPA2MYcQw For open-source files, see https://github.com/nayan-pradhan/solo-6dof-motion-platform
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Parallel robots are capable of high-speed manipulation and have become essential tools in the industry. The proximal placement of their motors and the low weight of their end effectors make them ideal for generating highly dynamic motion. Therefore, parallel robots can be adopted for motion platform designs, as long as end effector loads are low. Traditional motion platforms can be large and powerful to generate multiple g acceleration. However, these designs tend to be expensive and large. Similar but smaller motion platforms feature a small work range with reduced degrees of freedom (DoFs) and a limited payload. Here we seek a medium-sized affordable parallel robot capable of powerful and high-speed 6-DoF motion in a comparably large workspace. This work explores the concept of a quadruped robot flipped upside-down, with the motion platform fixed between its feet. In particular, we exploit the high-power dynamic brushless actuation and the four-leg redundancy when moving the motion platform. We characterize the resulting motion platform by tracking sinusoidal and circular trajectories with varying loads. Dynamic motions in 6 DoFs up to 10 Hz and ~10 mm amplitude are possible when moving a mass of 300 grams. We demonstrate single-axis end-effector translations up to ~20 mm at 10 Hz for higher loads of 1.2 kg. The motion platform can be replicated easily by 3D printing and off-the-shelf components. All motion platform-related hardware and the custom-written software required to replicate are open-source.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 11:21:03 GMT" } ]
2023-04-03T00:00:00
[ [ "Pradhan", "Nayan Man Singh", "" ], [ "Frank", "Patrick", "" ], [ "Mo", "An", "" ], [ "Badri-Spröwitz", "Alexander", "" ] ]
new_dataset
0.985696
2303.17989
Panagiotis Agrafiotis
Panagiotis Agrafiotis, Anastastios Doulamis, Andreas Georgopoulos
Unsupervised crack detection on complex stone masonry surfaces
Submitted to the Journal of Cultural Heritage, Elsevier, under review as of 31st of March 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Computer vision for detecting building pathologies has interested researchers for quite some time. Vision-based crack detection is a non-destructive assessment technique, which can be useful especially for Cultural Heritage (CH), where strict regulations apply and even simple interventions are not permitted. Recently, shallow and deep machine learning architectures applied to various types of imagery are gaining ground. In this article a crack detection methodology for stone masonry walls is presented. In the proposed approach, crack detection is framed as an unsupervised anomaly detection problem on RGB (Red Green Blue) image patches. Towards this direction, some of the most popular state-of-the-art CNN (Convolutional Neural Network) architectures are deployed and modified to binary-classify the images or image patches by predicting a specific class for the tested imagery, 'Crack' or 'No crack', and to detect and localize those cracks on the RGB imagery with high accuracy. Testing of the model was performed on various test sites and random images retrieved from the internet and collected by the authors, and the results suggested the high performance of specific networks compared to the rest, considering also the small number of epochs required for training. Those results matched the accuracy delivered by more complex and computationally heavy approaches that require a large amount of data for training. Source code is available on GitHub https://github.com/pagraf/Crack-detection while datasets are available on Zenodo https://doi.org/10.5281/zenodo.6516913.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 12:07:23 GMT" } ]
2023-04-03T00:00:00
[ [ "Agrafiotis", "Panagiotis", "" ], [ "Doulamis", "Anastastios", "" ], [ "Georgopoulos", "Andreas", "" ] ]
new_dataset
0.996515
2303.18021
Huu-Thinh Do
Huu-Thinh Do, Franco Blanchini, Ionela Prodan
A flatness-based saturated controller design for a quadcopter with experimental validation
null
null
null
null
cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Using the properties of differential flatness, a controllable system, such as a quadcopter model, may be transformed into a linear equivalent system via a coordinate change and an input mapping. This is a straightforward advantage for the quadcopter's controller design and its real-time implementation. However, one significant hindrance is that, while the dynamics become linear in the new coordinates (the flat output space), the input constraints become convoluted. This paper addresses an explicit pre-stabilization-based control scheme which handles the input constraints for the quadcopter in the flat output space with a saturation component. The system's stability is shown to hold by Lyapunov-stability arguments. Moreover, the practical viability of the proposed method is validated both in simulation and in experiments on a nano-drone platform. Hence, the flatness-based saturated controller not only ensures stability and constraint satisfaction, but also requires very low computational effort, allowing for embedded implementations.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 12:55:44 GMT" } ]
2023-04-03T00:00:00
[ [ "Do", "Huu-Thinh", "" ], [ "Blanchini", "Franco", "" ], [ "Prodan", "Ionela", "" ] ]
new_dataset
0.957212
2303.18094
Agapius Bou Ghosn
Agapius Bou Ghosn, Marcus Nolte, Philip Polack, Arnaud de La Fortelle and Markus Maurer
Robust LSTM-based Vehicle Velocity Observer for Regular and Near-limits Applications
null
null
null
null
cs.RO eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate velocity estimation is key to vehicle control. While the literature describes how model-based and learning-based observers are able to estimate a vehicle's velocity in normal driving conditions, the challenge remains to estimate the velocity in near-limits maneuvers while using only conventional in-car sensors. In this paper, we introduce a novel neural network architecture based on Long Short-Term Memory (LSTM) networks to accurately estimate the vehicle's velocity in different driving conditions, including maneuvers at the limits of handling. The approach has been tested on real vehicle data and it provides more accurate estimations than state-of-the-art model-based and learning-based methods, for both regular and near-limits driving scenarios. Our approach is robust since the performance of the state-of-the-art observers deteriorates with higher dynamics, while our method adapts to different maneuvers, providing accurate estimations even at the vehicle's limits of handling.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 14:35:08 GMT" } ]
2023-04-03T00:00:00
[ [ "Ghosn", "Agapius Bou", "" ], [ "Nolte", "Marcus", "" ], [ "Polack", "Philip", "" ], [ "de La Fortelle", "Arnaud", "" ], [ "Maurer", "Markus", "" ] ]
new_dataset
0.997673
2303.18110
Ramon Sanabria
Ramon Sanabria, Nikolay Bogoychev, Nina Markl, Andrea Carmantini, Ondrej Klejch, Peter Bell
The Edinburgh International Accents of English Corpus: Towards the Democratization of English ASR
Accepted to IEEE ICASSP 2023
null
null
null
cs.CL cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
English is the most widely spoken language in the world, used daily by millions of people as a first or second language in many different contexts. As a result, there are many varieties of English. Despite the great many advances in English automatic speech recognition (ASR) over the past decades, results are usually reported based on test datasets which fail to represent the diversity of English as spoken today around the globe. We present the first release of The Edinburgh International Accents of English Corpus (EdAcc). This dataset attempts to better represent the wide diversity of English, encompassing almost 40 hours of dyadic video call conversations between friends. Unlike other datasets, EdAcc includes a wide range of first and second-language varieties of English and a linguistic background profile of each speaker. Results on the latest public and commercial models show that EdAcc highlights shortcomings of current English ASR models. The best performing model, trained on 680 thousand hours of transcribed data, obtains an average word error rate (WER) of 19.7% -- in contrast to the 2.7% WER obtained when evaluated on US English clean read speech. Across all models, we observe a drop in performance on Indian, Jamaican, and Nigerian English speakers. Recordings, linguistic backgrounds, data statement, and evaluation scripts are released on our website (https://groups.inf.ed.ac.uk/edacc/) under a CC-BY-SA license.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 14:56:54 GMT" } ]
2023-04-03T00:00:00
[ [ "Sanabria", "Ramon", "" ], [ "Bogoychev", "Nikolay", "" ], [ "Markl", "Nina", "" ], [ "Carmantini", "Andrea", "" ], [ "Klejch", "Ondrej", "" ], [ "Bell", "Peter", "" ] ]
new_dataset
0.999731
2303.18130
Mehmet Parlak
Mehmet Parlak
Blockchain-based Immutable Evidence and Decentralized Loss Adjustment for Autonomous Vehicle Accidents in Insurance
IEEE Global Emerging Technology Blockchain Forum 2022
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In case of an accident between two autonomous vehicles equipped with emerging technologies, how do we apportion liability among the various players? A special liability regime has not yet even been established for damages that may arise from accidents of autonomous vehicles. Would the immutable, time-stamped sensor records of vehicles on a distributed ledger help define the intertwined relations of liability subjects right through the accident? What if synthetic media created through deepfakes gets involved in the insurance claims? While integrating AI-powered anomaly or deepfake detection into automated insurance claims processing helps to prevent insurance fraud, it is only a matter of time before deepfakes become nearly undetectable even to elaborate forensic tools. This paper proposes a blockchain-based insurtech decentralized application to check the authenticity and provenance of the accident footage and also to decentralize the loss-adjusting process through a hybrid of decentralized and centralized databases using smart contracts.
[ { "version": "v1", "created": "Wed, 29 Mar 2023 21:50:13 GMT" } ]
2023-04-03T00:00:00
[ [ "Parlak", "Mehmet", "" ] ]
new_dataset
0.983685
2303.18132
Jakub Breier
Jakub Breier, Dirmanto Jap, Xiaolu Hou, Shivam Bhasin
A Desynchronization-Based Countermeasure Against Side-Channel Analysis of Neural Networks
Accepted to the International Symposium on Cyber Security, Cryptology and Machine Learning 2023 (CSCML)
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Model extraction attacks have been widely applied and can normally be used to recover confidential parameters of neural networks across multiple layers. Recently, side-channel analysis of neural networks has allowed highly effective parameter extraction even for networks with several deep layers. It is therefore of interest to implement a certain level of protection against these attacks. In this paper, we propose a desynchronization-based countermeasure that makes the timing analysis of activation functions harder. We analyze the timing properties of several activation functions and design the desynchronization so that the dependency on the input and the activation type is hidden. We experimentally verify the effectiveness of the countermeasure on a 32-bit ARM Cortex-M4 microcontroller and employ a t-test to show the side-channel information leakage. The overhead ultimately depends on the number of neurons in the fully-connected layer; for example, in the case of 4096 neurons in VGG-19, the overheads are between 2.8% and 11%.
[ { "version": "v1", "created": "Sat, 25 Mar 2023 12:35:04 GMT" } ]
2023-04-03T00:00:00
[ [ "Breier", "Jakub", "" ], [ "Jap", "Dirmanto", "" ], [ "Hou", "Xiaolu", "" ], [ "Bhasin", "Shivam", "" ] ]
new_dataset
0.989608
2303.18142
Hideyuki Kawashima
Takashi Kambayashi and Takayuki Tanabe and Takashi Hoshino and Hideyuki Kawashima
Shirakami: A Hybrid Concurrency Control Protocol for Tsurugi Relational Database System
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by-nc-nd/4.0/
Modern real-world transactional workloads such as bills of materials or telecommunication billing need to process both short transactions and long transactions. Recent concurrency control protocols do not cope with such workloads since they assume only classical workloads (e.g., YCSB and TPC-C) that have relatively short transactions. To this end, we propose a new concurrency control protocol, Shirakami. Shirakami has two sub-protocols: the Shirakami-LTX protocol, for long transactions, is based on multiversion concurrency control, and the Shirakami-OCC protocol, for short transactions, is based on Silo. Shirakami naturally integrates them with a write preservation method and epoch-based synchronization. Shirakami is a module in the Tsurugi system, a production-purpose relational database system.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 15:26:42 GMT" } ]
2023-04-03T00:00:00
[ [ "Kambayashi", "Takashi", "" ], [ "Tanabe", "Takayuki", "" ], [ "Hoshino", "Takashi", "" ], [ "Kawashima", "Hideyuki", "" ] ]
new_dataset
0.984268
2303.18157
Guillermo Bern\'ardez
Guillermo Bern\'ardez, Jos\'e Su\'arez-Varela, Albert L\'opez, Xiang Shi, Shihan Xiao, Xiangle Cheng, Pere Barlet-Ros, and Albert Cabellos-Aparicio
MAGNNETO: A Graph Neural Network-based Multi-Agent system for Traffic Engineering
IEEE Transactions on Cognitive Communications and Networking (2023). arXiv admin note: text overlap with arXiv:2109.01445
null
10.1109/TCCN.2023.3235719
null
cs.NI cs.LG cs.MA
http://creativecommons.org/licenses/by/4.0/
Current trends in networking propose the use of Machine Learning (ML) for a wide variety of network optimization tasks. As such, many efforts have been made to produce ML-based solutions for Traffic Engineering (TE), which is a fundamental problem in ISP networks. Nowadays, state-of-the-art TE optimizers rely on traditional optimization techniques, such as local search, constraint programming, or linear programming. In this paper, we present MAGNNETO, a distributed ML-based framework that leverages Multi-Agent Reinforcement Learning and Graph Neural Networks for distributed TE optimization. MAGNNETO deploys a set of agents across the network that learn and communicate in a distributed fashion via message exchanges between neighboring agents. Particularly, we apply this framework to optimize link weights in OSPF, with the goal of minimizing network congestion. In our evaluation, we compare MAGNNETO against several state-of-the-art TE optimizers in more than 75 topologies (up to 153 nodes and 354 links), including realistic traffic loads. Our experimental results show that, thanks to its distributed nature, MAGNNETO achieves comparable performance to state-of-the-art TE optimizers with significantly lower execution times. Moreover, our ML-based solution demonstrates a strong generalization capability to successfully operate in new networks unseen during training.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 15:47:49 GMT" } ]
2023-04-03T00:00:00
[ [ "Bernárdez", "Guillermo", "" ], [ "Suárez-Varela", "José", "" ], [ "López", "Albert", "" ], [ "Shi", "Xiang", "" ], [ "Xiao", "Shihan", "" ], [ "Cheng", "Xiangle", "" ], [ "Barlet-Ros", "Pere", "" ], [ "Cabellos-Aparicio", "Albert", "" ] ]
new_dataset
0.998285
2303.18162
Son T. Luu
Son T. Luu, Khoi Trong Hoang, Tuong Quang Pham, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen
A Multiple Choices Reading Comprehension Corpus for Vietnamese Language Education
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Machine reading comprehension has been an interesting and challenging task in recent years, with the purpose of extracting useful information from texts. To give computers the ability to understand a reading text and answer relevant questions, we introduce ViMMRC 2.0 - an extension of the previous ViMMRC for the task of multiple-choice reading comprehension in Vietnamese textbooks, which contain reading articles for students from Grade 1 to Grade 12. This dataset has 699 reading passages, which are prose and poems, and 5,273 questions. The questions in the new dataset are not fixed to four options as in the previous version. Moreover, the difficulty of the questions is increased, which challenges the models to find the correct choice. The computer must understand the whole context of the reading passage, the question, and the content of each choice to extract the right answers. Hence, we propose a multi-stage approach that combines a multi-step attention network (MAN) with the natural language inference (NLI) task to enhance the performance of the reading comprehension model. We then compare the proposed methodology with the baseline BERTology models on the new dataset and on ViMMRC 1.0. Our multi-stage models achieve 58.81% accuracy on the test set, which is 5.34% better than the best BERTology models. From the results of the error analysis, we find that the main challenge for reading comprehension models is understanding the implicit context in texts and linking it together in order to find the correct answers. Finally, we hope our new dataset will motivate further research in enhancing the language understanding ability of computers for the Vietnamese language.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 15:54:54 GMT" } ]
2023-04-03T00:00:00
[ [ "Luu", "Son T.", "" ], [ "Hoang", "Khoi Trong", "" ], [ "Pham", "Tuong Quang", "" ], [ "Van Nguyen", "Kiet", "" ], [ "Nguyen", "Ngan Luu-Thuy", "" ] ]
new_dataset
0.999119
2303.18219
Shan Lin
Shan Lin, Yuheng Zhi, and Michael C. Yip
SemHint-MD: Learning from Noisy Semantic Labels for Self-Supervised Monocular Depth Estimation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Without ground truth supervision, self-supervised depth estimation can be trapped in a local minimum due to the gradient-locality issue of the photometric loss. In this paper, we present a framework to enhance depth estimation by leveraging semantic segmentation to guide the network to jump out of the local minimum. Prior works have proposed to share encoders between these two tasks or explicitly align them based on priors like the consistency between edges in the depth and segmentation maps. Yet, these methods usually require ground truth or high-quality pseudo labels, which may not be easily accessible in real-world applications. In contrast, we investigate self-supervised depth estimation along with a segmentation branch that is supervised with noisy labels provided by models pre-trained with limited data. We extend parameter sharing from the encoder to the decoder and study the influence of different numbers of shared decoder parameters on model performance. Also, we propose to use cross-task information to refine the current depth and segmentation predictions to generate pseudo-depth and semantic labels for training. The advantages of the proposed method are demonstrated through extensive experiments on the KITTI benchmark and a downstream task for endoscopic tissue deformation tracking.
[ { "version": "v1", "created": "Fri, 31 Mar 2023 17:20:27 GMT" } ]
2023-04-03T00:00:00
[ [ "Lin", "Shan", "" ], [ "Zhi", "Yuheng", "" ], [ "Yip", "Michael C.", "" ] ]
new_dataset
0.98096
1801.05544
Ankit Parag Shah
Benjamin Elizalde, Rohan Badlani, Ankit Shah, Anurag Kumar, Bhiksha Raj
NELS -- Never-Ending Learner of Sounds
Accepted at Machine Learning for Audio Signal Processing (ML4Audio), 31st Conference on Neural Information Processing Systems (NIPS 2017), Long Beach, CA, USA
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sounds are essential to how humans perceive and interact with the world and are captured in recordings and shared on the Internet on a minute-by-minute basis. These recordings, which are predominantly videos, constitute the largest archive of sounds we know. However, most of these recordings have undescribed content, making methods for automatic sound analysis, indexing, and retrieval necessary. These methods have to address multiple challenges, such as the relation between sounds and language, numerous and diverse sound classes, and large-scale evaluation. We propose a system that continuously learns relations between sounds and language from the web, improves sound recognition models over time, and evaluates its learning competency at large scale without references. We introduce the Never-Ending Learner of Sounds (NELS), a project for continuous learning of sounds and their associated knowledge, available online at nels.cs.cmu.edu
[ { "version": "v1", "created": "Wed, 17 Jan 2018 04:29:12 GMT" }, { "version": "v2", "created": "Wed, 29 Mar 2023 19:52:25 GMT" } ]
2023-03-31T00:00:00
[ [ "Elizalde", "Benjamin", "" ], [ "Badlani", "Rohan", "" ], [ "Shah", "Ankit", "" ], [ "Kumar", "Anurag", "" ], [ "Raj", "Bhiksha", "" ] ]
new_dataset
0.992851