Dataset schema (field order of the records below):

| column         | type         | observed range / values |
|----------------|--------------|-------------------------|
| id             | string       | lengths 9-10            |
| submitter      | string       | lengths 2-52            |
| authors        | string       | lengths 4-6.51k         |
| title          | string       | lengths 4-246           |
| comments       | string       | lengths 1-523           |
| journal-ref    | string       | lengths 4-345           |
| doi            | string       | lengths 11-120          |
| report-no      | string       | lengths 2-243           |
| categories     | string       | lengths 5-98            |
| license        | string       | 9 distinct values       |
| abstract       | string       | lengths 33-3.33k        |
| versions       | list         |                         |
| update_date    | timestamp[s] |                         |
| authors_parsed | list         |                         |
| prediction     | string       | 1 distinct value        |
| probability    | float64      | 0.95-1                  |
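As a quick orientation before the records below (which follow the field order of this schema), here is a minimal sketch of loading such a dump with pandas. The file name `arxiv_predictions.jsonl` and the JSON Lines layout are assumptions; this page does not say where the underlying data lives.

```python
import pandas as pd

# Hypothetical file; the dump itself does not specify a download location.
df = pd.read_json("arxiv_predictions.jsonl", lines=True)

# Each row pairs arXiv metadata with the classifier output columns.
print(df[["id", "title", "prediction", "probability"]].head())
print(df["probability"].describe())  # schema says values lie in [0.95, 1]
```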
2210.16998
Ali Ebnenasir
Ebrahim Fazli and Ali Ebnenasir
TPGen: A Self-Stabilizing GPU-Based Method for Prime and Test Paths Generation
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents TPGen, a novel scalable GPU-based method for generating Test Paths (TPs) and Prime Paths (PPs), used in structural testing and test data generation. TPGen outperforms existing methods for PP and TP generation by several orders of magnitude in both time and space efficiency. This improvement is made possible by a new non-contiguous and hierarchical memory allocation method, called the Three-level Path Access Method (TPAM), which enables efficient storage of maximal simple paths in memory. In addition to its high time and space efficiency, a major contribution of TPGen is its self-stabilizing design, in which threads execute in a fully asynchronous and order-oblivious way without using any atomic instructions. TPGen can generate the PPs and TPs of structurally complex programs with extremely high cyclomatic and NPath complexity.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 00:55:01 GMT" } ]
2022-11-01T00:00:00
[ [ "Fazli", "Ebrahim", "" ], [ "Ebnenasir", "Ali", "" ] ]
new_dataset
0.971878
2210.17008
Robin Hankin Dr
Robin K. S. Hankin
Stokes's theorem in R
18 pages
null
null
null
cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this short article I introduce the stokes package, which provides functionality for working with tensors, alternating forms, wedge products, and related concepts from the exterior calculus. Notation and spirit follow Spivak. Stokes's generalized integral theorem, viz. $\int_{\partial X}\phi=\int_X d\phi$, is demonstrated here using the package; it is available on CRAN at https://CRAN.R-project.org/package=stokes.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 01:51:36 GMT" } ]
2022-11-01T00:00:00
[ [ "Hankin", "Robin K. S.", "" ] ]
new_dataset
0.999514
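The stokes package works with alternating forms and wedge products; for one-forms, the wedge product evaluated on vectors is by definition a determinant. The package is R, so the following Python snippet is only a minimal illustration of that defining identity, with made-up forms and vectors:

```python
import numpy as np

def wedge_of_one_forms(forms, vectors):
    """Evaluate (f1 ^ ... ^ fk)(v1, ..., vk) = det[f_i(v_j)],
    the defining property of the wedge product of one-forms."""
    return np.linalg.det(np.array([[f @ v for v in vectors] for f in forms]))

dx, dy = np.array([1.0, 0.0]), np.array([0.0, 1.0])
u, v = np.array([2.0, 0.0]), np.array([0.0, 3.0])
print(wedge_of_one_forms([dx, dy], [u, v]))  # 6.0, the oriented area of (u, v)
```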
2210.17057
Lei Kou
Lei Kou, Chuang Liu, Guo-wei Cai, Jia-ning Zhou, Quan-de Yuan, Si-miao Pang
Fault diagnosis for open-circuit faults in NPC inverter based on knowledge-driven and data-driven approaches
IET Power Electronics
null
10.1049/iet-pel.2019.0835
null
cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this study, the diagnosis and location of open-circuit faults in neutral-point-clamped (NPC) inverters are analysed. A novel fault diagnosis approach combining knowledge-driven and data-driven techniques is presented for open-circuit faults in the insulated-gate bipolar transistors (IGBTs) of an NPC inverter: the Concordia transform (knowledge-driven) and the random forests (RFs) technique (data-driven) are employed to improve the robustness of the fault diagnosis classifier. First, the fault feature data of the AC current in both the normal state and the open-circuit fault states of the NPC inverter are analysed and extracted. Second, the Concordia transform is used to process the fault samples; it is verified that the slopes of the current trajectories are not affected by different loads, which helps the proposed method reduce its overdependence on fault data. The transformed fault samples are then used to train the RFs fault diagnosis classifier, and the fault diagnosis results show that the classification accuracy and robustness of the classifier are improved. Finally, the results of online fault diagnosis experiments show that the proposed classifier can locate open-circuit faults of IGBTs in the NPC inverter under different load conditions.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 04:33:53 GMT" } ]
2022-11-01T00:00:00
[ [ "Kou", "Lei", "" ], [ "Liu", "Chuang", "" ], [ "Cai", "Guo-wei", "" ], [ "Zhou", "Jia-ning", "" ], [ "Yuan", "Quan-de", "" ], [ "Pang", "Si-miao", "" ] ]
new_dataset
0.999559
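The abstract above combines the Concordia (Clarke) transform with a random forest. Below is a toy sketch of that pipeline on synthetic three-phase currents; the amplitude-ratio feature stands in for the paper's trajectory slopes, and all signal parameters are assumptions:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def concordia(ia, ib, ic):
    """Clarke/Concordia transform: project three-phase currents onto
    the alpha-beta plane."""
    i_alpha = np.sqrt(2 / 3) * (ia - 0.5 * ib - 0.5 * ic)
    i_beta = np.sqrt(2 / 3) * (np.sqrt(3) / 2) * (ib - ic)
    return i_alpha, i_beta

def feature(ia, ib, ic):
    # Load-invariant shape feature of the alpha-beta trajectory; the paper
    # uses trajectory slopes, this amplitude ratio is a simpler stand-in.
    a, b = concordia(ia, ib, ic)
    return [a.std() / b.std()]

rng = np.random.default_rng(0)
t = np.linspace(0, 0.1, 500)
X, y = [], []
for label, a_gain in [(0, 1.0), (1, 0.0)]:       # 1 = phase-A device open
    for load in (1.0, 2.0, 3.0):                 # feature should not move
        ia = a_gain * load * np.sin(2 * np.pi * 50 * t)
        ib = load * np.sin(2 * np.pi * 50 * t - 2 * np.pi / 3)
        ic = load * np.sin(2 * np.pi * 50 * t + 2 * np.pi / 3)
        X.append(feature(ia + rng.normal(0, 0.01, t.size),
                         ib + rng.normal(0, 0.01, t.size),
                         ic + rng.normal(0, 0.01, t.size)))
        y.append(label)

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X))  # expect [0 0 0 1 1 1]
```

The feature barely moves across the three load levels, which illustrates the load-invariance property the abstract emphasizes.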
2210.17086
Gali Sheffi
Gali Sheffi, Pedro Ramalhete and Erez Petrank
EEMARQ: Efficient Lock-Free Range Queries with Memory Reclamation
null
null
null
null
cs.DB cs.DC
http://creativecommons.org/licenses/by/4.0/
Multi-Version Concurrency Control (MVCC) is a common mechanism for achieving linearizable range queries in database systems and concurrent data structures. The core idea is to keep previous versions of nodes to serve range queries, while still providing atomic reads and updates. Existing concurrent data-structure implementations that support linearizable range queries are either slow, use locks, or rely on blocking reclamation schemes. We present EEMARQ, the first scheme that uses MVCC with lock-free memory reclamation to obtain a fully lock-free data structure supporting linearizable inserts, deletes, contains, and range queries. Evaluation shows that EEMARQ outperforms existing solutions across most workloads, with lower space overhead, while providing full lock freedom.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 06:23:05 GMT" } ]
2022-11-01T00:00:00
[ [ "Sheffi", "Gali", "" ], [ "Ramalhete", "Pedro", "" ], [ "Petrank", "Erez", "" ] ]
new_dataset
0.95626
2210.17115
Zhenzhe Hechen
Zhenzhe Hechen, Wei Huang, Yixin Zhao
ViT-LSLA: Vision Transformer with Light Self-Limited-Attention
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transformers have demonstrated competitive performance across a wide range of vision tasks, but computing global self-attention is very expensive. Many methods limit the range of attention to a local window to reduce computational complexity. However, these approaches do not reduce the number of parameters; moreover, the self-attention and the inner position bias (inside the softmax function) cause each query to focus on similar, nearby patches. Consequently, this paper presents a light self-limited-attention (LSLA), consisting of a light self-attention mechanism (LSA) to save computation cost and parameters, and a self-limited-attention mechanism (SLA) to improve performance. First, the LSA replaces the K (Key) and V (Value) of self-attention with the X (original input). Applied in vision Transformers, which have an encoder architecture and a self-attention mechanism, this simplifies the computation. Second, the SLA has a positional-information module and a limited-attention module. The former contains a dynamic scale and an inner position bias to adjust the distribution of the self-attention scores and enhance positional information. The latter uses an outer position bias after the softmax function to limit some large attention weights. Finally, a hierarchical Vision Transformer with Light self-Limited-attention (ViT-LSLA) is presented. Experiments show that ViT-LSLA achieves 71.6% top-1 accuracy on IP102 (a 2.4% absolute improvement over Swin-T) and 87.2% top-1 accuracy on Mini-ImageNet (a 3.7% absolute improvement over Swin-T). Furthermore, it greatly reduces FLOPs (3.5 GFLOPs vs. Swin-T's 4.5 GFLOPs) and parameters (18.9M vs. Swin-T's 27.6M).
[ { "version": "v1", "created": "Mon, 31 Oct 2022 07:46:45 GMT" } ]
2022-11-01T00:00:00
[ [ "Hechen", "Zhenzhe", "" ], [ "Huang", "Wei", "" ], [ "Zhao", "Yixin", "" ] ]
new_dataset
0.994322
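The LSA component described above drops the learned key and value projections and attends directly over the input X. A minimal PyTorch sketch of that single change (single head, no normalization, and none of the SLA position biases):

```python
import torch
import torch.nn as nn

class LightSelfAttention(nn.Module):
    """Attention that keeps only a query projection and uses the raw input X
    as both keys and values (single head; normalization and the SLA position
    biases from the paper are omitted)."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):                    # x: (batch, tokens, dim)
        q = self.to_q(x)
        attn = (q @ x.transpose(-2, -1)) * self.scale
        return attn.softmax(dim=-1) @ x      # values are the raw X as well

x = torch.randn(2, 49, 64)
print(LightSelfAttention(64)(x).shape)       # torch.Size([2, 49, 64])
```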
2210.17130
Kohei Suenaga
Atsushi Kikuchi, Kotaro Uchida, Masaki Waga, Kohei Suenaga
BOREx: Bayesian-Optimization--Based Refinement of Saliency Map for Image- and Video-Classification Models
32 pages. To appear in ACCV 2022
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Explaining a classification result produced by an image- and video-classification model is one of the important but challenging issues in computer vision. Many methods have been proposed for producing heat-map-based explanations for this purpose, including ones based on the white-box approach that uses the internal information of a model (e.g., LRP, Grad-CAM, and Grad-CAM++) and ones based on the black-box approach that does not use any internal information (e.g., LIME, SHAP, and RISE). We propose a new black-box method BOREx (Bayesian Optimization for Refinement of visual model Explanation) to refine a heat map produced by any method. Our observation is that a heat-map-based explanation can be seen as a prior for an explanation method based on Bayesian optimization. Based on this observation, BOREx conducts Gaussian process regression (GPR) to estimate the saliency of each pixel in a given image, starting from the one produced by another explanation method. Our experiments statistically demonstrate that the refinement by BOREx improves low-quality heat maps for image- and video-classification results.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 08:25:12 GMT" } ]
2022-11-01T00:00:00
[ [ "Kikuchi", "Atsushi", "" ], [ "Uchida", "Kotaro", "" ], [ "Waga", "Masaki", "" ], [ "Suenaga", "Kohei", "" ] ]
new_dataset
0.995514
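BOREx's core step is Gaussian process regression over pixel saliency, seeded by a heat map from another explanation method. The toy sketch below only illustrates that regression step with scikit-learn; the random prior, RBF kernel, and uniform subsampling are assumptions, and the real method selects observations by Bayesian optimization:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
prior = rng.random((16, 16))        # stand-in for a heat map from, e.g., RISE

# Fit GPR to a subsample of prior pixels and let it smooth/interpolate the
# rest, returning an uncertainty map alongside the refined saliency.
coords = np.indices((16, 16)).reshape(2, -1).T.astype(float)
idx = rng.choice(len(coords), size=64, replace=False)
gpr = GaussianProcessRegressor(kernel=RBF(length_scale=3.0), alpha=1e-2)
gpr.fit(coords[idx], prior.reshape(-1)[idx])
refined, std = gpr.predict(coords, return_std=True)
print(refined.reshape(16, 16).shape, round(float(std.max()), 3))
```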
2210.17151
Deokki Hong
Deokki Hong
Tech Report: One-stage Lightweight Object Detectors
null
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
This work designs one-stage lightweight detectors that perform well in terms of mAP and latency. Starting from baseline models targeting GPU and CPU respectively, various operations are applied in place of the main operations in the baseline backbone networks. In addition to experiments on backbone networks and operations, several feature pyramid network (FPN) architectures are investigated. The benchmarks and the proposed detectors are analyzed in terms of parameter count, GFLOPs, GPU latency, CPU latency and mAP on MS COCO, a benchmark dataset in object detection. This work proposes similar or better network architectures considering the trade-off between accuracy and latency. For example, our proposed GPU-targeted backbone network outperforms that of YOLOX-tiny, which is selected as the benchmark, by 1.43x in speed and 0.5 mAP in accuracy on an NVIDIA GeForce RTX 2080 Ti GPU.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 09:02:37 GMT" } ]
2022-11-01T00:00:00
[ [ "Hong", "Deokki", "" ] ]
new_dataset
0.998306
2210.17185
Ayush Tripathi
Ayush Tripathi, Lalan Kumar, Prathosh A.P., Suriya Prakash Muthukrishnan
SurfMyoAiR: A surface Electromyography based framework for Airwriting Recognition
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Airwriting recognition is the task of identifying letters written in free space with finger movement. Electromyography (EMG) is a technique used to record electrical activity during muscle contraction and relaxation as a result of movement, and it is widely used for gesture recognition. Most current research in gesture recognition focuses on identifying static gestures. However, dynamic gestures are natural and user-friendly alternatives for input in Human-Computer Interaction applications. Airwriting recognition using EMG signals recorded from forearm muscles is therefore a viable solution. Since the user does not need to learn any new gestures, and a large range of words can be formed by concatenating letters, it generalizes to a wider population. There has been limited work on recognizing airwriting from EMG signals, and this forms the core idea of the current work. The SurfMyoAiR dataset, comprising EMG signals recorded while writing English uppercase letters, is constructed. Several time-domain features for constructing the EMG envelope and two time-frequency image representations, the Short-Time Fourier Transform and the Continuous Wavelet Transform, were explored to form the input to a deep learning model for airwriting recognition. Several different deep learning architectures were explored for this task. Additionally, the effect of various parameters such as signal length, window length and interpolation techniques on recognition performance is comprehensively explored. The best-achieved accuracy was 78.50% and 62.19% in user-dependent and user-independent scenarios respectively, using the Short-Time Fourier Transform in conjunction with a 2D Convolutional Neural Network classifier. Airwriting has great potential as a user-friendly alternate input modality for Human-Computer Interaction applications.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 10:08:34 GMT" } ]
2022-11-01T00:00:00
[ [ "Tripathi", "Ayush", "" ], [ "Kumar", "Lalan", "" ], [ "P.", "Prathosh A.", "" ], [ "Muthukrishnan", "Suriya Prakash", "" ] ]
new_dataset
0.999649
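The best configuration reported above feeds Short-Time Fourier Transform images of the EMG signal to a 2D CNN. A minimal sketch of producing such a time-frequency image with SciPy; the sampling rate, window sizes, and synthetic signal are assumptions:

```python
import numpy as np
from scipy.signal import stft

fs = 1000.0                                  # assumed EMG sampling rate (Hz)
t = np.arange(0, 2.0, 1 / fs)
emg = np.random.randn(t.size) * (1 + np.sin(2 * np.pi * 1.5 * t) ** 2)

f, frames, Z = stft(emg, fs=fs, nperseg=128, noverlap=64)
tf_image = np.log1p(np.abs(Z))               # (freq bins, time frames)
print(tf_image.shape)                        # this image feeds the 2D CNN
```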
2210.17190
Shubham Mittal
Shubham Mittal and Preslav Nakov
IITD at the WANLP 2022 Shared Task: Multilingual Multi-Granularity Network for Propaganda Detection
null
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
We present our system for the two subtasks of the shared task on propaganda detection in Arabic, part of WANLP'2022. Subtask 1 is a multi-label classification problem: find the propaganda techniques used in a given tweet. Our system for this task uses XLM-R to predict the probability that the target tweet uses each of the techniques. In addition to finding the techniques, Subtask 2 further asks to identify the textual span for each instance of each technique present in the tweet; this can be modeled as a sequence tagging problem. We use a multi-granularity network with an mBERT encoder for Subtask 2. Overall, our system ranks second on both subtasks (out of 14 and 3 participants, respectively). Our empirical analysis shows that it does not help to use a much larger English corpus annotated with propaganda techniques, whether used in English or after translation to Arabic.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 10:14:43 GMT" } ]
2022-11-01T00:00:00
[ [ "Mittal", "Shubham", "" ], [ "Nakov", "Preslav", "" ] ]
new_dataset
0.986257
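Subtask 1 above is multi-label classification with XLM-R: one independent probability per propaganda technique. A sketch with Hugging Face transformers, where the label count of 20 and the 0.5 threshold are assumptions (the classification head here is randomly initialized, i.e., untrained):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

NUM_TECHNIQUES = 20  # assumed label count, not the shared task's exact number

tok = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "xlm-roberta-base",
    num_labels=NUM_TECHNIQUES,
    problem_type="multi_label_classification")  # sigmoid + BCE semantics

inputs = tok("an example tweet", return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)
print((probs > 0.5).nonzero())  # indices of predicted techniques
```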
2210.17236
Daoguang Zan
Daoguang Zan, Bei Chen, Zeqi Lin, Bei Guan, Yongji Wang, Jian-Guang Lou
When Language Model Meets Private Library
EMNLP 2022 Findings
null
null
null
cs.PL cs.CL cs.SE
http://creativecommons.org/licenses/by/4.0/
With the rapid development of pre-training techniques, a number of language models have been pre-trained on large-scale code corpora and perform well in code generation. In this paper, we investigate how to equip pre-trained language models with the ability to generate code for private libraries. In practice, it is common for programmers to write code using private libraries. However, this is a challenge for language models, since they have never seen private APIs during training. Motivated by the fact that private libraries usually come with elaborate API documentation, we propose a novel framework with two modules: the APIRetriever finds useful APIs, and then the APICoder generates code using these APIs. For the APIRetriever, we present a dense retrieval system and also design a friendly interaction to involve users. For the APICoder, we can directly use off-the-shelf language models, or continually pre-train the base model on a code corpus containing API information. Both modules are trained with data from public libraries and can be generalized to private ones. Furthermore, we craft three benchmarks for private libraries, named TorchDataEval, MonkeyEval, and BeatNumEval. Experimental results demonstrate the impressive performance of our framework.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 11:42:06 GMT" } ]
2022-11-01T00:00:00
[ [ "Zan", "Daoguang", "" ], [ "Chen", "Bei", "" ], [ "Lin", "Zeqi", "" ], [ "Guan", "Bei", "" ], [ "Wang", "Yongji", "" ], [ "Lou", "Jian-Guang", "" ] ]
new_dataset
0.951138
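The APIRetriever described above is a dense retriever: API documentation and the query are embedded, and the nearest APIs are returned. A generic cosine-similarity sketch of that ranking step; the embeddings are random placeholders and the API names are invented:

```python
import numpy as np

def retrieve_apis(query_vec, doc_vecs, api_names, k=2):
    """Rank API docs by cosine similarity to the query embedding."""
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q
    top = np.argsort(scores)[::-1][:k]
    return [(api_names[i], float(scores[i])) for i in top]

rng = np.random.default_rng(1)
names = ["TorchData.load", "TorchData.map", "TorchData.shuffle"]  # invented
doc_vecs = rng.normal(size=(3, 64))      # placeholder document embeddings
print(retrieve_apis(rng.normal(size=64), doc_vecs, names))
```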
2210.17414
Sanjay Adhikesaven
Sanjay Adhikesaven
An Industrial Workplace Alerting and Monitoring Platform to Prevent Workplace Injury and Accidents
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Workplace accidents are a critical problem that causes many deaths, injuries, and financial losses. Climate change, including global warming, also has a severe impact on industrial workers. To reduce such casualties, it is important to proactively find unsafe environments where injuries could occur, by detecting the use of personal protective equipment (PPE) and identifying unsafe activities. Thus, we propose an industrial workplace alerting and monitoring platform that detects PPE use and classifies unsafe activity in group settings involving multiple humans and objects over long periods of time. Our proposed method is the first to analyze prolonged actions involving multiple people or objects, and it benefits from combining pose estimation with PPE detection in one platform. Additionally, we propose the first open-source annotated dataset of video data from industrial workplaces, annotated with action classifications and detected PPE. The proposed system can be implemented within the surveillance cameras already present in industrial settings, making it a practical and effective solution.
[ { "version": "v1", "created": "Tue, 25 Oct 2022 06:35:00 GMT" } ]
2022-11-01T00:00:00
[ [ "Adhikesaven", "Sanjay", "" ] ]
new_dataset
0.993637
2210.17491
Julian Whitman
Julian Whitman and Howie Choset
Learning Modular Robot Locomotion from Demonstrations
null
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
Modular robots can be reconfigured to create a variety of designs from a small set of components. But constructing a robot's hardware on its own is not enough: each robot needs a controller. One could create controllers for some designs individually, but developing policies for additional designs can be time consuming. This work presents a method that uses demonstrations from one set of designs to accelerate policy learning for additional designs. We leverage a learning framework in which a graph neural network is made up of modular components; each component corresponds to a type of module (e.g., a leg, wheel, or body), and these components can be recombined to learn from multiple designs at once. In this paper we develop a combined reinforcement and imitation learning algorithm. Our method is novel in that the policy is optimized to both maximize a reward for one design and simultaneously imitate demonstrations from different designs, within one objective function. We show that when the modular policy is optimized with this combined objective, demonstrations from one set of designs influence how the policy behaves on a different design, decreasing the number of training iterations needed.
[ { "version": "v1", "created": "Mon, 31 Oct 2022 17:15:32 GMT" } ]
2022-11-01T00:00:00
[ [ "Whitman", "Julian", "" ], [ "Choset", "Howie", "" ] ]
new_dataset
0.999127
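The combined objective described above can be written as one loss with a policy-gradient term for the current design plus a behavior-cloning term on demonstrations from other designs. A schematic PyTorch sketch; the REINFORCE-style form and the weight `beta` are assumptions, not the paper's exact formulation:

```python
import torch

def combined_objective(logp_actions, advantages, logp_demo_actions, beta=0.5):
    """One loss mixing reward maximization on the current design
    (REINFORCE-style policy gradient) with behavior cloning on
    demonstrations from other designs; `beta` is an assumed weight."""
    rl_loss = -(logp_actions * advantages).mean()
    bc_loss = -logp_demo_actions.mean()
    return rl_loss + beta * bc_loss

loss = combined_objective(torch.randn(32), torch.randn(32), torch.randn(64))
print(float(loss))
```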
2106.14651
Sam Kumar
Sam Kumar, David E. Culler, Raluca Ada Popa
MAGE: Nearly Zero-Cost Virtual Memory for Secure Computation
19 pages; Accepted to OSDI 2021
null
null
null
cs.OS cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Secure Computation (SC) is a family of cryptographic primitives for computing on encrypted data in single-party and multi-party settings. SC is being increasingly adopted by industry for a variety of applications. A significant obstacle to using SC for practical applications is the memory overhead of the underlying cryptography. We develop MAGE, an execution engine for SC that efficiently runs SC computations that do not fit in memory. We observe that, due to their intended security guarantees, SC schemes are inherently oblivious -- their memory access patterns are independent of the input data. Using this property, MAGE calculates the memory access pattern ahead of time and uses it to produce a memory management plan. This formulation of memory management, which we call memory programming, is a generalization of paging that allows MAGE to provide a highly efficient virtual memory abstraction for SC. MAGE outperforms the OS virtual memory system by up to an order of magnitude, and in many cases, runs SC computations that do not fit in memory at nearly the same speed as if the underlying machines had unbounded physical memory to fit the entire computation.
[ { "version": "v1", "created": "Wed, 23 Jun 2021 23:44:27 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2022 22:31:58 GMT" } ]
2022-10-31T00:00:00
[ [ "Kumar", "Sam", "" ], [ "Culler", "David E.", "" ], [ "Popa", "Raluca Ada", "" ] ]
new_dataset
0.953342
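Because SC programs are oblivious, MAGE knows the entire access trace before execution, which is what makes planning ahead of time possible. The textbook consequence is Belady's MIN policy (evict the page needed farthest in the future); the toy planner below illustrates that idea with a made-up trace and capacity, while MAGE's actual memory programming is far more elaborate:

```python
def farthest_next_use(resident, future):
    """Belady's MIN choice: evict the page whose next use lies farthest
    ahead (or never occurs) in the known future trace."""
    def next_use(page):
        return future.index(page) if page in future else float("inf")
    return max(resident, key=next_use)

trace = [1, 2, 3, 1, 4, 2, 1, 5, 3]   # made-up access trace, capacity 3
resident, plan = set(), []
for i, page in enumerate(trace):
    if page not in resident:
        if len(resident) == 3:
            victim = farthest_next_use(resident, trace[i + 1:])
            resident.remove(victim)
            plan.append(f"evict {victim}")
        resident.add(page)
        plan.append(f"fetch {page}")
print(plan)
```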
2108.09372
Archana Patel
Archana Patel, Sarika Jain, Narayan C. Debnath, Vishal Lama
InBiodiv-O: An Ontology for Indian Biodiversity Knowledge Management
This paper has been withdrawn by the author due to many grammatical errors, and inconsistent content
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
To present biodiversity information, a semantic model is required that connects all kinds of data about living creatures and their habitats. The model must be able to encode human knowledge so that machines can understand it. Ontology offers the richest machine-interpretable (rather than just machine-processable) and explicit semantics, and it is extensively used in the biodiversity domain. Various ontologies have been developed for the biodiversity domain; however, a review of the current landscape shows that these ontologies are not capable of describing Indian biodiversity information, even though India is one of the megadiverse countries. To semantically analyze Indian biodiversity information, it is crucial to build an ontology that describes all the essential terms of this domain from the unstructured data available on the web. Since the curation of ontologies depends heavily on the domain in which they are implemented, no ideal, universally applicable methodology has yet been defined. The aim of this article is to develop an ontology that semantically encodes all the terms of Indian biodiversity information in all its dimensions, based on the proposed methodology. A comprehensive evaluation shows that the proposed ontology is well built for the specified domain.
[ { "version": "v1", "created": "Fri, 20 Aug 2021 21:07:46 GMT" }, { "version": "v2", "created": "Fri, 28 Oct 2022 08:10:43 GMT" } ]
2022-10-31T00:00:00
[ [ "Patel", "Archana", "" ], [ "Jain", "Sarika", "" ], [ "Debnath", "Narayan C.", "" ], [ "Lama", "Vishal", "" ] ]
new_dataset
0.999166
2110.08565
Domenico Tortorella
Domenico Tortorella, Alessio Micheli
Dynamic Graph Echo State Networks
Accepted for oral presentation at ESANN 2021
Proceedings of the 29th European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning (ESANN 2021), pp. 99-104
10.14428/esann/2021.ES2021-70
null
cs.LG cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dynamic temporal graphs represent evolving relations between entities, e.g. interactions between social network users or infection spreading. We propose an extension of graph echo state networks for the efficient processing of dynamic temporal graphs, with a sufficient condition for their echo state property, and an experimental analysis of reservoir layout impact. Compared to temporal graph kernels that need to hold the entire history of vertex interactions, our model provides a vector encoding for the dynamic graph that is updated at each time-step without requiring training. Experiments show accuracy comparable to approximate temporal graph kernels on twelve dissemination process classification tasks.
[ { "version": "v1", "created": "Sat, 16 Oct 2021 12:51:50 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2022 19:39:01 GMT" } ]
2022-10-31T00:00:00
[ [ "Tortorella", "Domenico", "" ], [ "Micheli", "Alessio", "" ] ]
new_dataset
0.993235
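A graph echo state network keeps its weights fixed and random; each step diffuses node states over the current graph snapshot. A minimal NumPy sketch of one update, with a crude spectral scaling standing in for the paper's echo state property condition (all sizes and scalings are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
n_nodes, in_dim, units = 5, 2, 16

W_in = rng.uniform(-0.1, 0.1, (units, in_dim))        # fixed input weights
W_hat = rng.uniform(-1.0, 1.0, (units, units))
W_hat *= 0.9 / max(abs(np.linalg.eigvals(W_hat)))     # crude stability scaling

def step(x_prev, u_t, A_t):
    """x_t = tanh(u_t W_in^T + A_t x_prev W_hat^T): input drive plus
    neighbor states diffused through the current adjacency A_t."""
    return np.tanh(u_t @ W_in.T + A_t @ x_prev @ W_hat.T)

A_t = (rng.random((n_nodes, n_nodes)) < 0.3).astype(float)  # graph snapshot
x = step(np.zeros((n_nodes, units)), rng.normal(size=(n_nodes, in_dim)), A_t)
print(x.shape)   # (5, 16): one untrained embedding per node, no training step
```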
2202.09367
Peng Xiang
Peng Xiang, Xin Wen, Yu-Shen Liu, Yan-Pei Cao, Pengfei Wan, Wen Zheng, Zhizhong Han
Snowflake Point Deconvolution for Point Cloud Completion and Generation with Skip-Transformer
IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI), 2022. This work is a journal extension of our ICCV 2021 paper arXiv:2108.04444 . The first two authors contributed equally
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing point cloud completion methods suffer from the discrete nature of point clouds and the unstructured prediction of points in local regions, which makes it difficult to reveal fine local geometric details. To resolve this issue, we propose SnowflakeNet with snowflake point deconvolution (SPD) to generate complete point clouds. SPD models the generation of point clouds as the snowflake-like growth of points, where child points are generated progressively by splitting their parent points after each SPD. Our insight into the detailed geometry is to introduce a skip-transformer in the SPD to learn the point splitting patterns that can best fit the local regions. The skip-transformer leverages an attention mechanism to summarize the splitting patterns used in the previous SPD layer and produce the splitting in the current layer. The locally compact and structured point clouds generated by SPD precisely reveal the structural characteristics of the 3D shape in local patches, which enables us to predict highly detailed geometries. Moreover, since SPD is a general operation that is not limited to completion, we explore its applications in other generative tasks, including point cloud auto-encoding, generation, single image reconstruction, and upsampling. Our experimental results show that SnowflakeNet outperforms state-of-the-art methods under widely used benchmarks.
[ { "version": "v1", "created": "Fri, 18 Feb 2022 17:09:49 GMT" }, { "version": "v2", "created": "Tue, 22 Feb 2022 11:58:29 GMT" }, { "version": "v3", "created": "Fri, 28 Oct 2022 06:36:31 GMT" } ]
2022-10-31T00:00:00
[ [ "Xiang", "Peng", "" ], [ "Wen", "Xin", "" ], [ "Liu", "Yu-Shen", "" ], [ "Cao", "Yan-Pei", "" ], [ "Wan", "Pengfei", "" ], [ "Zheng", "Wen", "" ], [ "Han", "Zhizhong", "" ] ]
new_dataset
0.976121
2203.10885
Qiang Sheng
Qiang Sheng, Juan Cao, Xueyao Zhang, Rundong Li, Danding Wang, Yongchun Zhu
Zoom Out and Observe: News Environment Perception for Fake News Detection
ACL 2022 Main Conference (Long Paper)
null
10.18653/v1/2022.acl-long.311
null
cs.CL cs.CY cs.SI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Fake news detection is crucial for preventing the dissemination of misinformation on social media. To differentiate fake news from real ones, existing methods observe the language patterns of the news post and "zoom in" to verify its content with knowledge sources or check its readers' replies. However, these methods neglect the information in the external news environment where a fake news post is created and disseminated. The news environment represents recent mainstream media opinion and public attention, which is an important inspiration of fake news fabrication because fake news is often designed to ride the wave of popular events and catch public attention with unexpected novel content for greater exposure and spread. To capture the environmental signals of news posts, we "zoom out" to observe the news environment and propose the News Environment Perception Framework (NEP). For each post, we construct its macro and micro news environment from recent mainstream news. Then we design a popularity-oriented and a novelty-oriented module to perceive useful signals and further assist final prediction. Experiments on our newly built datasets show that the NEP can efficiently improve the performance of basic fake news detectors.
[ { "version": "v1", "created": "Mon, 21 Mar 2022 11:10:46 GMT" }, { "version": "v2", "created": "Fri, 28 Oct 2022 02:48:21 GMT" } ]
2022-10-31T00:00:00
[ [ "Sheng", "Qiang", "" ], [ "Cao", "Juan", "" ], [ "Zhang", "Xueyao", "" ], [ "Li", "Rundong", "" ], [ "Wang", "Danding", "" ], [ "Zhu", "Yongchun", "" ] ]
new_dataset
0.99835
2205.09641
Tanya Goyal
Tanya Goyal, Junyi Jessy Li, Greg Durrett
SNaC: Coherence Error Detection for Narrative Summarization
EMNLP 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Progress in summarizing long texts is inhibited by the lack of appropriate evaluation frameworks. When a long summary must be produced to appropriately cover the facets of that text, that summary needs to present a coherent narrative to be understandable by a reader, but current automatic and human evaluation methods fail to identify gaps in coherence. In this work, we introduce SNaC, a narrative coherence evaluation framework rooted in fine-grained annotations for long summaries. We develop a taxonomy of coherence errors in generated narrative summaries and collect span-level annotations for 6.6k sentences across 150 book and movie screenplay summaries. Our work provides the first characterization of coherence errors generated by state-of-the-art summarization models and a protocol for eliciting coherence judgments from crowd annotators. Furthermore, we show that the collected annotations allow us to train a strong classifier for automatically localizing coherence errors in generated summaries as well as benchmarking past work in coherence modeling. Finally, our SNaC framework can support future work in long document summarization and coherence evaluation, including improved summarization modeling and post-hoc summary correction.
[ { "version": "v1", "created": "Thu, 19 May 2022 16:01:47 GMT" }, { "version": "v2", "created": "Fri, 28 Oct 2022 15:28:59 GMT" } ]
2022-10-31T00:00:00
[ [ "Goyal", "Tanya", "" ], [ "Li", "Junyi Jessy", "" ], [ "Durrett", "Greg", "" ] ]
new_dataset
0.989385
2205.12206
Aitor Ormazabal
Aitor Ormazabal, Mikel Artetxe, Manex Agirrezabal, Aitor Soroa and Eneko Agirre
PoeLM: A Meter- and Rhyme-Controllable Language Model for Unsupervised Poetry Generation
EMNLP Findings 2022
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Formal verse poetry imposes strict constraints on the meter and rhyme scheme of poems. Most prior work on generating this type of poetry uses existing poems for supervision, which are difficult to obtain for most languages and poetic forms. In this work, we propose an unsupervised approach to generate poems that follow any given meter and rhyme scheme, without requiring any poetic text for training. Our method works by splitting a regular, non-poetic corpus into phrases, prepending control codes that describe the length and end rhyme of each phrase, and training a transformer language model on the augmented corpus. During inference, we build control codes for the desired meter and rhyme scheme, and condition our language model on them to generate formal verse poetry. Experiments in Spanish and Basque show that our approach is able to generate valid poems, which are often comparable in quality to those written by humans.
[ { "version": "v1", "created": "Tue, 24 May 2022 17:09:55 GMT" }, { "version": "v2", "created": "Fri, 28 Oct 2022 11:57:12 GMT" } ]
2022-10-31T00:00:00
[ [ "Ormazabal", "Aitor", "" ], [ "Artetxe", "Mikel", "" ], [ "Agirrezabal", "Manex", "" ], [ "Soroa", "Aitor", "" ], [ "Agirre", "Eneko", "" ] ]
new_dataset
0.998445
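The corpus augmentation above boils down to prefixing each phrase with control codes for its length and end rhyme. A minimal sketch; the code format and the suffix-based notion of rhyme are assumptions (the paper uses proper phonetic endings):

```python
def add_control_codes(phrase, rhyme_len=3):
    """Prefix a phrase with its length and end-rhyme control codes."""
    words = phrase.split()
    rhyme = words[-1][-rhyme_len:].lower()   # crude orthographic "rhyme"
    return f"<LEN:{len(words)}> <RHYME:{rhyme}> {phrase}"

print(add_control_codes("the quick brown fox jumps over the lazy dog"))
# <LEN:9> <RHYME:dog> the quick brown fox jumps over the lazy dog
```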
2206.14976
Nibraas Khan
Nibraas Khan, Nilanjan Sarkar
Semi-Supervised Generative Adversarial Network for Stress Detection Using Partially Labeled Physiological Data
12 pages
null
null
null
cs.LG cs.AI eess.SP
http://creativecommons.org/licenses/by/4.0/
Physiological measurement involves observing variables that contribute to the normative functioning of human systems and subsystems, directly or indirectly. The measurements can be used to detect the affective state of a person, with aims such as improving human-computer interaction. There are several methods of collecting physiological data, but wearable sensors are a common, non-invasive tool for accurate readings. However, valuable information is hard to extract from raw physiological data, especially for affective state detection. Machine learning techniques are used to detect the affective state of a person from labeled physiological data. A clear problem with using labeled data is creating accurate labels: an expert is needed to analyze recordings of participants and mark sections with different states, such as stress and calm. While expensive, this method delivers a complete dataset with labeled data that can be used in any number of supervised algorithms. An interesting question arises from the expensive labeling: how can we reduce the cost while maintaining high accuracy? Semi-supervised learning (SSL) is a potential solution to this problem. These algorithms allow machine learning models to be trained with only a small subset of labeled data (unlike unsupervised learning, which uses no labels). They provide a way of avoiding expensive labeling. This paper compares a fully supervised algorithm to an SSL algorithm on the public WESAD (Wearable Stress and Affect Detection) dataset for stress detection. This paper shows that semi-supervised algorithms are a viable method for inexpensive affective state detection systems with accurate results.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 01:58:33 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2022 19:47:23 GMT" } ]
2022-10-31T00:00:00
[ [ "Khan", "Nibraas", "" ], [ "Sarkar", "Nilanjan", "" ] ]
new_dataset
0.96566
2207.02506
Evangelos Bitsikas
Evangelos Bitsikas and Christina P\"opper
You have been warned: Abusing 5G's Warning and Emergency Systems
null
null
10.1145/3564625.3568000
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Public Warning System (PWS) is an essential part of cellular networks and a country's civil protection. Warnings can notify users of hazardous events (e.g., floods, earthquakes) and crucial national matters that require immediate attention. PWS attacks disseminating fake warnings or concealing precarious events can have a serious impact, causing fraud, panic, physical harm, or unrest to users within an affected area. In this work, we conduct the first comprehensive investigation of PWS security in 5G networks. We demonstrate five practical attacks that may impact the security of 5G-based Commercial Mobile Alert System (CMAS) as well as Earthquake and Tsunami Warning System (ETWS) alerts. In addition to identifying the vulnerabilities, we investigate two PWS spoofing and three PWS suppression attacks, with or without a man-in-the-middle (MitM) attacker. We discover that MitM-based attacks have more severe impact than their non-MitM counterparts. Our PWS barring attack is an effective technique to eliminate legitimate warning messages. We perform a rigorous analysis of the roaming aspect of the PWS, including its potentially secure version, and report the implications of our attacks on other emergency features (e.g., 911 SIP calls). We discuss possible countermeasures and note that eradicating the attacks necessitates a scrupulous reevaluation of the PWS design and a secure implementation.
[ { "version": "v1", "created": "Wed, 6 Jul 2022 08:15:12 GMT" }, { "version": "v2", "created": "Fri, 28 Oct 2022 14:29:19 GMT" } ]
2022-10-31T00:00:00
[ [ "Bitsikas", "Evangelos", "" ], [ "Pöpper", "Christina", "" ] ]
new_dataset
0.997224
2210.02318
C\'edric Picron
C\'edric Picron, Punarjay Chakravarty, Tinne Tuytelaars
FQDet: Fast-converging Query-based Detector
Accepted at NeurIPS VTTA workshop 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, two-stage Deformable DETR introduced the query-based two-stage head, a new type of two-stage head different from the region-based two-stage heads of classical detectors such as Faster R-CNN. In query-based two-stage heads, the second stage selects one feature per detection processed by a transformer, called the query, as opposed to pooling a rectangular grid of features processed by CNNs as in region-based detectors. In this work, we improve the query-based head by improving the prior of the cross-attention operation with anchors, significantly speeding up convergence while increasing performance. Additionally, we empirically show that by improving the cross-attention prior, the auxiliary losses and iterative bounding box mechanisms typically used by DETR-based detectors are no longer needed. By combining the best of both the classical and the DETR-based detectors, our FQDet head peaks at 45.4 AP on the 2017 COCO validation set when using a ResNet-50+TPN backbone, after training for only 12 epochs using the 1x schedule. We outperform other high-performing two-stage heads such as Cascade R-CNN, while using the same backbone and being computationally cheaper. Additionally, when using the large ResNeXt-101-DCN+TPN backbone and multi-scale testing, our FQDet head achieves 52.9 AP on the 2017 COCO test-dev set after only 12 epochs of training. Code is released at https://github.com/CedricPicron/FQDet .
[ { "version": "v1", "created": "Wed, 5 Oct 2022 15:19:34 GMT" }, { "version": "v2", "created": "Fri, 28 Oct 2022 08:05:18 GMT" } ]
2022-10-31T00:00:00
[ [ "Picron", "Cédric", "" ], [ "Chakravarty", "Punarjay", "" ], [ "Tuytelaars", "Tinne", "" ] ]
new_dataset
0.974571
2210.09482
Sri Hrushikesh Varma Bhupathiraju
Yulong Cao, S. Hrushikesh Bhupathiraju, Pirouz Naghavi, Takeshi Sugawara, Z. Morley Mao, Sara Rampazzi
You Can't See Me: Physical Removal Attacks on LiDAR-based Autonomous Vehicles Driving Frameworks
Accepted to the 32nd USENIX Security Symposium (2023)
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Autonomous Vehicles (AVs) increasingly use LiDAR-based object detection systems to perceive other vehicles and pedestrians on the road. While existing attacks on LiDAR-based autonomous driving architectures focus on lowering the confidence score of AV object detection models to induce obstacle misdetection, our research discovers how to leverage laser-based spoofing techniques to selectively remove the LiDAR point cloud data of genuine obstacles at the sensor level, before it is used as input to AV perception. The ablation of this critical LiDAR information causes autonomous driving obstacle detectors to fail to identify and locate obstacles and, consequently, induces AVs to make dangerous automatic driving decisions. In this paper, we present a method, invisible to the human eye, that hides objects and deceives autonomous vehicles' obstacle detectors by exploiting inherent automatic transformation and filtering processes of LiDAR sensor data integrated with autonomous driving frameworks. We call such attacks Physical Removal Attacks (PRA); we demonstrate their effectiveness against three popular AV obstacle detectors (Apollo, Autoware, PointPillars) and achieve a 45° attack capability. We evaluate the attack impact on three fusion models (Frustum-ConvNet, AVOD, and Integrated-Semantic Level Fusion) and the consequences on the driving decision using LGSVL, an industry-grade simulator. In our moving vehicle scenarios, we achieve a 92.7% success rate removing 90% of a target obstacle's cloud points. Finally, we demonstrate the attack's success against two popular defenses against spoofing and object hiding attacks, and discuss two enhanced defense strategies to mitigate our attack.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 00:02:00 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2022 18:43:03 GMT" } ]
2022-10-31T00:00:00
[ [ "Cao", "Yulong", "" ], [ "Bhupathiraju", "S. Hrushikesh", "" ], [ "Naghavi", "Pirouz", "" ], [ "Sugawara", "Takeshi", "" ], [ "Mao", "Z. Morley", "" ], [ "Rampazzi", "Sara", "" ] ]
new_dataset
0.982191
2210.12756
Andreas Georgis
Andreas Georgis, Panagiotis Mermigkas, Petros Maragos
VP-SLAM: A Monocular Real-time Visual SLAM with Points, Lines and Vanishing Points
null
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
Traditional monocular Visual Simultaneous Localization and Mapping (vSLAM) systems can be divided into three categories: those that use features, those that rely on the image itself, and hybrid models. In the case of feature-based methods, new research has evolved to incorporate more information from the environment using geometric primitives beyond points, such as lines and planes. This is because many man-made environments, characterized as Manhattan worlds, are dominated by geometric primitives such as lines and planes. Exploiting these structures can lead to algorithms capable of optimizing the trajectory of a Visual SLAM system and of helping construct a richer map. Thus, we present a real-time monocular Visual SLAM system that incorporates real-time methods for line and vanishing point (VP) extraction, as well as two strategies that exploit vanishing points to estimate the robot's translation and improve its rotation. In particular, we build on ORB-SLAM2, considered the current state-of-the-art solution in terms of both accuracy and efficiency, and extend its formulation to handle lines and VPs in two strategies: the first optimizes the rotation, and the second refines the translation given the known rotation. First, we extract VPs using a real-time method and use them in a global rotation optimization strategy. Second, we present a translation estimation method that takes advantage of the last-stage rotation optimization to model a linear system. Finally, we evaluate our system on the TUM RGB-D benchmark and demonstrate that it achieves state-of-the-art results, runs in real time, and keeps performance close to the original ORB-SLAM2 system.
[ { "version": "v1", "created": "Sun, 23 Oct 2022 15:54:26 GMT" }, { "version": "v2", "created": "Fri, 28 Oct 2022 10:29:20 GMT" } ]
2022-10-31T00:00:00
[ [ "Georgis", "Andreas", "" ], [ "Mermigkas", "Panagiotis", "" ], [ "Maragos", "Petros", "" ] ]
new_dataset
0.993192
2210.15306
Rodrigo Diaz
Rodrigo Diaz, Ben Hayes, Charalampos Saitis, Gy\"orgy Fazekas, Mark Sandler
Rigid-Body Sound Synthesis with Differentiable Modal Resonators
5 pages
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
Physical models of rigid bodies are used for sound synthesis in applications from virtual environments to music production. Traditional methods such as modal synthesis often rely on computationally expensive numerical solvers, while recent deep learning approaches are limited by post-processing of their results. In this work we present a novel end-to-end framework for training a deep neural network to generate modal resonators for a given 2D shape and material, using a bank of differentiable IIR filters. We demonstrate our method on a dataset of synthetic objects, but train our model using an audio-domain objective, paving the way for physically-informed synthesisers to be learned directly from recordings of real-world objects.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 10:34:38 GMT" }, { "version": "v2", "created": "Fri, 28 Oct 2022 11:47:41 GMT" } ]
2022-10-31T00:00:00
[ [ "Diaz", "Rodrigo", "" ], [ "Hayes", "Ben", "" ], [ "Saitis", "Charalampos", "" ], [ "Fazekas", "György", "" ], [ "Sandler", "Mark", "" ] ]
new_dataset
0.999783
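Modal synthesis, the tradition this paper builds on, renders an object's impulse response as a bank of damped two-pole IIR resonators. A minimal non-differentiable sketch with SciPy; all mode frequencies, decays, and amplitudes are made up (the paper's point is to learn such parameters end-to-end with differentiable filters):

```python
import numpy as np
from scipy.signal import lfilter

def modal_bank(freqs, decays, amps, fs=16000, dur=1.0):
    """Sum of damped two-pole resonators excited by an impulse."""
    n = int(fs * dur)
    impulse = np.zeros(n)
    impulse[0] = 1.0
    out = np.zeros(n)
    for f, d, a in zip(freqs, decays, amps):
        r = np.exp(-d / fs)                  # pole radius from decay rate
        theta = 2 * np.pi * f / fs           # pole angle from mode frequency
        out += lfilter([a], [1.0, -2 * r * np.cos(theta), r * r], impulse)
    return out

y = modal_bank([220.0, 520.0, 870.0], [6.0, 9.0, 14.0], [1.0, 0.6, 0.4])
print(y.shape)  # one second of a toy "rigid body" impulse response
```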
2210.15436
Giulia Cavicchioni
Giulia Cavicchioni, Alessio Meneghetti
The weight distribution of codes over finite chain rings
null
null
null
null
cs.IT math.AC math.IT
http://creativecommons.org/licenses/by/4.0/
In this work, we determine new linear equations for the weight distribution of linear codes over finite chain rings. The identities are determined by counting the number of some special submatrices of the parity-check matrix of the code. Thanks to these relations we are able to compute the full weight distribution of codes with small Singleton defects, such as MDS, MDR and AMDR codes.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 13:58:13 GMT" }, { "version": "v2", "created": "Fri, 28 Oct 2022 07:21:52 GMT" } ]
2022-10-31T00:00:00
[ [ "Cavicchioni", "Giulia", "" ], [ "Meneghetti", "Alessio", "" ] ]
new_dataset
0.999205
2210.15696
Everlyn Chimoto
Everlyn Asiko Chimoto and Bruce A. Bassett
COMET-QE and Active Learning for Low-Resource Machine Translation
Accepted to Findings of EMNLP 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Active learning aims to deliver maximum benefit when resources are scarce. We use COMET-QE, a reference-free evaluation metric, to select sentences for low-resource neural machine translation. Using Swahili, Kinyarwanda and Spanish for our experiments, we show that COMET-QE significantly outperforms two variants of Round Trip Translation Likelihood (RTTL) and random sentence selection by up to 5 BLEU points for 20k sentences selected by Active Learning on a 30k baseline. This suggests that COMET-QE is a powerful tool for sentence selection in the very low-resource limit.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 18:00:41 GMT" } ]
2022-10-31T00:00:00
[ [ "Chimoto", "Everlyn Asiko", "" ], [ "Bassett", "Bruce A.", "" ] ]
new_dataset
0.989994
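QE-based active learning reduces to scoring candidate translations and spending the annotation budget where the metric is least happy. A schematic sketch in which `fake_qe` stands in for a COMET-QE call; selecting the lowest-scoring sentences first is our assumption about the selection direction:

```python
def select_for_annotation(sources, translations, qe_score, budget=2):
    """Rank sentence pairs by QE score, lowest (worst) first, and return
    the source sentences worth sending for human translation."""
    ranked = sorted(zip(sources, translations),
                    key=lambda pair: qe_score(pair[0], pair[1]))
    return [src for src, _ in ranked[:budget]]

fake_qe = lambda src, hyp: len(hyp)          # toy stand-in for COMET-QE
pool = ["habari", "dunia", "karibu"]
hyps = ["hello there", "w", "welcome"]
print(select_for_annotation(pool, hyps, fake_qe))  # ['dunia', 'karibu']
```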
2210.15722
Sachin Chhabra
Sachin Chhabra, Prabal Bijoy Dutta, Hemanth Venkateswara and Baoxin Li
PatchRot: A Self-Supervised Technique for Training Vision Transformers
NeurIPS Workshop on Vision Transformers: Theory and Applications (VTTA)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision transformers require a huge amount of labeled data to outperform convolutional neural networks. However, labeling a huge dataset is a very expensive process. Self-supervised learning techniques alleviate this problem by learning features, similar to supervised learning, in an unsupervised way. In this paper, we propose PatchRot, a self-supervised technique crafted for vision transformers. PatchRot rotates images and image patches and trains the network to predict the rotation angles. The network learns to extract both global and local features from an image. Our extensive experiments on different datasets show that PatchRot training learns rich features which outperform supervised learning and the compared baseline.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 18:55:12 GMT" } ]
2022-10-31T00:00:00
[ [ "Chhabra", "Sachin", "" ], [ "Dutta", "Prabal Bijoy", "" ], [ "Venkateswara", "Hemanth", "" ], [ "Li", "Baoxin", "" ] ]
new_dataset
0.999661
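The PatchRot pretext task rotates image patches and asks the network to predict each rotation. A minimal PyTorch sketch of building one training example; the patch size and the exact label recipe are assumptions (the paper also rotates whole images):

```python
import torch

def patchrot_example(img, patch=8):
    """Rotate each patch by a random multiple of 90 degrees; return the
    rotated image and one rotation label (0-3) per patch."""
    c, h, w = img.shape
    out, labels = img.clone(), []
    for i in range(0, h, patch):
        for j in range(0, w, patch):
            k = int(torch.randint(0, 4, (1,)))
            out[:, i:i + patch, j:j + patch] = torch.rot90(
                img[:, i:i + patch, j:j + patch], k, dims=(1, 2))
            labels.append(k)
    return out, torch.tensor(labels)

rotated, labels = patchrot_example(torch.randn(3, 32, 32))
print(rotated.shape, labels.shape)  # (3, 32, 32) and 16 per-patch labels
```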
2210.15769
Giacomo Fiorentini
Giacomo Fiorentini, Itir Onal Ertugrul, Albert Ali Salah
Fully-attentive and interpretable: vision and video vision transformers for pain detection
9 pages (12 with references), 10 figures, VTTA2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Pain is a serious and costly issue globally, but to be treated, it must first be detected. Vision transformers are a top-performing architecture in computer vision, with little research on their use for pain detection. In this paper, we propose the first fully-attentive automated pain detection pipeline that achieves state-of-the-art performance on binary pain detection from facial expressions. The model is trained on the UNBC-McMaster dataset, after faces are 3D-registered and rotated to the canonical frontal view. In our experiments we identify important areas of the hyperparameter space and their interaction with vision and video vision transformers, obtaining 3 noteworthy models. We analyse the attention maps of one of our models, finding reasonable interpretations for its predictions. We also evaluate Mixup, an augmentation technique, and Sharpness-Aware Minimization, an optimizer, with no success. Our presented models, ViT-1 (F1 score 0.55 ± 0.15), ViViT-1 (F1 score 0.55 ± 0.13), and ViViT-2 (F1 score 0.49 ± 0.04), all outperform earlier works, showing the potential of vision transformers for pain detection. Code is available at https://github.com/IPDTFE/ViT-McMaster
[ { "version": "v1", "created": "Thu, 27 Oct 2022 21:01:40 GMT" } ]
2022-10-31T00:00:00
[ [ "Fiorentini", "Giacomo", "" ], [ "Ertugrul", "Itir Onal", "" ], [ "Salah", "Albert Ali", "" ] ]
new_dataset
0.99907
2210.15790
Lin Zhao
Heng Huang, Lin Zhao, Xintao Hu, Haixing Dai, Lu Zhang, Dajiang Zhu, Tianming Liu
BI-AVAN: Brain-inspired Adversarial Visual Attention Network
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual attention is a fundamental mechanism in the human brain, and it inspires the design of attention mechanisms in deep neural networks. However, most visual attention studies have adopted eye-tracking data rather than direct measurement of brain activity to characterize human visual attention. In addition, the adversarial relationship between attention-related objects and the attention-neglected background in the human visual system has not been fully exploited. To bridge these gaps, we propose a novel brain-inspired adversarial visual attention network (BI-AVAN) to characterize human visual attention directly from functional brain activity. Our BI-AVAN model imitates the biased competition process between attention-related/neglected objects to identify and locate, in an unsupervised manner, the visual objects in a movie frame that the human brain focuses on. We use independent eye-tracking data as ground truth for validation, and experimental results show that our model achieves robust and promising results when inferring meaningful human visual attention and mapping the relationship between brain activities and visual stimuli. Our BI-AVAN model contributes to the emerging field of leveraging the brain's functional architecture to inspire and guide model design in artificial intelligence (AI), e.g., deep neural networks.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 22:20:36 GMT" } ]
2022-10-31T00:00:00
[ [ "Huang", "Heng", "" ], [ "Zhao", "Lin", "" ], [ "Hu", "Xintao", "" ], [ "Dai", "Haixing", "" ], [ "Zhang", "Lu", "" ], [ "Zhu", "Dajiang", "" ], [ "Liu", "Tianming", "" ] ]
new_dataset
0.980615
2210.15791
Shaunak Mehta
Shaunak A. Mehta, Yeunhee Kim, Joshua Hoegerman, Michael D. Bartlett and Dylan P. Losey
RISO: Combining Rigid Grippers with Soft Switchable Adhesives
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robot arms that assist humans should be able to pick up, move, and release everyday objects. Today's assistive robot arms use rigid grippers to pinch items between fingers; while these rigid grippers are well suited for large and heavy objects, they often struggle to grasp small, numerous, or delicate items (such as foods). Soft grippers cover the opposite end of the spectrum; these grippers use adhesives or change shape to wrap around small and irregular items, but cannot exert the large forces needed to manipulate heavy objects. In this paper we introduce RIgid-SOft (RISO) grippers that combine switchable soft adhesives with standard rigid mechanisms to enable a diverse range of robotic grasping. We develop RISO grippers by leveraging a novel class of soft materials that change adhesion force in real-time through pneumatically controlled shape and rigidity tuning. By mounting these soft adhesives on the bottom of rigid fingers, we create a gripper that can interact with objects using either purely rigid grasps (pinching the object) or purely soft grasps (adhering to the object). This increased capability requires additional decision making, and we therefore formulate a shared control approach that partially automates the motion of the robot arm. In practice, this controller aligns the RISO gripper while inferring which object the human wants to grasp and how the human wants to grasp that item. Our user study demonstrates that RISO grippers can pick up, move, and release household items from existing datasets, and that the system performs grasps more successfully and efficiently when sharing control between the human and robot. See videos here: https://youtu.be/5uLUkBYcnwg
[ { "version": "v1", "created": "Thu, 27 Oct 2022 22:26:15 GMT" } ]
2022-10-31T00:00:00
[ [ "Mehta", "Shaunak A.", "" ], [ "Kim", "Yeunhee", "" ], [ "Hoegerman", "Joshua", "" ], [ "Bartlett", "Michael D.", "" ], [ "Losey", "Dylan P.", "" ] ]
new_dataset
0.994652
2210.15852
Todd Murphey
Joel Meyer, Allison Pinosky, Thomas Trzpit, Ed Colgate, Todd D. Murphey
A Game Benchmark for Real-Time Human-Swarm Control
8 pages, IEEE Conference on Automation Science and Engineering (CASE), 2022
null
null
null
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a game benchmark for testing human-swarm control algorithms and interfaces in a real-time, high-cadence scenario. Our benchmark consists of a swarm vs. swarm game in a virtual ROS environment in which the goal of the game is to capture all agents from the opposing swarm; the game's high-cadence is a result of the capture rules, which cause agent team sizes to fluctuate rapidly. These rules require players to consider both the number of agents currently at their disposal and the behavior of their opponent's swarm when they plan actions. We demonstrate our game benchmark with a default human-swarm control system that enables a player to interact with their swarm through a high-level touchscreen interface. The touchscreen interface transforms player gestures into swarm control commands via a low-level decentralized ergodic control framework. We compare our default human-swarm control system to a flocking-based control system, and discuss traits that are crucial for swarm control algorithms and interfaces operating in real-time, high-cadence scenarios like our game benchmark. Our game benchmark code is available on Github; more information can be found at https://sites.google.com/view/swarm-game-benchmark.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 02:47:14 GMT" } ]
2022-10-31T00:00:00
[ [ "Meyer", "Joel", "" ], [ "Pinosky", "Allison", "" ], [ "Trzpit", "Thomas", "" ], [ "Colgate", "Ed", "" ], [ "Murphey", "Todd D.", "" ] ]
new_dataset
0.999786
2210.15907
Devansh Jalota
Devansh Jalota and Jessica Lazarus and Alexandre Bayen and Marco Pavone
Credit-Based Congestion Pricing: Equilibrium Properties and Optimal Scheme Design
null
null
null
null
cs.GT cs.MA math.OC
http://creativecommons.org/licenses/by/4.0/
Credit-based congestion pricing (CBCP) has emerged as a mechanism to alleviate the social inequity concerns of road congestion pricing - a promising strategy for traffic congestion mitigation - by providing low-income users with travel credits to offset some of their toll payments. While CBCP offers immense potential for addressing inequity issues that hamper the practical viability of congestion pricing, the deployment of CBCP in practice is nascent, and the potential efficacy and optimal design of CBCP schemes have yet to be formalized. In this work, we study the design of CBCP schemes to achieve particular societal objectives and investigate their influence on traffic patterns when routing heterogeneous users with different values of time (VoTs) in a multi-lane highway with an express lane. We introduce a new non-atomic congestion game model of a mixed-economy, wherein eligible users receive travel credits while the remaining ineligible users pay out-of-pocket to use the express lane. In this setting, we investigate the effect of CBCP schemes on traffic patterns by characterizing the properties (i.e., existence, comparative statics) of the corresponding Nash equilibria and, in the setting when eligible users have time-invariant VoTs, develop a convex program to compute these equilibria. We further present a bi-level optimization framework to design optimal CBCP schemes to achieve a central planner's societal objectives. Finally, we conduct numerical experiments based on a case study of the San Mateo 101 Express Lanes Project, one of the first North American CBCP pilots. Our results demonstrate the potential of CBCP to enable low-income travelers to avail of the travel time savings provided by congestion pricing on express lanes while having comparatively low impacts on the travel costs of other road users.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 05:29:10 GMT" } ]
2022-10-31T00:00:00
[ [ "Jalota", "Devansh", "" ], [ "Lazarus", "Jessica", "" ], [ "Bayen", "Alexandre", "" ], [ "Pavone", "Marco", "" ] ]
new_dataset
0.998347
2210.15913
Zhaowei Chen
Zhaowei Chen, Peng Li, Zeyong Wei, Honghua Chen, Haoran Xie, Mingqiang Wei, Fu Lee Wang
GeoGCN: Geometric Dual-domain Graph Convolution Network for Point Cloud Denoising
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We propose GeoGCN, a novel geometric dual-domain graph convolution network for point cloud denoising (PCD). Beyond the traditional wisdom of PCD, to fully exploit the geometric information of point clouds, we define two kinds of surface normals: one is called the Real Normal (RN), and the other the Virtual Normal (VN). RN preserves the local details of noisy point clouds, while VN avoids global shape shrinkage during denoising. GeoGCN is a new PCD paradigm that 1) first regresses point positions with a spatial-based GCN with the help of VNs, 2) then estimates initial RNs by performing Principal Component Analysis on the regressed points, and 3) finally regresses fine RNs with a normal-based GCN. Unlike existing PCD methods, GeoGCN not only exploits two kinds of geometric expertise (i.e., RN and VN) but also benefits from training data. Experiments validate that GeoGCN outperforms SOTAs in terms of both noise-robustness and local-and-global feature preservation.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 05:48:57 GMT" } ]
2022-10-31T00:00:00
[ [ "Chen", "Zhaowei", "" ], [ "Li", "Peng", "" ], [ "Wei", "Zeyong", "" ], [ "Chen", "Honghua", "" ], [ "Xie", "Haoran", "" ], [ "Wei", "Mingqiang", "" ], [ "Wang", "Fu Lee", "" ] ]
new_dataset
0.97894
2210.15933
Baian Chen
Baian Chen, Lipeng Gu, Xin Zhuang, Yiyang Shen, Weiming Wang, Mingqiang Wei
PSFormer: Point Transformer for 3D Salient Object Detection
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We propose PSFormer, an effective point transformer model for 3D salient object detection. PSFormer is an encoder-decoder network that takes full advantage of transformers to model the contextual information in both multi-scale point- and scene-wise manners. In the encoder, we develop a Point Context Transformer (PCT) module to capture region contextual features at the point level; PCT contains two different transformers to excavate the relationship among points. In the decoder, we develop a Scene Context Transformer (SCT) module to learn context representations at the scene level; SCT contains both Upsampling-and-Transformer blocks and Multi-context Aggregation units to integrate the global semantic and multi-level features from the encoder into the global scene context. Experiments show clear improvements of PSFormer over its competitors and validate that PSFormer is more robust to challenging cases such as small objects, multiple objects, and objects with complex structures.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 06:34:28 GMT" } ]
2022-10-31T00:00:00
[ [ "Chen", "Baian", "" ], [ "Gu", "Lipeng", "" ], [ "Zhuang", "Xin", "" ], [ "Shen", "Yiyang", "" ], [ "Wang", "Weiming", "" ], [ "Wei", "Mingqiang", "" ] ]
new_dataset
0.997081
2210.15937
Atsushi Ando
Atsushi Ando, Ryo Masumura, Akihiko Takashima, Satoshi Suzuki, Naoki Makishima, Keita Suzuki, Takafumi Moriya, Takanori Ashihara, Hiroshi Sato
On the Use of Modality-Specific Large-Scale Pre-Trained Encoders for Multimodal Sentiment Analysis
Accepted to SLT 2022
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper investigates the effectiveness and implementation of modality-specific large-scale pre-trained encoders for multimodal sentiment analysis (MSA). Although the effectiveness of pre-trained encoders in various fields has been reported, conventional MSA methods employ them only for the linguistic modality, and their application to the other modalities has not been investigated. This paper compares the features yielded by large-scale pre-trained encoders with conventional heuristic features. For each modality, one of the largest publicly available pre-trained encoders is used: CLIP-ViT, WavLM, and BERT for the visual, acoustic, and linguistic modalities, respectively. Experiments on two datasets reveal that methods with domain-specific pre-trained encoders attain better performance than those with conventional features in both unimodal and multimodal scenarios. We also find it better to use the outputs of the intermediate layers of the encoders than those of the output layer. The codes are available at https://github.com/ando-hub/MSA_Pretrain.
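The finding that intermediate-layer outputs beat the output layer is easy to probe with the Hugging Face transformers API for the linguistic encoder. The sketch below is an illustration of the general recipe (model choice, layer index, and mean pooling are assumptions), not the paper's released code:

```python
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
model = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)
model.eval()

inputs = tokenizer("this movie was surprisingly good", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# outputs.hidden_states is a tuple of (num_layers + 1) tensors of shape
# (batch, seq_len, hidden): index 0 is the embedding layer, -1 the output layer.
layer = 8                                            # a hypothetical intermediate layer
feature = outputs.hidden_states[layer].mean(dim=1)   # mean-pool over tokens
print(feature.shape)                                 # torch.Size([1, 768])
```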
[ { "version": "v1", "created": "Fri, 28 Oct 2022 06:48:35 GMT" } ]
2022-10-31T00:00:00
[ [ "Ando", "Atsushi", "" ], [ "Masumura", "Ryo", "" ], [ "Takashima", "Akihiko", "" ], [ "Suzuki", "Satoshi", "" ], [ "Makishima", "Naoki", "" ], [ "Suzuki", "Keita", "" ], [ "Moriya", "Takafumi", "" ], [ "Ashihara", "Takanori", "" ], [ "Sato", "Hiroshi", "" ] ]
new_dataset
0.969997
2210.15939
Shaoshan Liu
Tianze Wu, Shaoshan Liu, Bo Yu, Sa Wang, Yungang Bao, Weisong Shi
INTERNEURON: A Middleware with Multi-Network Communication Reliability for Infrastructure Vehicle Cooperative Autonomous Driving
null
null
null
null
cs.RO cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Infrastructure-Vehicle Cooperative Autonomous Driving (IVCAD) is a new paradigm of autonomous driving, which relies on the cooperation between intelligent roads and autonomous vehicles. This paradigm has been shown to be safer and more efficient compared to the on-vehicle-only autonomous driving paradigm. Our real-world deployment data indicates that the effectiveness of IVCAD is constrained by the reliability and performance of commercial communication networks. This paper targets this exact problem, and proposes INTERNEURON, a middleware to achieve high communication reliability between intelligent roads and autonomous vehicles, in the context of IVCAD. Specifically, INTERNEURON dynamically matches IVCAD applications and the underlying communication technologies based on varying communication performance and quality needs. Evaluation results confirm that INTERNEURON reduces deadline violations by more than 95%, significantly improving the reliability of IVCAD systems.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 06:56:36 GMT" } ]
2022-10-31T00:00:00
[ [ "Wu", "Tianze", "" ], [ "Liu", "Shaoshan", "" ], [ "Yu", "Bo", "" ], [ "Wang", "Sa", "" ], [ "Bao", "Yungang", "" ], [ "Shi", "Weisong", "" ] ]
new_dataset
0.999599
2210.15954
Jonathan Zheng
Jonathan Zheng, Ashutosh Baheti, Tarek Naous, Wei Xu, and Alan Ritter
Stanceosaurus: Classifying Stance Towards Multilingual Misinformation
Accepted to EMNLP 2022 main conference
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present Stanceosaurus, a new corpus of 28,033 tweets in English, Hindi, and Arabic annotated with stance towards 251 misinformation claims. As far as we are aware, it is the largest corpus annotated with stance towards misinformation claims. The claims in Stanceosaurus originate from 15 fact-checking sources that cover diverse geographical regions and cultures. Unlike existing stance datasets, we introduce a more fine-grained 5-class labeling strategy with additional subcategories to distinguish implicit stance. Pre-trained transformer-based stance classifiers that are fine-tuned on our corpus show good generalization on unseen claims and regional claims from countries outside the training data. Cross-lingual experiments demonstrate Stanceosaurus' capability of training multilingual models, achieving 53.1 F1 on Hindi and 50.4 F1 on Arabic without any target-language fine-tuning. Finally, we show how a domain adaptation method can be used to improve performance on Stanceosaurus using additional RumourEval-2019 data. We make Stanceosaurus publicly available to the research community and hope it will encourage further work on misinformation identification across languages and cultures.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 07:18:32 GMT" } ]
2022-10-31T00:00:00
[ [ "Zheng", "Jonathan", "" ], [ "Baheti", "Ashutosh", "" ], [ "Naous", "Tarek", "" ], [ "Xu", "Wei", "" ], [ "Ritter", "Alan", "" ] ]
new_dataset
0.994884
2210.15972
Yan Zhang
Yan Zhang, Xiyuan Gao, Qingyan Duan, Jiaxu Leng, Xiao Pu, Xinbo Gao
Contextual Learning in Fourier Complex Field for VHR Remote Sensing Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Very high-resolution (VHR) remote sensing (RS) image classification is the fundamental task for RS image analysis and understanding. Recently, transformer-based models have demonstrated outstanding potential for learning high-order contextual relationships from natural images of general resolution (224x224 pixels) and have achieved remarkable results on general image classification tasks. However, the complexity of the naive transformer grows quadratically with image size, which prevents transformer-based models from being applied to VHR RS image (500x500 pixels) classification and other computationally expensive downstream tasks. To this end, we propose to decompose the expensive self-attention (SA) into real and imaginary parts via the discrete Fourier transform (DFT) and therefore propose an efficient complex self-attention (CSA) mechanism. Benefiting from the conjugate symmetry of the DFT, CSA is capable of modeling high-order contextual information with less than half the computation of naive SA. To overcome gradient explosion in the Fourier complex field, we replace the Softmax function with the carefully designed Logmax function to normalize the attention map of CSA and stabilize gradient propagation. By stacking various layers of CSA blocks, we propose the Fourier Complex Transformer (FCT) model to learn global contextual information from VHR aerial images in a hierarchical manner. Extensive experiments conducted on commonly used RS classification data sets demonstrate the effectiveness and efficiency of FCT, especially on very high-resolution RS images.
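The computational saving claimed for CSA rests on a textbook fact: the DFT of a real-valued signal is conjugate-symmetric, so only about half of its coefficients are independent. A minimal NumPy demonstration of that property (the Logmax function and CSA itself are specific to the paper and not reproduced here):

```python
import numpy as np

x = np.random.randn(8)        # a real-valued sequence
X = np.fft.fft(x)             # full complex spectrum of length 8

# Conjugate symmetry: X[k] == conj(X[N - k]) for real input, so rfft
# stores only the N//2 + 1 independent coefficients.
print(np.allclose(X[1:], np.conj(X[:0:-1])))   # True
print(np.fft.rfft(x).shape)                    # (5,) == 8//2 + 1
```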
[ { "version": "v1", "created": "Fri, 28 Oct 2022 08:13:33 GMT" } ]
2022-10-31T00:00:00
[ [ "Zhang", "Yan", "" ], [ "Gao", "Xiyuan", "" ], [ "Duan", "Qingyan", "" ], [ "Leng", "Jiaxu", "" ], [ "Pu", "Xiao", "" ], [ "Gao", "Xinbo", "" ] ]
new_dataset
0.970514
2210.15988
Leyi Zhao Ennard
Leyi Zhao, Yi Li
Spectrograms Are Sequences of Patches
null
null
null
null
cs.SD cs.AI cs.MM eess.AS
http://creativecommons.org/licenses/by/4.0/
Self-supervised pre-training models have been used successfully in several machine learning domains. However, only a small amount of this work relates to music. In our work, we treat a spectrogram of music as a series of patches and design a self-supervised model that captures the features of these sequential patches: Patchifier, which makes good use of self-supervised learning methods from both the NLP and CV domains. We do not use labeled data for the pre-training process, only a subset of the MTAT dataset containing 16k music clips. After pre-training, we apply the model to several downstream tasks. Our model achieves results comparable to other audio representation models. Meanwhile, our work demonstrates that it makes sense to consider audio as a series of patch segments.
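The "spectrogram as a sequence of patches" view mirrors ViT-style patch embedding. A minimal NumPy sketch of the patchification step (patch size and spectrogram shape below are assumptions, not the paper's settings):

```python
import numpy as np

def patchify(spec, patch=(16, 16)):
    """Split a (freq, time) spectrogram into a sequence of flattened patches,
    ordered left-to-right, top-to-bottom."""
    F, T = spec.shape
    pf, pt = patch
    assert F % pf == 0 and T % pt == 0, "pad the spectrogram first"
    return (spec.reshape(F // pf, pf, T // pt, pt)
                .transpose(0, 2, 1, 3)          # (nF, nT, pf, pt)
                .reshape(-1, pf * pt))          # (num_patches, patch_dim)

spec = np.random.randn(128, 256)   # e.g. 128 mel bins x 256 frames
print(patchify(spec).shape)        # (128, 256): 8*16 patches of dimension 256
```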
[ { "version": "v1", "created": "Fri, 28 Oct 2022 08:39:36 GMT" } ]
2022-10-31T00:00:00
[ [ "Zhao", "Leyi", "" ], [ "Li", "Yi", "" ] ]
new_dataset
0.96104
2210.15999
Ayman Beghdadi
Ayman Beghdadi, Malik Mallem, Lotfi Beji
Benchmarking performance of object detection under image distortions in an uncontrolled environment
null
null
null
null
cs.CV cs.DB
http://creativecommons.org/licenses/by/4.0/
The robustness of object detection algorithms plays a prominent role in real-world applications, especially in uncontrolled environments, due to distortions during image acquisition. It has been proven that the performance of object detection methods suffers from in-capture distortions. In this study, we present a performance evaluation framework for state-of-the-art object detection methods using a dedicated dataset containing images with various distortions at different levels of severity. Furthermore, we propose an original strategy of image distortion generation applied to the MS-COCO dataset that combines local and global distortions to reach much better performance. We show that training on the proposed dataset improves the robustness of object detection by 31.5%. Finally, we provide a custom dataset of natural images distorted from MS-COCO to perform a more reliable evaluation of the robustness against common distortions. The database and the source code for generating the different distortions are made publicly available.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 09:06:52 GMT" } ]
2022-10-31T00:00:00
[ [ "Beghdadi", "Ayman", "" ], [ "Mallem", "Malik", "" ], [ "Beji", "Lotfi", "" ] ]
new_dataset
0.99229
2210.16018
Ekaterina Trofimova
Anastasia Drozdova, Polina Guseva, Ekaterina Trofimova, Anna Scherbakova, Andrey Ustyuzhanin
Code4ML: a Large-scale Dataset of annotated Machine Learning Code
Under review
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
Program code as a data source is gaining popularity in the data science community. Possible applications for models trained on such assets range from classification for data dimensionality reduction to automatic code generation. However, without annotations, the number of methods that can be applied is somewhat limited. To address the lack of annotated datasets, we present the Code4ML corpus. It contains code snippets, task summaries, competition and dataset descriptions publicly available from Kaggle - the leading platform for hosting data science competitions. The corpus consists of ~2.5 million snippets of ML code collected from ~100 thousand Jupyter notebooks. A representative fraction of the snippets is annotated by human assessors through a user-friendly interface specially designed for that purpose. The Code4ML dataset can potentially help address a number of software engineering and data science challenges through a data-driven approach. For example, it can be helpful for semantic code classification, code auto-completion, and code generation for an ML task specified in natural language.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 09:44:19 GMT" } ]
2022-10-31T00:00:00
[ [ "Drozdova", "Anastasia", "" ], [ "Guseva", "Polina", "" ], [ "Trofimova", "Ekaterina", "" ], [ "Scherbakova", "Anna", "" ], [ "Ustyuzhanin", "Andrey", "" ] ]
new_dataset
0.999811
2210.16029
Shaoguang Mao
Zhiyi Wang, Shaoguang Mao, Wenshan Wu, Yan Xia
Assessing Phrase Break of ESL speech with Pre-trained Language Models
Under Review, ICASSP 2023
null
null
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
This work introduces an approach to assessing phrase breaks in ESL learners' speech with pre-trained language models (PLMs). Unlike traditional methods, this approach converts speech to token sequences and then leverages the power of PLMs. There are two sub-tasks: an overall assessment of phrase breaks for a speech clip, and a fine-grained assessment of every possible phrase break position. Speech input is first force-aligned with text, then pre-processed into a token sequence comprising words and associated phrase break information. The token sequence is then fed into the pre-training and fine-tuning pipeline. In pre-training, a replaced-break-token detection module is trained with token data in which each token has a certain chance of being randomly replaced. In fine-tuning, overall and fine-grained scoring are optimized with text classification and sequence labeling pipelines, respectively. With the introduction of PLMs, the dependence on labeled training data has been greatly reduced, and performance has improved.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 10:06:06 GMT" } ]
2022-10-31T00:00:00
[ [ "Wang", "Zhiyi", "" ], [ "Mao", "Shaoguang", "" ], [ "Wu", "Wenshan", "" ], [ "Xia", "Yan", "" ] ]
new_dataset
0.960751
2210.16063
Roee Mordechai Francos
Roee M. Francos and Alfred M. Bruckstein
Defense Against Smart Invaders with Swarms of Sweeping Agents
18 pages, 21 figures
null
null
null
cs.MA
http://creativecommons.org/licenses/by/4.0/
The goal of this research is to devise guaranteed defense policies that allow a team of defending agents, equipped with identical line sensors, to protect a given region from the entrance of smart mobile invaders by detecting them. By designing cooperative defense strategies that ensure all invaders are detected, conditions on the defenders' speed are derived. Successful accomplishment of the defense task implies that invaders with a known limit on their speed cannot slip past the defenders and enter the guarded region undetected. The desired outcome of the defense protocols is to defend the area and, additionally, to expand it as much as possible. Expansion becomes possible if the defenders' speed exceeds a critical speed that is necessary to merely defend the initial region. We present results on the total search time, critical speeds, and maximal expansion possible for two types of novel pincer-movement defense processes, circular and spiral, for any even number of defenders. The proposed spiral process allows invaders to be detected at nearly the lowest theoretically optimal speed, and if this speed is exceeded, it also allows the protected region to be expanded almost to the maximal area.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 11:18:08 GMT" } ]
2022-10-31T00:00:00
[ [ "Francos", "Roee M.", "" ], [ "Bruckstein", "Alfred M.", "" ] ]
new_dataset
0.988101
2210.16083
JunKyu Lee
JunKyu Lee, Blesson Varghese, Hans Vandierendonck
ROMA: Run-Time Object Detection To Maximize Real-Time Accuracy
Accepted at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper analyzes the effects of dynamically varying video contents and detection latency on the real-time detection accuracy of a detector and proposes a new run-time accuracy variation model, ROMA, based on the findings from the analysis. ROMA is designed to select an optimal detector out of a set of detectors in real time, without label information, to maximize real-time object detection accuracy. ROMA utilizing four YOLOv4 detectors on an NVIDIA Jetson Nano shows real-time accuracy improvements of 4 to 37% on a scenario of dynamically varying video contents and detection latency built from the MOT17Det and MOT20Det datasets, compared to individual YOLOv4 detectors and two state-of-the-art runtime techniques.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 12:06:29 GMT" } ]
2022-10-31T00:00:00
[ [ "Lee", "JunKyu", "" ], [ "Varghese", "Blesson", "" ], [ "Vandierendonck", "Hans", "" ] ]
new_dataset
0.951412
2210.16204
Nicola Marinello
Nicola Marinello (1), Marc Proesmans (1 and 3), Luc Van Gool (1 and 2 and 3) ((1) KU Leuven/ESAT-PSI, (2) ETH Zurich/CVL, (3) TRACE vzw)
TripletTrack: 3D Object Tracking using Triplet Embeddings and LSTM
Accepted to CVPR 2022 Workshop on Autonomous Driving
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops June 2022 4500-4510
10.1109/CVPRW56347.2022.00496
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
3D object tracking is a critical task in autonomous driving systems. It plays an essential role in the system's awareness of the surrounding environment. At the same time, there is increasing interest in algorithms for autonomous cars that rely solely on inexpensive sensors, such as cameras. In this paper we investigate the use of triplet embeddings in combination with motion representations for 3D object tracking. We start from an off-the-shelf 3D object detector and apply a tracking mechanism in which objects are matched by an affinity score computed on local object feature embeddings and motion descriptors. The feature embeddings are trained to include information about the visual appearance and monocular 3D object characteristics, while the motion descriptors provide a strong representation of object trajectories. We show that our approach effectively re-identifies objects, behaves reliably and accurately in cases of occlusion and missed detection, and can detect re-appearance across different fields of view. Experimental evaluation shows that our approach outperforms the state of the art on nuScenes by a large margin. We also obtain competitive results on KITTI.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 15:23:50 GMT" } ]
2022-10-31T00:00:00
[ [ "Marinello", "Nicola", "", "KU Leuven/ESAT-PSI" ], [ "Proesmans", "Marc", "", "1 and 3" ], [ "Van Gool", "Luc", "", "1 and 2\n and 3" ] ]
new_dataset
0.998687
2210.16231
Sergey Novoselov
Sergey Novoselov, Vladimir Volokhov, Galina Lavrentyeva
Universal speaker recognition encoders for different speech segments duration
Submitted to ICASSP'23
null
null
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Creating universal speaker encoders that are robust across different acoustic and speech duration conditions is a big challenge today. According to our observations, systems trained on short speech segments are optimal for short-phrase speaker verification, and systems trained on long segments are superior for long-segment verification. A system trained simultaneously on pooled short and long speech segments does not give optimal verification results and usually degrades for both short and long segments. This paper addresses the problem of creating universal speaker encoders for different speech segment durations. We describe our simple recipe for training a universal speaker encoder for any type of selected neural network architecture. According to our evaluation results for wav2vec-TDNN based systems on the NIST SRE and VoxCeleb1 benchmarks, the proposed universal encoder provides speaker verification improvements in the case of different enrollment and test speech segment durations. A key feature of the proposed encoder is that it has the same inference time as the selected neural network architecture.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 16:06:00 GMT" } ]
2022-10-31T00:00:00
[ [ "Novoselov", "Sergey", "" ], [ "Volokhov", "Vladimir", "" ], [ "Lavrentyeva", "Galina", "" ] ]
new_dataset
0.994583
2210.16253
Mattia Pugliatti
Mattia Pugliatti and Francesco Topputo
DOORS: Dataset fOr bOuldeRs Segmentation. Statistical properties and Blender setup
16 pages, 19 figures, summary paper of a dataset
null
null
null
cs.CV cs.AI cs.DB cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
The capability to detect boulders on the surface of small bodies is beneficial for vision-based applications such as hazard detection during critical operations and navigation. This task is challenging due to the wide assortment of irregular shapes, the characteristics of the boulder population, and the rapid variability of illumination conditions. Moreover, the lack of publicly available labeled datasets for these applications hampers research on data-driven algorithms. In this work, the authors provide the statistical characterization and setup used for the generation of two datasets about boulders on small bodies, which are made publicly available.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 16:39:06 GMT" } ]
2022-10-31T00:00:00
[ [ "Pugliatti", "Mattia", "" ], [ "Topputo", "Francesco", "" ] ]
new_dataset
0.999698
2210.16261
Ian Cosden
Ian A. Cosden
An RSE Group Model: Operational and Organizational Approaches From Princeton University's Central Research Software Engineering Group
Submitted to IEEE Computing in Science & Engineering (CiSE) Special Issue on the Future of Research Software Engineers in the US
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Princeton Research Software Engineering Group has grown rapidly since its inception in late 2016. The group, housed in the central Research Computing Department and composed of professional Research Software Engineers (RSEs), works directly with researchers to create high-quality research software that enables new scientific advances. As the group has matured, so has the need to formalize operational details and procedures. The RSE group uses an RSE partnership model, in which Research Software Engineers work long-term with a designated academic department, institute, center, consortium, or individual principal investigator (PI). This article describes the operation of the central Princeton RSE group, including funding, partner and project selection, and best practices for defining expectations for a successful partnership with researchers.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 16:51:31 GMT" } ]
2022-10-31T00:00:00
[ [ "Cosden", "Ian A.", "" ] ]
new_dataset
0.998954
2210.16285
Muhammad Irfan Yousuf Dr.
Muhammad Irfan Yousuf, Izza Anwer, Tanzeela Shakir, Minahil Siddiqui, Maysoon Shahid
Multi-feature Dataset for Windows PE Malware Classification
9 Pages, 1 Figure, 5 Tables
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
This paper describes a multi-feature dataset for training machine learning classifiers to detect malicious Windows Portable Executable (PE) files. The dataset includes four feature sets from 18,551 binary samples belonging to five malware families: Spyware, Ransomware, Downloader, Backdoor, and Generic Malware. The feature sets include the list of DLLs and their functions, and values of different fields of the PE header and sections. First, we explain the data collection and creation phase, and then we explain how we labeled the samples using VirusTotal's services. Finally, we explore the dataset to describe how it can benefit researchers in static malware analysis. The dataset is made public in the hope that it will help inspire machine learning research for malware detection.
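Feature sets of this kind (imported DLLs and functions, PE header and section fields) are commonly extracted with the pefile library; the sketch below shows how such features might be pulled from a sample. It is a generic illustration, not the authors' extraction pipeline, and the file path is hypothetical:

```python
import pefile

def extract_features(path):
    """Pull a few header fields and the import table from a PE file."""
    pe = pefile.PE(path)
    header = {
        "machine": pe.FILE_HEADER.Machine,
        "num_sections": pe.FILE_HEADER.NumberOfSections,
        "entry_point": pe.OPTIONAL_HEADER.AddressOfEntryPoint,
        "image_base": pe.OPTIONAL_HEADER.ImageBase,
    }
    imports = []
    for entry in getattr(pe, "DIRECTORY_ENTRY_IMPORT", []):
        dll = entry.dll.decode(errors="ignore")
        funcs = [imp.name.decode(errors="ignore")
                 for imp in entry.imports if imp.name]
        imports.append((dll, funcs))
    return header, imports

# header, imports = extract_features("sample.exe")   # hypothetical path
```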
[ { "version": "v1", "created": "Fri, 28 Oct 2022 17:23:21 GMT" } ]
2022-10-31T00:00:00
[ [ "Yousuf", "Muhammad Irfan", "" ], [ "Anwer", "Izza", "" ], [ "Shakir", "Tanzeela", "" ], [ "Siddiqui", "Minahil", "" ], [ "Shahid", "Maysoon", "" ] ]
new_dataset
0.999774
2112.08557
Yi Fang
Yi Fang, Pingping Chen, Yong Liang Guan, Francis C. M. Lau, Yonghui Li, Guanrong Chen
Protograph Bit-Interleaved Coded Modulation: A Bandwidth-Efficient Design Paradigm for 6G Wireless Communications
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bit-interleaved coded modulation (BICM) has attracted considerable attention from the research community in the past three decades, because it can achieve desirable error performance with relatively low implementation complexity for a large number of communication and storage systems. By exploiting iterative demapping and decoding (ID), BICM is able to approach the capacity limits of coded modulation over various channels. In recent years, protograph low-density parity-check (PLDPC) codes and their spatially-coupled (SC) variants have emerged as a pragmatic forward-error-correction (FEC) solution for BICM systems due to their tremendous error-correction capability and simple structures, and have found widespread applications in deep-space communication, satellite communication, wireless communication, optical communication, and data storage. This article offers a comprehensive survey of the state-of-the-art development of PLDPC-BICM and its innovative SC variants over a variety of channel models, e.g., additive white Gaussian noise (AWGN) channels, fading channels, Poisson pulse position modulation (PPM) channels, and flash-memory channels. Of particular interest are code construction, constellation shaping, and bit-mapper design, where the receiver is formulated as a serially-concatenated decoding framework consisting of a soft-decision demapper and a belief-propagation decoder. Finally, several promising research directions that have not been adequately addressed in the current literature are discussed.
[ { "version": "v1", "created": "Thu, 16 Dec 2021 01:39:53 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2022 11:40:45 GMT" } ]
2022-10-28T00:00:00
[ [ "Fang", "Yi", "" ], [ "Chen", "Pingping", "" ], [ "Guan", "Yong Liang", "" ], [ "Lau", "Francis C. M.", "" ], [ "Li", "Yonghui", "" ], [ "Chen", "Guanrong", "" ] ]
new_dataset
0.99337
2201.09280
Rishiraj Adhikary
Rishiraj Adhikary, Dhruvi Lodhavia, Chris Francis, Rohit Patil, Tanmay Srivastava, Prerna Khanna, Nipun Batra, Joe Breda, Jacob Peplinski, Shwetak Patel
SpiroMask: Measuring Lung Function Using Consumer-Grade Masks
Accepted in the ACM Transactions on Computing for Healthcare (HEALTH)
null
null
null
cs.HC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
According to the World Health Organisation (WHO), 235 million people suffer from respiratory illnesses and four million people die annually due to air pollution. Regular lung health monitoring can lead to prognoses about deteriorating lung health conditions. This paper presents our system SpiroMask that retrofits a microphone in consumer-grade masks (N95 and cloth masks) for continuous lung health monitoring. We evaluate our approach on 48 participants (including 14 with lung health issues) and find that we can estimate parameters such as lung volume and respiration rate within the approved error range by the American Thoracic Society (ATS). Further, we show that our approach is robust to sensor placement inside the mask.
[ { "version": "v1", "created": "Sun, 23 Jan 2022 14:32:38 GMT" }, { "version": "v2", "created": "Tue, 25 Jan 2022 11:09:23 GMT" }, { "version": "v3", "created": "Fri, 28 Jan 2022 09:54:17 GMT" }, { "version": "v4", "created": "Mon, 31 Jan 2022 04:55:01 GMT" }, { "version": "v5", "created": "Wed, 26 Oct 2022 19:47:17 GMT" } ]
2022-10-28T00:00:00
[ [ "Adhikary", "Rishiraj", "" ], [ "Lodhavia", "Dhruvi", "" ], [ "Francis", "Chris", "" ], [ "Patil", "Rohit", "" ], [ "Srivastava", "Tanmay", "" ], [ "Khanna", "Prerna", "" ], [ "Batra", "Nipun", "" ], [ "Breda", "Joe", "" ], [ "Peplinski", "Jacob", "" ], [ "Patel", "Shwetak", "" ] ]
new_dataset
0.999669
2203.05437
Prasanna Raj Noel Dabre
Aman Kumar, Himani Shrotriya, Prachi Sahu, Raj Dabre, Ratish Puduppully, Anoop Kunchukuttan, Amogh Mishra, Mitesh M. Khapra, Pratyush Kumar
IndicNLG Benchmark: Multilingual Datasets for Diverse NLG Tasks in Indic Languages
Accepted at EMNLP 2022
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Natural Language Generation (NLG) for non-English languages is hampered by the scarcity of datasets in these languages. In this paper, we present the IndicNLG Benchmark, a collection of datasets for benchmarking NLG for 11 Indic languages. We focus on five diverse tasks, namely, biography generation using Wikipedia infoboxes, news headline generation, sentence summarization, paraphrase generation and, question generation. We describe the created datasets and use them to benchmark the performance of several monolingual and multilingual baselines that leverage pre-trained sequence-to-sequence models. Our results exhibit the strong performance of multilingual language-specific pre-trained models, and the utility of models trained on our dataset for other related NLG tasks. Our dataset creation methods can be easily applied to modest-resource languages as they involve simple steps such as scraping news articles and Wikipedia infoboxes, light cleaning, and pivoting through machine translation data. To the best of our knowledge, the IndicNLG Benchmark is the first NLG benchmark for Indic languages and the most diverse multilingual NLG dataset, with approximately 8M examples across 5 tasks and 11 languages. The datasets and models are publicly available at https://ai4bharat.iitm.ac.in/indicnlg-suite.
[ { "version": "v1", "created": "Thu, 10 Mar 2022 15:53:58 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2022 02:33:39 GMT" } ]
2022-10-28T00:00:00
[ [ "Kumar", "Aman", "" ], [ "Shrotriya", "Himani", "" ], [ "Sahu", "Prachi", "" ], [ "Dabre", "Raj", "" ], [ "Puduppully", "Ratish", "" ], [ "Kunchukuttan", "Anoop", "" ], [ "Mishra", "Amogh", "" ], [ "Khapra", "Mitesh M.", "" ], [ "Kumar", "Pratyush", "" ] ]
new_dataset
0.999771
2203.11471
Yu Zhan
Yu Zhan, Fenghai Li, Renliang Weng, Wongun Choi
Ray3D: ray-based 3D human pose estimation for monocular absolute 3D localization
Accepted by CVPR 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper, we propose a novel monocular ray-based 3D (Ray3D) absolute human pose estimation method with a calibrated camera. Accurate and generalizable absolute 3D human pose estimation from monocular 2D pose input is an ill-posed problem. To address this challenge, we convert the input from pixel space to 3D normalized rays. This conversion makes our approach robust to camera intrinsic parameter changes. To deal with in-the-wild camera extrinsic parameter variations, Ray3D explicitly takes the camera extrinsic parameters as an input and jointly models the distribution between the 3D pose rays and the camera extrinsic parameters. This novel network design is the key to the outstanding generalizability of the Ray3D approach. To gain a comprehensive understanding of how camera intrinsic and extrinsic parameter variations affect the accuracy of absolute 3D key-point localization, we conduct in-depth systematic experiments on three single-person 3D benchmarks as well as one synthetic benchmark. These experiments demonstrate that our method significantly outperforms existing state-of-the-art models. Our code and the synthetic dataset are available at https://github.com/YxZhxn/Ray3D .
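The pixel-to-ray conversion described here is standard pinhole back-projection: a pixel (u, v) maps to the direction K^{-1} [u, v, 1]^T, which is then normalized to unit length. A NumPy sketch of that conversion (illustrative; the intrinsic matrix below is made up, and this is not the authors' code):

```python
import numpy as np

def pixel_to_ray(u, v, K):
    """Back-project pixel (u, v) to a unit-norm viewing ray in the camera frame."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    return ray / np.linalg.norm(ray)

K = np.array([[1000.,    0., 640.],   # fx,  0, cx  (hypothetical intrinsics)
              [   0., 1000., 360.],   #  0, fy, cy
              [   0.,    0.,   1.]])
print(pixel_to_ray(640, 360, K))      # principal point -> [0. 0. 1.]
```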
[ { "version": "v1", "created": "Tue, 22 Mar 2022 05:42:31 GMT" }, { "version": "v2", "created": "Wed, 30 Mar 2022 06:29:45 GMT" }, { "version": "v3", "created": "Thu, 27 Oct 2022 06:40:16 GMT" } ]
2022-10-28T00:00:00
[ [ "Zhan", "Yu", "" ], [ "Li", "Fenghai", "" ], [ "Weng", "Renliang", "" ], [ "Choi", "Wongun", "" ] ]
new_dataset
0.989909
2204.05070
Karolos Nikitaras
Karolos Nikitaras, Georgios Vamvoukakis, Nikolaos Ellinas, Konstantinos Klapsas, Konstantinos Markopoulos, Spyros Raptis, June Sig Sung, Gunu Jho, Aimilios Chalamandaris, Pirros Tsiakoulis
Fine-grained Noise Control for Multispeaker Speech Synthesis
Accepted to INTERSPEECH 2022
null
10.21437/Interspeech.2022-10765
null
cs.SD cs.CL cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A text-to-speech (TTS) model typically factorizes speech attributes such as content, speaker and prosody into disentangled representations. Recent works aim to additionally model the acoustic conditions explicitly, in order to disentangle the primary speech factors, i.e. linguistic content, prosody and timbre, from any residual factors, such as recording conditions and background noise. This paper proposes unsupervised, interpretable and fine-grained noise and prosody modeling. We incorporate adversarial training, a representation bottleneck and utterance-to-frame modeling in order to learn frame-level noise representations. To the same end, we perform fine-grained prosody modeling via a Fully Hierarchical Variational AutoEncoder (FVAE), which additionally results in more expressive speech synthesis.
[ { "version": "v1", "created": "Mon, 11 Apr 2022 13:13:55 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2022 16:26:24 GMT" } ]
2022-10-28T00:00:00
[ [ "Nikitaras", "Karolos", "" ], [ "Vamvoukakis", "Georgios", "" ], [ "Ellinas", "Nikolaos", "" ], [ "Klapsas", "Konstantinos", "" ], [ "Markopoulos", "Konstantinos", "" ], [ "Raptis", "Spyros", "" ], [ "Sung", "June Sig", "" ], [ "Jho", "Gunu", "" ], [ "Chalamandaris", "Aimilios", "" ], [ "Tsiakoulis", "Pirros", "" ] ]
new_dataset
0.998966
2205.14459
Shashank Goel
Shashank Goel, Hritik Bansal, Sumit Bhatia, Ryan A. Rossi, Vishwa Vinay, Aditya Grover
CyCLIP: Cyclic Contrastive Language-Image Pretraining
19 pages, 13 tables, 6 figures, Oral at NeurIPS 2022
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in contrastive representation learning over paired image-text data have led to models such as CLIP that achieve state-of-the-art performance for zero-shot classification and distributional robustness. Such models typically require joint reasoning in the image and text representation spaces for downstream inference tasks. Contrary to prior beliefs, we demonstrate that the image and text representations learned via a standard contrastive objective are not interchangeable and can lead to inconsistent downstream predictions. To mitigate this issue, we formalize consistency and propose CyCLIP, a framework for contrastive representation learning that explicitly optimizes for the learned representations to be geometrically consistent in the image and text space. In particular, we show that consistent representations can be learned by explicitly symmetrizing (a) the similarity between the two mismatched image-text pairs (cross-modal consistency); and (b) the similarity between the image-image pair and the text-text pair (in-modal consistency). Empirically, we show that the improved consistency in CyCLIP translates to significant gains over CLIP, with gains ranging from 10%-24% for zero-shot classification accuracy on standard benchmarks (CIFAR-10, CIFAR-100, ImageNet1K) and 10%-27% for robustness to various natural distribution shifts. The code is available at https://github.com/goel-shashank/CyCLIP.
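The two consistency terms are described concretely enough to sketch: the cross-modal term symmetrizes sim(I_i, T_j) against sim(I_j, T_i), and the in-modal term matches image-image similarities to text-text similarities. A PyTorch sketch of these penalties under that reading (loss weights and the full training objective are in the paper and its repository; this is not the released implementation):

```python
import torch
import torch.nn.functional as F

def cyclic_consistency_losses(img_emb, txt_emb):
    """img_emb, txt_emb: (batch, dim) embeddings from the two encoders."""
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    sim_it = img @ txt.t()    # sim(I_i, T_j), cosine similarities
    sim_ii = img @ img.t()    # sim(I_i, I_j)
    sim_tt = txt @ txt.t()    # sim(T_i, T_j)
    # Cross-modal consistency: sim(I_i, T_j) should equal sim(I_j, T_i).
    cross_modal = ((sim_it - sim_it.t()) ** 2).mean()
    # In-modal consistency: image-image similarities should match text-text.
    in_modal = ((sim_ii - sim_tt) ** 2).mean()
    return cross_modal, in_modal

img, txt = torch.randn(4, 512), torch.randn(4, 512)
print(cyclic_consistency_losses(img, txt))
```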
[ { "version": "v1", "created": "Sat, 28 May 2022 15:31:17 GMT" }, { "version": "v2", "created": "Wed, 26 Oct 2022 18:30:33 GMT" } ]
2022-10-28T00:00:00
[ [ "Goel", "Shashank", "" ], [ "Bansal", "Hritik", "" ], [ "Bhatia", "Sumit", "" ], [ "Rossi", "Ryan A.", "" ], [ "Vinay", "Vishwa", "" ], [ "Grover", "Aditya", "" ] ]
new_dataset
0.95749
2208.05004
Zuher Jahshan
Zuher Jahshan, Can Alkan and Leonid Yavits
CoViT: Real-time phylogenetics for the SARS-CoV-2 pandemic using Vision Transformers
11 pages, 4 figures, 2 tables
null
null
null
cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-time viral genome detection, taxonomic classification and phylogenetic analysis are critical for efficient tracking and control of viral pandemics such as Covid-19. However, the unprecedented and still growing amounts of viral genome data create a computational bottleneck, which effectively prevents real-time pandemic tracking. For genomic tracing to work effectively, each new viral genome sequence must be placed in its pangenomic context. Re-inferring the full phylogeny of SARS-CoV-2, with datasets containing millions of samples, is prohibitively slow even using powerful computational resources. We attempt to alleviate this computational bottleneck by modifying and applying Vision Transformer, a recently developed neural network model for image recognition, to the taxonomic classification and placement of viral genomes, such as SARS-CoV-2. Our solution, CoViT, places SARS-CoV-2 genome accessions onto the SARS-CoV-2 phylogenetic tree with an accuracy of 94.2%. Since CoViT is a classification neural network, it provides more than one likely placement. Specifically, one of the two most likely placements suggested by CoViT is correct with probability 97.9%. The probability of the correct placement being found among the five most likely placements generated by CoViT is 99.8%. The placement time is 0.055s per individual genome running on NVIDIA's GeForce RTX 2080 Ti GPU. We make CoViT available to the research community through GitHub: https://github.com/zuherJahshan/covit.
[ { "version": "v1", "created": "Tue, 9 Aug 2022 19:13:41 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2022 09:14:44 GMT" } ]
2022-10-28T00:00:00
[ [ "Jahshan", "Zuher", "" ], [ "Alkan", "Can", "" ], [ "Yavits", "Leonid", "" ] ]
new_dataset
0.998668
2208.10607
Jonathan Ventura
Jonathan Ventura, Camille Pawlak, Milo Honsberger, Cameron Gonsalves, Julian Rice, Natalie L.R. Love, Skyler Han, Viet Nguyen, Keilana Sugano, Jacqueline Doremus, G. Andrew Fricker, Jenn Yost, Matt Ritter
Individual Tree Detection in Large-Scale Urban Environments using High-Resolution Multispectral Imagery
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We introduce a novel deep learning method for detection of individual trees in urban environments using high-resolution multispectral aerial imagery. We use a convolutional neural network to regress a confidence map indicating the locations of individual trees, which are localized using a peak finding algorithm. Our method provides complete spatial coverage by detecting trees in both public and private spaces, and can scale to very large areas. We performed a thorough evaluation of our method, supported by a new dataset of over 1,500 images and almost 100,000 tree annotations, covering eight cities, six climate zones, and three image capture years. We trained our model on data from Southern California, and achieved a precision of 73.6% and recall of 73.3% using test data from this region. We generally observed similar precision and slightly lower recall when extrapolating to other California climate zones and image capture dates. We used our method to produce a map of trees in the entire urban forest of California, and estimated the total number of urban trees in California to be about 43.5 million. Our study indicates the potential for deep learning methods to support future urban forestry studies at unprecedented scales.
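The localization step (regress a confidence map, then extract individual trees with a peak finder) can be illustrated with a standard local-maximum filter. The sketch below is a generic thresholded peak finder, not necessarily the authors' exact algorithm; the window size and threshold are assumptions:

```python
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(conf, min_distance=5, threshold=0.5):
    """Return (row, col) of local maxima of a confidence map above a threshold;
    min_distance sets the size of the non-maximum-suppression window."""
    local_max = maximum_filter(conf, size=2 * min_distance + 1) == conf
    return np.argwhere(local_max & (conf > threshold))

conf = np.zeros((64, 64))
conf[10, 20] = 0.9        # two synthetic "tree" responses
conf[40, 50] = 0.8
print(find_peaks(conf))   # [[10 20] [40 50]]
```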
[ { "version": "v1", "created": "Mon, 22 Aug 2022 21:26:57 GMT" }, { "version": "v2", "created": "Wed, 24 Aug 2022 17:45:38 GMT" }, { "version": "v3", "created": "Thu, 27 Oct 2022 04:51:55 GMT" } ]
2022-10-28T00:00:00
[ [ "Ventura", "Jonathan", "" ], [ "Pawlak", "Camille", "" ], [ "Honsberger", "Milo", "" ], [ "Gonsalves", "Cameron", "" ], [ "Rice", "Julian", "" ], [ "Love", "Natalie L. R.", "" ], [ "Han", "Skyler", "" ], [ "Nguyen", "Viet", "" ], [ "Sugano", "Keilana", "" ], [ "Doremus", "Jacqueline", "" ], [ "Fricker", "G. Andrew", "" ], [ "Yost", "Jenn", "" ], [ "Ritter", "Matt", "" ] ]
new_dataset
0.995405
2210.10335
Namhyuk Ahn
Jihye Back, Seungkwon Kim, Namhyuk Ahn
WebtoonMe: A Data-Centric Approach for Full-Body Portrait Stylization
SIGGRAPH Asia 2022 Technical Communications
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Full-body portrait stylization, which aims to translate portrait photography into a cartoon style, has drawn attention recently. However, most methods have focused only on converting face regions, limiting their feasibility for real-world applications. A recently proposed two-stage method expands the rendering area to full bodies, but the outputs are less plausible and fail to achieve robust quality in non-face regions. Furthermore, they cannot reflect diverse skin tones. In this study, we propose a data-centric solution to build a production-level full-body portrait stylization system. Based on the two-stage scheme, we construct a novel and advanced dataset preparation paradigm that can effectively resolve the aforementioned problems. Experiments reveal that with our pipeline, high-quality portrait stylization can be achieved without additional losses or architectural changes.
[ { "version": "v1", "created": "Wed, 19 Oct 2022 07:09:03 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2022 05:01:19 GMT" } ]
2022-10-28T00:00:00
[ [ "Back", "Jihye", "" ], [ "Kim", "Seungkwon", "" ], [ "Ahn", "Namhyuk", "" ] ]
new_dataset
0.996535
2210.11674
Enting Ying
Enting Ying and Tianyang Xiong and Shihui Guo and Ming Qiu and Yipeng Qin and Hongbo Fu
WristSketcher: Creating Dynamic Sketches in AR with a Sensing Wristband
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Restricted by the limited interaction area of native AR glasses (e.g., touch bars), it is challenging to create sketches with AR glasses. Recent works have attempted to use mobile devices (e.g., tablets) or mid-air bare-hand gestures to expand the interactive space, and these can serve as 2D/3D sketching input interfaces for AR glasses. Between them, mobile devices allow for accurate sketching but are often heavy to carry, while sketching with bare hands is zero-burden but can be inaccurate due to arm instability. In addition, mid-air bare-hand sketching can easily lead to social misunderstandings, and its prolonged use can cause arm fatigue. As a new attempt, in this work, we present WristSketcher, a new AR system based on a flexible sensing wristband for creating 2D dynamic sketches, featuring an almost zero-burden authoring model for accurate and comfortable sketch creation in real-world scenarios. Specifically, we have streamlined the interaction space from mid-air to the surface of a lightweight sensing wristband, and implemented AR sketching and associated interaction commands by developing a gesture recognition method based on the sensed pressure points on the wristband. The set of interactive gestures used by our WristSketcher is determined by a heuristic study on user preferences. Moreover, we endow our WristSketcher with the ability to create animations, allowing it to produce dynamic and expressive sketches. Experimental results demonstrate that our WristSketcher i) faithfully recognizes users' gesture interactions with a high accuracy of 96.0%; ii) achieves higher sketching accuracy than Freehand sketching; iii) achieves high user satisfaction in ease of use, usability and functionality; and iv) shows innovation potential in art creation, memory aids, and entertainment applications.
[ { "version": "v1", "created": "Fri, 21 Oct 2022 02:00:41 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2022 01:26:09 GMT" } ]
2022-10-28T00:00:00
[ [ "Ying", "Enting", "" ], [ "Xiong", "Tianyang", "" ], [ "Guo", "Shihui", "" ], [ "Qiu", "Ming", "" ], [ "Qin", "Yipeng", "" ], [ "Fu", "Hongbo", "" ] ]
new_dataset
0.999553
2210.14320
Rini Jasmine Gladstone
Rini J. Gladstone, Mohammad A. Nabian, Hadi Meidani
FO-PINNs: A First-Order formulation for Physics Informed Neural Networks
6 pages, 3 figures, Selected for ML4PS workshop at NeurIPS 2022
null
null
null
cs.LG cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present FO-PINNs, physics-informed neural networks that are trained using the first-order formulation of the Partial Differential Equation (PDE) losses. We show that FO-PINNs offer significantly higher accuracy in solving parameterized systems compared to traditional PINNs, and reduce time-per-iteration by removing the extra backpropagations needed to compute the second or higher-order derivatives. Additionally, unlike standard PINNs, FO-PINNs can be used with exact imposition of boundary conditions using approximate distance functions, and can be trained using Automatic Mixed Precision (AMP) to further speed up the training. Through two Helmholtz and Navier-Stokes examples, we demonstrate the advantages of FO-PINNs over traditional PINNs in terms of accuracy and training speedup.
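The core trick, recasting a second-order PDE residual as a first-order system with an auxiliary network output, can be sketched for the 1D Poisson equation u'' = f: introduce q, and penalize q - u' and q' - f, so only first derivatives are ever taken by autograd. A PyTorch illustration under these assumptions (a toy sketch, not the authors' implementation; boundary terms are omitted):

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 2))   # outputs (u, q)

def fo_residuals(x, f):
    """First-order residuals for u'' = f via the system q = u', q' = f(x)."""
    x = x.requires_grad_(True)
    out = net(x)
    u, q = out[:, :1], out[:, 1:]
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    dq = torch.autograd.grad(q.sum(), x, create_graph=True)[0]
    return (q - du) ** 2, (dq - f(x)) ** 2   # only first-order derivatives

x = torch.linspace(0., 1., 64).unsqueeze(1)
r1, r2 = fo_residuals(x, lambda t: torch.sin(t))
loss = r1.mean() + r2.mean()   # boundary-condition losses would be added here
print(loss.item())
```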
[ { "version": "v1", "created": "Tue, 25 Oct 2022 20:25:33 GMT" } ]
2022-10-28T00:00:00
[ [ "Gladstone", "Rini J.", "" ], [ "Nabian", "Mohammad A.", "" ], [ "Meidani", "Hadi", "" ] ]
new_dataset
0.995654
2210.14461
Dhruv Makwana
Onkar Susladkar, Dhruv Makwana, Gayatri Deshmukh, Sparsh Mittal, Sai Chandra Teja R, Rekha Singhal
TPFNet: A Novel Text In-painting Transformer for Text Removal
10 pages, 5 figures, 5 tables, NeurIPS Proceedings
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by-nc-nd/4.0/
Text erasure from an image is helpful for various tasks such as image editing and privacy preservation. In this paper, we present TPFNet, a novel one-stage (end-to-end) network for text removal from images. Our network has two parts: feature synthesis and image generation. Since noise can be more effectively removed from low-resolution images, part 1 operates on low-resolution images. The output of part 1 is a low-resolution text-free image. Part 2 uses the features learned in part 1 to predict a high-resolution text-free image. In part 1, we use the "pyramidal vision transformer" (PVT) as the encoder. Further, we use a novel multi-headed decoder that generates a high-pass filtered image and a segmentation map, in addition to a text-free image. The segmentation branch helps locate the text precisely, and the high-pass branch helps in learning the image structure. To precisely locate the text, TPFNet employs an adversarial loss that is conditional on the segmentation map rather than the input image. On the Oxford, SCUT, and SCUT-EnsText datasets, our network outperforms recently proposed networks on nearly all the metrics. For example, on the SCUT-EnsText dataset, TPFNet has a PSNR (higher is better) of 39.0 and a text-detection precision (lower is better) of 21.1, compared to the best previous technique, which has a PSNR of 32.3 and a precision of 53.2. The source code can be obtained from https://github.com/CandleLabAI/TPFNet
[ { "version": "v1", "created": "Wed, 26 Oct 2022 04:16:50 GMT" }, { "version": "v2", "created": "Thu, 27 Oct 2022 14:14:55 GMT" } ]
2022-10-28T00:00:00
[ [ "Susladkar", "Onkar", "" ], [ "Makwana", "Dhruv", "" ], [ "Deshmukh", "Gayatri", "" ], [ "Mittal", "Sparsh", "" ], [ "R", "Sai Chandra Teja", "" ], [ "Singhal", "Rekha", "" ] ]
new_dataset
0.99973
2210.14997
Manthan Patel
Manthan Patel, Gabriel Waibel, Shehryar Khattak, Marco Hutter
LiDAR-guided object search and detection in Subterranean Environments
6 pages, 5 Figures, 2 Tables, conference: IEEE International Symposium on Safety, Security and Rescue Robotics (SSRR-2022), Seville, Spain
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detecting objects of interest, such as human survivors, safety equipment, and structure access points, is critical to any search-and-rescue operation. Robots deployed for such time-sensitive efforts rely on their onboard sensors to perform their designated tasks. However, as disaster response operations are predominantly conducted under perceptually degraded conditions, commonly utilized sensors such as visual cameras and LiDARs suffer in terms of performance degradation. In response, this work presents a method that utilizes the complementary nature of vision and depth sensors to leverage multi-modal information to aid object detection at longer distances. In particular, depth and intensity values from sparse LiDAR returns are used to generate proposals for objects present in the environment. These proposals are then utilized by a Pan-Tilt-Zoom (PTZ) camera system to perform a directed search by adjusting its pose and zoom level for performing object detection and classification in difficult environments. The proposed work has been thoroughly verified using an ANYmal quadruped robot in underground settings and on datasets collected during the DARPA Subterranean Challenge finals.
[ { "version": "v1", "created": "Wed, 26 Oct 2022 19:38:19 GMT" } ]
2022-10-28T00:00:00
[ [ "Patel", "Manthan", "" ], [ "Waibel", "Gabriel", "" ], [ "Khattak", "Shehryar", "" ], [ "Hutter", "Marco", "" ] ]
new_dataset
0.999268
2210.15040
Dan Casas
Andr\'es Casado-Elvira and Marc Comino Trinidad and Dan Casas
PERGAMO: Personalized 3D Garments from Monocular Video
Published at Computer Graphics Forum (Proc. of ACM/SIGGRAPH SCA), 2022. Project website http://mslab.es/projects/PERGAMO/
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Clothing plays a fundamental role in digital humans. Current approaches to animating 3D garments are mostly based on realistic physics simulation; however, they typically suffer from two main issues: high computational run-time cost, which hinders their development; and a simulation-to-real gap, which impedes the synthesis of specific real-world cloth samples. To circumvent both issues we propose PERGAMO, a data-driven approach to learn a deformable model for 3D garments from monocular images. To this end, we first introduce a novel method to reconstruct the 3D geometry of garments from a single image, and use it to build a dataset of clothing from monocular videos. We use these 3D reconstructions to train a regression model that accurately predicts how the garment deforms as a function of the underlying body pose. We show that our method is capable of producing garment animations that match real-world behaviour, and that it generalizes to unseen body motions extracted from a motion capture dataset.
[ { "version": "v1", "created": "Wed, 26 Oct 2022 21:15:54 GMT" } ]
2022-10-28T00:00:00
[ [ "Casado-Elvira", "Andrés", "" ], [ "Trinidad", "Marc Comino", "" ], [ "Casas", "Dan", "" ] ]
new_dataset
0.996633
2210.15050
Hyunwook Lee
Hyunwook Lee, Chunggi Lee, Hongkyu Lim, Sungahn Ko
TILDE-Q: A Transformation Invariant Loss Function for Time-Series Forecasting
9-page paper, 2 pages of references, and 7-page appendix. Submitted as a conference paper to ICLR 2023
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Time-series forecasting has attracted increasing attention in the AI research field due to its importance in solving real-world problems across different domains, such as energy, weather, traffic, and economy. As shown in various types of data, it is essential to handle the drastic changes, temporal patterns, and shapes in sequential data that previous models are weak at predicting. This is because most cases in time-series forecasting aim to minimize $L_p$-norm distances as loss functions, such as mean absolute error (MAE) or mean square error (MSE). These loss functions fail to account for temporal dynamics and to capture the shape of signals. In addition, they often make models misbehave and return results uncorrelated with the original time series. To be an effective loss function, it has to be invariant to a set of distortions between two time series instead of just comparing exact values. In this paper, we propose a novel loss function, called TILDE-Q (Transformation Invariant Loss function with Distance EQuilibrium), that not only considers distortions in amplitude and phase but also allows models to capture the shape of time-series sequences. In addition, TILDE-Q supports modeling periodic and non-periodic temporal dynamics at the same time. We evaluate the effectiveness of TILDE-Q by conducting extensive experiments with respect to periodic and non-periodic conditions of data, from naive models to state-of-the-art models. The experimental results indicate that the models trained with TILDE-Q outperform those trained with other training metrics (e.g., MSE, dynamic time warping (DTW), temporal distortion index (TDI), and longest common subsequence (LCSS)).
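The claimed weakness of L_p losses is easy to reproduce: under MSE, a flat forecast of the mean can score strictly better than a forecast with exactly the right shape but a small phase shift. A NumPy demonstration of this motivating failure (TILDE-Q itself is defined in the paper and not reproduced here):

```python
import numpy as np

t = np.linspace(0, 4 * np.pi, 1000)
target = np.sin(t)
flat = np.zeros_like(t)              # predicts the mean: no shape at all
shifted = np.sin(t + np.pi / 2)      # correct shape, a quarter period late

mse = lambda a, b: np.mean((a - b) ** 2)
print(mse(target, flat))      # ~0.5
print(mse(target, shifted))   # ~1.0 -> MSE prefers the shapeless forecast
```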
[ { "version": "v1", "created": "Wed, 26 Oct 2022 21:32:20 GMT" } ]
2022-10-28T00:00:00
[ [ "Lee", "Hyunwook", "" ], [ "Lee", "Chunggi", "" ], [ "Lim", "Hongkyu", "" ], [ "Ko", "Sungahn", "" ] ]
new_dataset
0.990552
2210.15085
Mohammadhadi Mohandes
Mohammadhadi Mohandes, Behnam Moradi, Kamal Gupta, Mehran Mehrandezh
Robot to Human Object Handover using Vision and Joint Torque Sensor Modalities
Note: Submitted to the RITA 2022 conference; awaiting review results
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a robot-to-human object handover algorithm and implement it on a 7-DOF arm equipped with a 3-finger mechanical hand. The system performs a fully autonomous and robust object handover to a human receiver in real time. Our algorithm relies on two complementary sensor modalities: joint torque sensors on the arm and an eye-in-hand RGB-D camera for sensor feedback. Our approach is entirely implicit, i.e., there is no explicit communication between the robot and the human receiver. Information obtained via the aforementioned sensor modalities is used as input to the related deep neural networks. While the torque sensor network detects the human receiver's "intention", such as pull, hold, or bump, the vision sensor network detects whether the receiver's fingers have wrapped around the object. The networks' outputs are then fused, and based on the fusion a decision is made whether or not to release the object. Despite substantive challenges in sensor feedback synchronization and object and human hand detection, our system achieves robust robot-to-human handover with 98% accuracy in our preliminary real experiments with human receivers.
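The final fusion step (combine the torque network's intent class with the vision network's grasp estimate to decide on release) can be sketched as a simple late-fusion gate. Class names and thresholds below are hypothetical, and the actual fusion logic is specified in the paper:

```python
def release_decision(intent_probs, grasp_prob,
                     intent_threshold=0.8, grasp_threshold=0.9):
    """intent_probs: softmax outputs of the torque-sensor network;
    grasp_prob: vision network's probability that the receiver's fingers
    have wrapped around the object. Release only when both modalities agree."""
    pulling = intent_probs.get("pull", 0.0) >= intent_threshold
    grasped = grasp_prob >= grasp_threshold
    return pulling and grasped

# Receiver pulls firmly and the camera confirms a wrap -> release.
print(release_decision({"pull": 0.93, "hold": 0.05, "bump": 0.02}, 0.97))  # True
print(release_decision({"pull": 0.40, "hold": 0.55, "bump": 0.05}, 0.97))  # False
```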
[ { "version": "v1", "created": "Thu, 27 Oct 2022 00:11:34 GMT" } ]
2022-10-28T00:00:00
[ [ "Mohandes", "Mohammadhadi", "" ], [ "Moradi", "Behnam", "" ], [ "Gupta", "Kamal", "" ], [ "Mehrandezh", "Mehran", "" ] ]
new_dataset
0.991541
2210.15104
Piyush Behre
Piyush Behre, Sharman Tan, Amy Shah, Harini Kesavamoorthy, Shuangyu Chang, Fei Zuo, Chris Basoglu, Sayan Pathak
TRScore: A Novel GPT-based Readability Scorer for ASR Segmentation and Punctuation model evaluation and selection
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Punctuation and Segmentation are key to readability in Automatic Speech Recognition (ASR), often evaluated using F1 scores that require high-quality human transcripts and do not reflect readability well. Human evaluation is expensive, time-consuming, and suffers from large inter-observer variability, especially in conversational speech devoid of strict grammatical structures. Large pre-trained models capture a notion of grammatical structure. We present TRScore, a novel readability measure using the GPT model to evaluate different segmentation and punctuation systems. We validate our approach with human experts. Additionally, our approach enables quantitative assessment of text post-processing techniques such as capitalization, inverse text normalization (ITN), and disfluency on overall readability, which traditional word error rate (WER) and slot error rate (SER) metrics fail to capture. TRScore is strongly correlated to traditional F1 and human readability scores, with Pearson's correlation coefficients of 0.67 and 0.98, respectively. It also eliminates the need for human transcriptions for model selection.
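One simple GPT-based readability proxy in the same spirit is language-model perplexity over the transcript: well-punctuated, well-segmented text tends to be assigned lower perplexity. The sketch below uses GPT-2 via the transformers library as an illustration only; TRScore's actual scoring protocol is defined in the paper:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def perplexity(text):
    """Perplexity = exp(mean next-token negative log-likelihood)."""
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = lm(ids, labels=ids).loss
    return torch.exp(loss).item()

# A punctuated transcript is expected to score lower than a run-on one.
print(perplexity("I went to the store. Then I bought some milk."))
print(perplexity("i went to the store then i bought some milk"))
```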
[ { "version": "v1", "created": "Thu, 27 Oct 2022 01:11:32 GMT" } ]
2022-10-28T00:00:00
[ [ "Behre", "Piyush", "" ], [ "Tan", "Sharman", "" ], [ "Shah", "Amy", "" ], [ "Kesavamoorthy", "Harini", "" ], [ "Chang", "Shuangyu", "" ], [ "Zuo", "Fei", "" ], [ "Basoglu", "Chris", "" ], [ "Pathak", "Sayan", "" ] ]
new_dataset
0.98037
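The record above validates TRScore against F1 and human readability via Pearson correlation. As a reading aid, here is how such a correlation check is typically computed; the scores below are made-up placeholders, not the paper's data.

```python
from scipy.stats import pearsonr

# Hypothetical per-system scores: TRScore vs. human readability ratings.
trscore = [3.1, 4.0, 2.2, 4.6, 3.8]
human   = [3.0, 4.2, 2.0, 4.8, 3.5]

r, p_value = pearsonr(trscore, human)
print(f"Pearson r = {r:.2f} (p = {p_value:.3f})")  # high r -> strong agreement
```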
2210.15126
Jianxiang Zhou
Cunxi Dai, Xiaohan Liu, Jianxiang Zhou, Zhengtao Liu, Zhenzhong Jia
SWheg: A Wheel-Leg Transformable Robot With Minimalist Actuator Realization
null
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This article presents the design, implementation, and performance evaluation of SWheg, a novel modular wheel-leg transformable robot family with minimalist actuator realization. SWheg takes advantage of both wheeled and legged locomotion by seamlessly integrating them on a single platform. In contrast to other designs that use multiple actuators, SWheg uses only one actuator to drive the transformation of all the wheel-leg modules in sync. This means an N-legged SWheg robot requires only N+1 actuators, which can significantly reduce the cost and malfunction rate of the platform. The tendon-driven wheel-leg transformation mechanism, based on a four-bar linkage, can perform fast morphology transitions between wheels and legs. We validated the design principle with two SWheg robots with four and six wheel-leg modules, respectively, namely the Quadrupedal SWheg and the Hexapod SWheg. The design process, mechatronics infrastructure, and gait development of both platforms are discussed. The performance of the robot was evaluated in various scenarios, including driving and turning in wheeled mode, and step crossing, irregular terrain passing, and stair climbing in legged mode. A comparison between the two platforms is also discussed.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 02:18:53 GMT" } ]
2022-10-28T00:00:00
[ [ "Dai", "Cunxi", "" ], [ "Liu", "Xiaohan", "" ], [ "Zhou", "Jianxiang", "" ], [ "Liu", "Zhengtao", "" ], [ "Jia", "Zhenzhong", "" ] ]
new_dataset
0.999531
2210.15128
Yongwei Miao
Chen Bao, Xudong Zhang, Jiazhou Chen, Yongwei Miao
MMFL-Net: Multi-scale and Multi-granularity Feature Learning for Cross-domain Fashion Retrieval
27 pages, 12 figures, published in Multimedia Tools and Applications
Multimedia Tools and Applications (2022) 1-27
10.1007/s11042-022-13648-8
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Instance-level image retrieval in fashion is a challenging issue owing to its increasing importance in real-scenario visual fashion search. Cross-domain fashion retrieval aims to match unconstrained customer images, as queries, to photographs provided by retailers; however, it is a difficult task due to a wide range of consumer-to-shop (C2S) domain discrepancies and because clothing images are vulnerable to various non-rigid deformations. To this end, we propose a novel multi-scale and multi-granularity feature learning network (MMFL-Net), which can jointly learn global-local aggregation feature representations of clothing images in a unified framework, aiming to train a cross-domain model for C2S fashion visual similarity. First, a new semantic-spatial feature fusion part is designed to bridge the semantic-spatial gap by applying top-down and bottom-up bidirectional multi-scale feature fusion. Next, a multi-branch deep network architecture is introduced to capture global salient, part-informed, and local detailed information, and to extract robust and discriminative feature embeddings by integrating the similarity learning of coarse-to-fine embeddings with multiple granularities. Finally, the improved trihard loss, center loss, and multi-task classification loss are adopted for our MMFL-Net, which jointly optimize intra-class and inter-class distances and thus explicitly improve intra-class compactness and inter-class discriminability of the learned visual representations. Furthermore, our proposed model also combines the multi-task attribute recognition and classification module with multi-label semantic attributes and product ID labels. Experimental results demonstrate that our proposed MMFL-Net achieves significant improvement over state-of-the-art methods on the two datasets, DeepFashion-C2S and Street2Shop.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 02:25:52 GMT" } ]
2022-10-28T00:00:00
[ [ "Bao", "Chen", "" ], [ "Zhang", "Xudong", "" ], [ "Chen", "Jiazhou", "" ], [ "Miao", "Yongwei", "" ] ]
new_dataset
0.996516
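Among the objectives listed in the abstract above, center loss has a standard closed form (Wen et al., 2016): the mean squared distance between each embedding and its class center. A minimal sketch follows; the feature dimension, batch size, and any loss weighting are illustrative assumptions, not values from the paper.

```python
import numpy as np

def center_loss(features, labels, centers):
    """Standard center loss: pulls each embedding toward its class
    center, improving intra-class compactness. The paper's exact
    weighting against the trihard and classification losses is not
    reproduced here."""
    diffs = features - centers[labels]          # (N, D)
    return 0.5 * np.mean(np.sum(diffs ** 2, axis=1))

feats = np.random.randn(8, 128)                 # 8 embeddings
labels = np.random.randint(0, 4, size=8)        # 4 clothing classes
centers = np.random.randn(4, 128)               # learnable class centers
print(center_loss(feats, labels, centers))
```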
2210.15136
Rihao Chang
Weizhi Nie, Rihao Chang, Tong Hao, Anan Liu
3D Shape Knowledge Graph for Cross-domain and Cross-modal 3D Shape Retrieval
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the development of 3D modeling and fabrication, 3D shape retrieval has become a hot topic. In recent years, several strategies have been put forth to address this retrieval issue. However, it is difficult for them to handle cross-modal 3D shape retrieval because of the natural differences between modalities. In this paper, we propose an innovative concept, namely, geometric words, which are regarded as the basic elements that represent any 3D or 2D entity by combination, and with the assistance of which we can handle cross-domain and cross-modal retrieval problems simultaneously. First, to construct the knowledge graph, we use geometric words as nodes and then use the categories of the 3D shapes as well as the attributes of the geometry to bridge the nodes. Second, based on the knowledge graph, we provide a unique way of learning each entity's embedding. Finally, we propose an effective similarity measure to handle cross-domain and cross-modal 3D shape retrieval. Specifically, every 3D or 2D entity can locate its geometric words in the 3D knowledge graph, which serve as a link between cross-domain and cross-modal data. Thus, our approach can achieve cross-domain and cross-modal 3D shape retrieval at the same time. We evaluated the proposed method on the ModelNet40 and ShapeNetCore55 datasets for both the 3D shape retrieval task and the cross-domain 3D shape retrieval task. The classic cross-modal dataset (MI3DOR) is utilized to evaluate cross-modal 3D shape retrieval. Experimental results and comparisons with state-of-the-art methods illustrate the superiority of our approach.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 02:51:24 GMT" } ]
2022-10-28T00:00:00
[ [ "Nie", "Weizhi", "" ], [ "Chang", "Rihao", "" ], [ "Hao", "Tong", "" ], [ "Liu", "Anan", "" ] ]
new_dataset
0.95615
2210.15234
Jamolbek Mattiev Dr
Maksud Sharipov, Jamolbek Mattiev, Jasur Sobirov, Rustam Baltayev
Creating a morphological and syntactic tagged corpus for the Uzbek language
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Nowadays, the creation of tagged corpora is becoming one of the most important tasks in Natural Language Processing (NLP). There are not enough tagged corpora to build machine learning models for the low-resource Uzbek language. In this paper, we tried to fill that gap by developing a novel Part Of Speech (POS) and syntactic tagset for creating a syntactically and morphologically tagged corpus of the Uzbek language. This work also includes a detailed description and presentation of a web-based annotation application. Based on the developed annotation tool and software, we share results from the first stage of the tagged corpus creation.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 07:44:12 GMT" } ]
2022-10-28T00:00:00
[ [ "Sharipov", "Maksud", "" ], [ "Mattiev", "Jamolbek", "" ], [ "Sobirov", "Jasur", "" ], [ "Baltayev", "Rustam", "" ] ]
new_dataset
0.999343
2210.15316
Gopi Krishna Erabati
Gopi Krishna Erabati and Helder Araujo
MSF3DDETR: Multi-Sensor Fusion 3D Detection Transformer for Autonomous Driving
Accepted at the ICPR 2022 Workshop DLVDR2022
null
null
null
cs.CV cs.LG cs.RO
http://creativecommons.org/licenses/by/4.0/
3D object detection is a significant task for autonomous driving. Recently, with the progress of vision transformers, the 2D object detection problem has been treated with a set-to-set loss. Inspired by these approaches to 2D object detection and by DETR3D, an approach for multi-view 3D object detection, we propose MSF3DDETR: a Multi-Sensor Fusion 3D Detection Transformer architecture that fuses image and LiDAR features to improve detection accuracy. Our end-to-end, single-stage, anchor-free and NMS-free network takes in multi-view images and LiDAR point clouds and predicts 3D bounding boxes. Firstly, we link the object queries learnt from data to the image and LiDAR features using a novel MSF3DDETR cross-attention block. Secondly, the object queries interact with each other in a multi-head self-attention block. Finally, the MSF3DDETR block is repeated $L$ times to refine the object queries. The MSF3DDETR network is trained end-to-end on the nuScenes dataset using Hungarian-algorithm-based bipartite matching and a set-to-set loss inspired by DETR. We present both quantitative and qualitative results, which are competitive with state-of-the-art approaches.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 10:55:15 GMT" } ]
2022-10-28T00:00:00
[ [ "Erabati", "Gopi Krishna", "" ], [ "Araujo", "Helder", "" ] ]
new_dataset
0.99902
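The abstract above mentions Hungarian-algorithm-based bipartite matching for the set-to-set loss. A minimal sketch of that matching step is shown below; the 7-dimensional box parametrization and plain L1 cost are simplifying assumptions, as the paper's actual matching cost (class scores plus 3D box terms) is richer.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# DETR-style bipartite matching: pair each predicted box with at most
# one ground-truth box so that the total matching cost is minimal.
preds = np.random.rand(6, 7)   # 6 queries, 7-dim 3D box (x,y,z,w,l,h,yaw)
gts = np.random.rand(3, 7)     # 3 ground-truth boxes

cost = np.abs(preds[:, None, :] - gts[None, :, :]).sum(-1)  # (6, 3) L1 cost
row, col = linear_sum_assignment(cost)   # Hungarian algorithm
for r, c in zip(row, col):
    print(f"query {r} <-> gt {c}, cost {cost[r, c]:.3f}")
```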
2210.15360
Rui Liu
Yifan Hu, Rui Liu, Guanglai Gao, Haizhou Li
FCTalker: Fine and Coarse Grained Context Modeling for Expressive Conversational Speech Synthesis
5 pages, 4 figures, 1 table. Submitted to ICASSP 2023. We release the source code at: https://github.com/walker-hyf/FCTalker
null
null
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Conversational Text-to-Speech (TTS) aims to synthesize an utterance with the right linguistic and affective prosody in a conversational context. The correlation between the current utterance and the dialogue history at the utterance level has been used to improve the expressiveness of synthesized speech. However, fine-grained information in the dialogue history at the word level also has an important impact on the prosodic expression of an utterance, which has not been well studied in prior work. Therefore, we propose a novel expressive conversational TTS model, termed FCTalker, that learns fine- and coarse-grained context dependencies at the same time during speech generation. Specifically, FCTalker includes fine- and coarse-grained encoders to exploit word- and utterance-level context dependencies. To model the word-level dependencies between an utterance and its dialogue history, the fine-grained dialogue encoder is built on top of a dialogue BERT model. The experimental results show that the proposed method outperforms all baselines and generates more expressive speech that is contextually appropriate. We release the source code at: https://github.com/walker-hyf/FCTalker.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 12:20:20 GMT" } ]
2022-10-28T00:00:00
[ [ "Hu", "Yifan", "" ], [ "Liu", "Rui", "" ], [ "Gao", "Guanglai", "" ], [ "Li", "Haizhou", "" ] ]
new_dataset
0.992227
2210.15364
Rui Liu
Rui Liu, Haolin Zuo, De Hu, Guanglai Gao, Haizhou Li
Explicit Intensity Control for Accented Text-to-speech
5 pages, 3 figures. Submitted to ICASSP 2023. arXiv admin note: text overlap with arXiv:2209.10804
null
null
null
cs.SD cs.AI eess.AS
http://creativecommons.org/licenses/by/4.0/
Accented text-to-speech (TTS) synthesis seeks to generate speech with an accent (L2) as a variant of the standard version (L1). How to control the intensity of the accent during TTS is a very interesting research direction and has attracted more and more attention. Recent work designs a speaker-adversarial loss to disentangle speaker and accent information and then adjusts the loss weight to control the accent intensity. However, such a control method lacks interpretability, and there is no direct correlation between the controlling factor and natural accent intensity. To this end, this paper proposes a new intuitive and explicit accent intensity control scheme for accented TTS. Specifically, we first extract the posterior probability, called the ``goodness of pronunciation (GoP)'', from an L1 speech recognition model to quantify the phoneme-level accent intensity of accented speech, and then design a FastSpeech2-based TTS model, named Ai-TTS, that takes the accent intensity expression into account during speech generation. Experiments show that our method outperforms the baseline model in terms of accent rendering and intensity control.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 12:23:41 GMT" } ]
2022-10-28T00:00:00
[ [ "Liu", "Rui", "" ], [ "Zuo", "Haolin", "" ], [ "Hu", "De", "" ], [ "Gao", "Guanglai", "" ], [ "Li", "Haizhou", "" ] ]
new_dataset
0.953199
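The abstract above quantifies accent intensity via the goodness of pronunciation (GoP). One standard GoP form is the average log posterior that an L1 acoustic model assigns to the canonical phoneme over its frames; the sketch below uses that common definition, which may differ in detail from the paper's variant.

```python
import numpy as np

def goodness_of_pronunciation(frame_posteriors, canonical_phoneme):
    """Common GoP form: mean log posterior of the canonical phoneme
    across its frames. Lower GoP -> heavier accent on that phoneme.
    frame_posteriors: (T, P) per-frame softmax outputs."""
    return float(np.mean(np.log(frame_posteriors[:, canonical_phoneme] + 1e-10)))

T, P = 12, 40                                   # 12 frames, 40 phonemes
logits = np.random.randn(T, P)
posts = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
print(goodness_of_pronunciation(posts, canonical_phoneme=7))
```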
2210.15365
Gopi Krishna Erabati
Gopi Krishna Erabati and Helder Araujo
Li3DeTr: A LiDAR based 3D Detection Transformer
Accepted at the IEEE/CVF Winter Conference on Applications of Computer Vision (WACV) 2023
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Inspired by recent advances in vision transformers for object detection, we propose Li3DeTr, an end-to-end LiDAR-based 3D Detection Transformer for autonomous driving that takes LiDAR point clouds as input and regresses 3D bounding boxes. The LiDAR local and global features are encoded using sparse convolution and multi-scale deformable attention, respectively. In the decoder head, firstly, in the novel Li3DeTr cross-attention block, we link the LiDAR global features to 3D predictions leveraging the sparse set of object queries learnt from the data. Secondly, the object query interactions are formulated using multi-head self-attention. Finally, the decoder layer is repeated $L_{dec}$ times to refine the object queries. Inspired by DETR, we employ a set-to-set loss to train the Li3DeTr network. Without bells and whistles, the Li3DeTr network achieves 61.3% mAP and 67.6% NDS, surpassing state-of-the-art methods with non-maximum suppression (NMS) on the nuScenes dataset, and it also achieves competitive performance on the KITTI dataset. We also employ knowledge distillation (KD) using a teacher and student model, which slightly improves the performance of our network.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 12:23:54 GMT" } ]
2022-10-28T00:00:00
[ [ "Erabati", "Gopi Krishna", "" ], [ "Araujo", "Helder", "" ] ]
new_dataset
0.999246
2210.15386
Kwanghee Choi
Kwanghee Choi, Eun Jung Yeo
Opening the Black Box of wav2vec Feature Encoder
null
null
null
null
cs.SD cs.CL cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
Self-supervised models, namely wav2vec and its variants, have shown promising results in various downstream tasks in the speech domain. However, their inner workings are poorly understood, calling for in-depth analyses of what the model learns. In this paper, we concentrate on the convolutional feature encoder, whose latent space is often speculated to represent discrete acoustic units. To analyze the embedding space in a reductive manner, we feed synthesized audio signals that are summations of simple sine waves. Through extensive experiments, we conclude that various kinds of information are embedded inside the feature encoder representations: (1) fundamental frequency, (2) formants, and (3) amplitude, packed with (4) sufficient temporal detail. Further, the information incorporated in the latent representations is analogous to spectrograms but with a fundamental difference: the latent representations construct a metric space, so that closer representations imply acoustic similarity.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 12:47:35 GMT" } ]
2022-10-28T00:00:00
[ [ "Choi", "Kwanghee", "" ], [ "Yeo", "Eun Jung", "" ] ]
new_dataset
0.988115
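The probing methodology in the record above, feeding summations of simple sine waves to the feature encoder, is easy to reproduce. A minimal synthesis sketch follows; the sample rate, duration, and component frequencies are illustrative choices, not the paper's exact probe set.

```python
import numpy as np

def sine_mixture(freqs_hz, amps, sr=16000, dur_s=1.0):
    """Synthesize the kind of probe input the paper describes: a
    summation of simple sine waves with known fundamental and
    formant-like components, to be fed to the wav2vec feature encoder."""
    t = np.arange(int(sr * dur_s)) / sr
    return sum(a * np.sin(2 * np.pi * f * t) for f, a in zip(freqs_hz, amps))

# e.g. a 120 Hz "fundamental" plus two formant-like partials
probe = sine_mixture([120, 700, 1200], [1.0, 0.5, 0.25])
print(probe.shape)  # (16000,): ready for a 16 kHz wav2vec encoder
```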
2210.15406
Sabah Al-Fedaghi Dr.
Sabah Al-Fedaghi
Lupascian Non-Negativity Applied to Conceptual Modeling: Alternating Static Potentiality and Dynamic Actuality
11 pages, 21 figures
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
In software engineering, conceptual modeling focuses on creating representations of the world that are as faithful and rich as possible, with the aim of guiding the development of software systems. In contrast, in the computing realm, the notion of ontology has been characterized as being closely related to conceptual modeling, and it is often viewed as a specification of a conceptualization. Accordingly, conceptual modeling and ontology engineering now address the same problem of representing the world in a suitable fashion. A high-level ontology provides a means to describe concepts and their interactions with each other and to capture structural and behavioral features in the intended domain. This paper aims to analyze ontological concepts and the semantics of modeling notations to provide a common understanding among software engineers. An important issue in this context concerns the question of whether the modeled world might be stratified into ontological levels. We introduce an abstract system of two-level domain ontology to be used as a foundation for conceptual models. We study the two levels of staticity and dynamics in the context of the thinging machine (TM) model using the notions of potentiality and actuality that the Franco-Romanian philosopher Stephane Lupasco developed in logic. He provided a quasi-universal rejection of contradiction whereby every event is always associated with a non-event, such that the actualization of an event entails the potentialization of a non-event and vice versa, without either ever disappearing completely. This approach is illustrated by re-modeling UML state machines in TM modeling. The results strengthen the semantics of static versus dynamic levels in conceptual modeling and sharpen the notion of events as phenomena without negativity, alternating between the two levels of dynamics and staticity.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 13:13:07 GMT" } ]
2022-10-28T00:00:00
[ [ "Al-Fedaghi", "Sabah", "" ] ]
new_dataset
0.986729
2210.15421
Diego Ulisse Pizzagalli
Diego Ulisse Pizzagalli, Rolf Krause
AnyDijkstra, an algorithm to compute shortest paths on images with anytime properties
7 pages, 4 figures
null
null
null
cs.DS
http://creativecommons.org/licenses/by/4.0/
Images conveniently capture the results of physical processes, representing a rich source of information for data-driven medicine, engineering, and science. Modeling an image as a graph allows the application of graph-based algorithms for content analysis. Amongst these, one of the most used is Dijkstra's Single Source Shortest Path algorithm (DSSSP), which computes the path with minimal cost from one starting node to all other nodes of the graph. However, the results of DSSSP remain unknown for nodes until they are explored. Moreover, DSSSP execution involves frequent jumps between distant locations in the graph, which results in non-optimal memory access, reduced parallelization, and ultimately increased execution time. Therefore, we propose AnyDijkstra, an iterative implementation of the Dijkstra SSSP algorithm optimized for images that retains anytime properties while accessing memory in a cache-friendly scheme and maximizing parallelization.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 13:38:23 GMT" } ]
2022-10-28T00:00:00
[ [ "Pizzagalli", "Diego Ulisse", "" ], [ "Krause", "Rolf", "" ] ]
new_dataset
0.993114
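For reference against the record above, the classical DSSSP baseline on a 4-connected image grid looks as follows. AnyDijkstra's cache-friendly, anytime reordering is not reproduced here, only the baseline semantics it preserves.

```python
import heapq
import numpy as np

def dijkstra_on_image(cost, source):
    """Classical DSSSP on a 4-connected grid: cost[y, x] is the price
    of entering pixel (y, x). Returns the minimal-cost distance map
    from `source` to every pixel."""
    dist = np.full(cost.shape, np.inf)
    dist[source] = 0.0
    heap = [(0.0, source)]
    while heap:
        d, (y, x) = heapq.heappop(heap)
        if d > dist[y, x]:
            continue                      # stale heap entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < cost.shape[0] and 0 <= nx < cost.shape[1]:
                nd = d + cost[ny, nx]
                if nd < dist[ny, nx]:
                    dist[ny, nx] = nd
                    heapq.heappush(heap, (nd, (ny, nx)))
    return dist

img = np.random.rand(64, 64) + 0.01       # strictly positive costs
print(dijkstra_on_image(img, (0, 0))[-1, -1])
```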
2210.15451
Diddigi Raghuram Bharadwaj
Diddigi Raghu Ram Bharadwaj, Lakshya Kumar, Saif Jawaid, Sreekanth Vempati
Fine-Grained Session Recommendations in E-commerce using Deep Reinforcement Learning
null
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sustaining users' interest and keeping them engaged on the platform is very important for the success of an e-commerce business. A session encompasses the different activities of a user between logging into the platform and logging out or making a purchase. User activities in a session can be classified into two groups: known intent and unknown intent. Known intent activity pertains to sessions where the intent of a user to browse/purchase a specific product can be easily captured, whereas in unknown intent activity the intent of the user is not known. For example, consider the scenario where a user enters the session to casually browse the products on the platform, similar to the window shopping experience in the offline setting. While recommending similar products is essential in the former, accurately understanding the intent and recommending interesting products is essential in the latter setting in order to retain a user. In this work, we focus primarily on the unknown intent setting, where our objective is to recommend a sequence of products to a user in a session to sustain their interest, keep them engaged, and possibly drive them towards a purchase. We formulate this problem in the framework of the Markov Decision Process (MDP), a popular mathematical framework for sequential decision making, and solve it using Deep Reinforcement Learning (DRL) techniques. However, training a next-product recommendation model is difficult in the RL paradigm due to the large variance in the browse/purchase behavior of users. Therefore, we break the problem down into predicting various product attributes, where a pattern/trend can be identified and exploited to build accurate models. We show that the DRL agent provides better performance compared to a greedy strategy.
[ { "version": "v1", "created": "Thu, 20 Oct 2022 13:22:13 GMT" } ]
2022-10-28T00:00:00
[ [ "Bharadwaj", "Diddigi Raghu Ram", "" ], [ "Kumar", "Lakshya", "" ], [ "Jawaid", "Saif", "" ], [ "Vempati", "Sreekanth", "" ] ]
new_dataset
0.998384
2210.15478
Vittorio Lippi
Vittorio Lippi and Christoph Maurer and Thomas Mergner
Human-Likeness Indicator for Robot Posture Control and Balance
16 pages, 5 Figures. arXiv admin note: substantial text overlap with arXiv:2110.14395
In Robotics, Computer Vision and Intelligent Systems Vol. 1667, Ser. CCIS, pp. 1-16. Springer (2022)
10.1007/978-3-031-19650-8_5
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Similarly to humans, humanoid robots require posture control and balance to walk and interact with the environment. In this work, posture control under perturbed conditions is evaluated as a performance test for humanoid control. A specific performance indicator is proposed: the score is based on the comparison between the body sway of the tested humanoid standing on a moving surface and the sway produced by healthy subjects performing the same experiment. This approach is oriented toward the evaluation of human-likeness. The measure is tested using a humanoid robot in order to demonstrate a typical usage of the proposed evaluation scheme and to give an example of how to improve robot control on the basis of such a performance indicator score.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 14:23:16 GMT" } ]
2022-10-28T00:00:00
[ [ "Lippi", "Vittorio", "" ], [ "Maurer", "Christoph", "" ], [ "Mergner", "Thomas", "" ] ]
new_dataset
0.964012
2210.15638
Gaurav Sahu
Olga Vechtomova, Gaurav Sahu
LyricJam Sonic: A Generative System for Real-Time Composition and Musical Improvisation
15 pages, 9 figures, 2 tables
null
null
null
cs.SD cs.AI cs.CL cs.LG cs.MM eess.AS
http://creativecommons.org/licenses/by/4.0/
Electronic music artists and sound designers have unique workflow practices that necessitate specialized approaches for developing music information retrieval and creativity support tools. Furthermore, electronic music instruments, such as modular synthesizers, have near-infinite possibilities for sound creation and can be combined to create unique and complex audio paths. The process of discovering interesting sounds is often serendipitous and impossible to replicate. For this reason, many musicians in electronic genres record audio output at all times while they work in the studio. Subsequently, it is difficult for artists to rediscover audio segments that might be suitable for use in their compositions from thousands of hours of recordings. In this paper, we describe LyricJam Sonic -- a novel creative tool for musicians to rediscover their previous recordings, re-contextualize them with other recordings, and create original live music compositions in real-time. A bi-modal AI-driven approach uses generated lyric lines to find matching audio clips from the artist's past studio recordings, and uses them to generate new lyric lines, which in turn are used to find other clips, thus creating a continuous and evolving stream of music and lyrics. The intent is to keep the artists in a state of creative flow conducive to music creation rather than taking them into an analytical/critical state of deliberately searching for past audio segments. The system can run in either a fully autonomous mode without user input, or in a live performance mode, where the artist plays live music, while the system "listens" and creates a continuous stream of music and lyrics in response.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 17:27:58 GMT" } ]
2022-10-28T00:00:00
[ [ "Vechtomova", "Olga", "" ], [ "Sahu", "Gaurav", "" ] ]
new_dataset
0.999739
2202.04947
Merey Ramazanova
Merey Ramazanova, Victor Escorcia, Fabian Caba Heilbron, Chen Zhao, Bernard Ghanem
OWL (Observe, Watch, Listen): Audiovisual Temporal Context for Localizing Actions in Egocentric Videos
null
null
null
null
cs.CV cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
Egocentric videos capture sequences of human activities from a first-person perspective and can provide rich multimodal signals. However, most current localization methods use third-person videos and only incorporate visual information. In this work, we take a deep look into the effectiveness of audiovisual context in detecting actions in egocentric videos and introduce a simple-yet-effective approach via Observing, Watching, and Listening (OWL). OWL leverages audiovisual information and context for egocentric temporal action localization (TAL). We validate our approach in two large-scale datasets, EPIC-Kitchens, and HOMAGE. Extensive experiments demonstrate the relevance of the audiovisual temporal context. Namely, we boost the localization performance (mAP) over visual-only models by +2.23% and +3.35% in the above datasets.
[ { "version": "v1", "created": "Thu, 10 Feb 2022 10:50:52 GMT" }, { "version": "v2", "created": "Mon, 14 Feb 2022 15:30:49 GMT" }, { "version": "v3", "created": "Wed, 26 Oct 2022 13:24:39 GMT" } ]
2022-10-27T00:00:00
[ [ "Ramazanova", "Merey", "" ], [ "Escorcia", "Victor", "" ], [ "Heilbron", "Fabian Caba", "" ], [ "Zhao", "Chen", "" ], [ "Ghanem", "Bernard", "" ] ]
new_dataset
0.958125
2203.08480
Jiangjie Chen
Jiangjie Chen, Rui Xu, Ziquan Fu, Wei Shi, Zhongqiao Li, Xinbo Zhang, Changzhi Sun, Lei Li, Yanghua Xiao, Hao Zhou
E-KAR: A Benchmark for Rationalizing Natural Language Analogical Reasoning
Accepted to ACL 2022 (Findings)
null
10.18653/v1/2022.findings-acl.311
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to recognize analogies is fundamental to human cognition. Existing benchmarks to test word analogy do not reveal the underlying process of analogical reasoning in neural models. Holding the belief that models capable of reasoning should be right for the right reasons, we propose a first-of-its-kind Explainable Knowledge-intensive Analogical Reasoning benchmark (E-KAR). Our benchmark consists of 1,655 (in Chinese) and 1,251 (in English) problems sourced from the Civil Service Exams, which require intensive background knowledge to solve. More importantly, we design a free-text explanation scheme to explain whether an analogy should be drawn, and manually annotate explanations for every question and candidate answer. Empirical results suggest that this benchmark is very challenging for some state-of-the-art models on both the explanation generation and analogical question answering tasks, which invites further research in this area.
[ { "version": "v1", "created": "Wed, 16 Mar 2022 09:16:38 GMT" } ]
2022-10-27T00:00:00
[ [ "Chen", "Jiangjie", "" ], [ "Xu", "Rui", "" ], [ "Fu", "Ziquan", "" ], [ "Shi", "Wei", "" ], [ "Li", "Zhongqiao", "" ], [ "Zhang", "Xinbo", "" ], [ "Sun", "Changzhi", "" ], [ "Li", "Lei", "" ], [ "Xiao", "Yanghua", "" ], [ "Zhou", "Hao", "" ] ]
new_dataset
0.999451
2205.12697
Haoyu Dong
Ao Liu, Haoyu Dong, Naoaki Okazaki, Shi Han, Dongmei Zhang
PLOG: Table-to-Logic Pretraining for Logical Table-to-Text Generation
EMNLP'22
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Logical table-to-text generation is a task that involves generating logically faithful sentences from tables, which requires models to derive logical level facts from table records via logical inference. It raises a new challenge on the logical-level content planning of table-to-text models. However, directly learning the logical inference knowledge from table-text pairs is very difficult for neural models because of the ambiguity of natural language and the scarcity of parallel data. Hence even large-scale pre-trained language models present low logical fidelity on logical table-to-text. In this work, we propose a PLOG (Pretrained Logical Form Generator) framework to improve the generation fidelity. Specifically, PLOG is first pretrained on a table-to-logic-form generation (table-to-logic) task, then finetuned on downstream table-to-text tasks. The formal definition of logical forms enables us to collect large amount of accurate logical forms from tables without human annotation. In addition, PLOG can learn logical inference from table-logic pairs much more definitely than from table-text pairs. To evaluate our model, we further collect a controlled logical table-to-text dataset CONTLOG based on an existing dataset. On two benchmarks, LOGICNLG and CONTLOG, PLOG outperforms strong baselines by a large margin on the logical fidelity, demonstrating the effectiveness of table-to-logic pretraining.
[ { "version": "v1", "created": "Wed, 25 May 2022 11:55:54 GMT" }, { "version": "v2", "created": "Wed, 26 Oct 2022 02:00:54 GMT" } ]
2022-10-27T00:00:00
[ [ "Liu", "Ao", "" ], [ "Dong", "Haoyu", "" ], [ "Okazaki", "Naoaki", "" ], [ "Han", "Shi", "" ], [ "Zhang", "Dongmei", "" ] ]
new_dataset
0.997159
2207.11155
Domenico Fabio Savo
Piero Bonatti, Gianluca Cima, Domenico Lembo, Lorenzo Marconi, Riccardo Rosati, Luigi Sauro, Domenico Fabio Savo
CQE in OWL 2 QL: A "Longest Honeymoon" Approach (extended version)
This paper is the extended version of "P.Bonatti, G.Cima, D.Lembo, L.Marconi, R.Rosati, L.Sauro, and D.F.Savo. Controlled query evaluation in OWL 2 QL: A "Longest Honeymoon" approach" accepted for publication at ISWC 2022
null
10.1007/978-3-031-19433-7_25
null
cs.DB cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Controlled Query Evaluation (CQE) has been recently studied in the context of Semantic Web ontologies. The goal of CQE is concealing some query answers so as to prevent external users from inferring confidential information. In general, there exist multiple, mutually incomparable ways of concealing answers, and previous CQE approaches choose in advance which answers are visible and which are not. In this paper, instead, we study a dynamic CQE method, namely, we propose to alter the answer to the current query based on the evaluation of previous ones. We aim at a system that, besides being able to protect confidential data, is maximally cooperative, which intuitively means that it answers affirmatively to as many queries as possible; it achieves this goal by delaying answer modifications as much as possible. We also show that the behavior we get cannot be intensionally simulated through a static approach, independent of query history. Interestingly, for OWL 2 QL ontologies and policy expressed through denials, query evaluation under our semantics is first-order rewritable, and thus in AC0 in data complexity. This paves the way for the development of practical algorithms, which we also preliminarily discuss in the paper.
[ { "version": "v1", "created": "Fri, 22 Jul 2022 15:51:15 GMT" } ]
2022-10-27T00:00:00
[ [ "Bonatti", "Piero", "" ], [ "Cima", "Gianluca", "" ], [ "Lembo", "Domenico", "" ], [ "Marconi", "Lorenzo", "" ], [ "Rosati", "Riccardo", "" ], [ "Sauro", "Luigi", "" ], [ "Savo", "Domenico Fabio", "" ] ]
new_dataset
0.983996
2210.09706
Wasja Brunotte
Wasja Brunotte, Alexander Specht, Larissa Chazette, Kurt Schneider
Privacy Explanations - A Means to End-User Trust
null
null
null
null
cs.SE cs.CY cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Software systems are ubiquitous, and their use is ingrained in our everyday lives. They enable us to get in touch with people quickly and easily, support us in gathering information, and help us perform our daily tasks. In return, we provide these systems with a large amount of personal information, often unaware that this is jeopardizing our privacy. End users are typically unaware of what data is collected, for what purpose, who has access to it, and where and how it is stored. To address this issue, we looked into how explainability might help to tackle this problem. We created privacy explanations that aim to help to clarify to end users why and for what purposes specific data is required. We asked end users about privacy explanations in a survey and found that the majority of respondents (91.6 \%) are generally interested in receiving privacy explanations. Our findings reveal that privacy explanations can be an important step towards increasing trust in software systems and can increase the privacy awareness of end users. These findings are a significant step in developing privacy-aware systems and incorporating usable privacy features into them, assisting users in protecting their privacy.
[ { "version": "v1", "created": "Tue, 18 Oct 2022 09:30:37 GMT" }, { "version": "v2", "created": "Thu, 20 Oct 2022 06:35:08 GMT" } ]
2022-10-27T00:00:00
[ [ "Brunotte", "Wasja", "" ], [ "Specht", "Alexander", "" ], [ "Chazette", "Larissa", "" ], [ "Schneider", "Kurt", "" ] ]
new_dataset
0.96246
2210.10983
Zhicong Huang
Zhicong Huang, Jingwen Zhao, Zhijie Zheng, Dihu Chena, Haifeng Hu
PSA-Det3D: Pillar Set Abstraction for 3D object Detection
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Small object detection in 3D point clouds is a challenging problem because of two limitations: (1) perceiving small objects is much more difficult than perceiving normal objects due to the lack of valid points; (2) small objects are easily occluded, which breaks the shape of their meshes in the 3D point cloud. In this paper, we propose a pillar set abstraction (PSA) and a foreground point compensation (FPC) module and design a point-based detection network, PSA-Det3D, to improve the detection performance for small objects. The PSA embeds a pillar query operation on the basis of set abstraction (SA) to expand the receptive field of the network, which aggregates point-wise features effectively. To locate more occluded objects, we present a proposal generation layer consisting of a foreground point segmentation and an FPC module. Both the foreground points and the estimated centers are finally fused together to generate the detection result. Experiments on the KITTI 3D detection benchmark show that our proposed PSA-Det3D outperforms other algorithms with high accuracy for small object detection.
[ { "version": "v1", "created": "Thu, 20 Oct 2022 03:05:34 GMT" }, { "version": "v2", "created": "Wed, 26 Oct 2022 09:36:39 GMT" } ]
2022-10-27T00:00:00
[ [ "Huang", "Zhicong", "" ], [ "Zhao", "Jingwen", "" ], [ "Zheng", "Zhijie", "" ], [ "Chena", "Dihu", "" ], [ "Hu", "Haifeng", "" ] ]
new_dataset
0.999419
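The pillar query named in the abstract above differs from the spherical ball query of standard set abstraction by leaving the vertical extent unbounded, which widens the receptive field for sparse, small objects. A minimal sketch of that idea follows; the sampling count and grouping details are illustrative assumptions, not PSA-Det3D's exact procedure.

```python
import numpy as np

def pillar_query(points, center, radius, num_samples=32):
    """Gather points inside a vertical cylinder (a "pillar") around
    `center`: only x/y distance is tested, so z is unbounded, unlike
    the spherical ball query of standard set abstraction."""
    dxy = points[:, :2] - center[:2]            # ignore z -> a pillar
    mask = (dxy ** 2).sum(axis=1) <= radius ** 2
    idx = np.flatnonzero(mask)[:num_samples]
    return points[idx]

pts = np.random.randn(1000, 3) * 5
group = pillar_query(pts, center=np.zeros(3), radius=1.0)
print(group.shape)
```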
2210.12467
Rajdeep Mukherjee
Rajdeep Mukherjee, Abhinav Bohra, Akash Banerjee, Soumya Sharma, Manjunath Hegde, Afreen Shaikh, Shivani Shrivastava, Koustuv Dasgupta, Niloy Ganguly, Saptarshi Ghosh, Pawan Goyal
ECTSum: A New Benchmark Dataset For Bullet Point Summarization of Long Earnings Call Transcripts
14 pages; Accepted as a Long Paper in EMNLP 2022 (Main Conference); Codes: https://github.com/rajdeep345/ECTSum
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite tremendous progress in automatic summarization, state-of-the-art methods are predominantly trained to excel at summarizing short newswire articles, or documents with strong layout biases such as scientific articles or government reports. Efficient techniques to summarize financial documents, including facts and figures, have largely been unexplored, mainly due to the unavailability of suitable datasets. In this work, we present ECTSum, a new dataset with transcripts of earnings calls (ECTs), hosted by publicly traded companies, as documents, and short, expert-written, telegram-style bullet-point summaries derived from corresponding Reuters articles. ECTs are long unstructured documents without any prescribed length limit or format. We benchmark our dataset with state-of-the-art summarizers across various metrics evaluating the content quality and factual consistency of the generated summaries. Finally, we present a simple-yet-effective approach, ECT-BPS, to generate a set of bullet points that precisely capture the important facts discussed in the calls.
[ { "version": "v1", "created": "Sat, 22 Oct 2022 15:02:41 GMT" }, { "version": "v2", "created": "Wed, 26 Oct 2022 16:21:37 GMT" } ]
2022-10-27T00:00:00
[ [ "Mukherjee", "Rajdeep", "" ], [ "Bohra", "Abhinav", "" ], [ "Banerjee", "Akash", "" ], [ "Sharma", "Soumya", "" ], [ "Hegde", "Manjunath", "" ], [ "Shaikh", "Afreen", "" ], [ "Shrivastava", "Shivani", "" ], [ "Dasgupta", "Koustuv", "" ], [ "Ganguly", "Niloy", "" ], [ "Ghosh", "Saptarshi", "" ], [ "Goyal", "Pawan", "" ] ]
new_dataset
0.999699
2210.14056
Ajay Chawda
Ajay Chawda, Stefanie Grimm, Marius Kloft
Unsupervised Anomaly Detection for Auditing Data and Impact of Categorical Encodings
This work has been accepted at the Proceedings of the NeurIPS 2022 Workshop on Synthetic Data 4ML
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we introduce the Vehicle Claims dataset, consisting of fraudulent insurance claims for automotive repairs. The data belongs to the broader category of auditing data, which also includes journals and network intrusion data. Insurance claim data are distinctively different from other auditing data (such as network intrusion data) in their high number of categorical attributes. We tackle the common problem of missing benchmark datasets for anomaly detection: datasets are mostly confidential, and public tabular datasets do not contain relevant and sufficient categorical attributes. Therefore, a large dataset is created for this purpose and referred to as the Vehicle Claims (VC) dataset. The dataset is evaluated with shallow and deep learning methods. Due to the introduction of categorical attributes, we encounter the challenge of encoding them for a large dataset. As one-hot encoding of high-cardinality data invokes the "curse of dimensionality", we experiment with GEL encoding and an embedding layer for representing categorical attributes. Our work compares competitive learning, reconstruction-error, density estimation, and contrastive learning approaches across Label, One Hot, and GEL encodings and an embedding layer for handling categorical values.
[ { "version": "v1", "created": "Tue, 25 Oct 2022 14:33:17 GMT" }, { "version": "v2", "created": "Wed, 26 Oct 2022 04:03:43 GMT" } ]
2022-10-27T00:00:00
[ [ "Chawda", "Ajay", "" ], [ "Grimm", "Stefanie", "" ], [ "Kloft", "Marius", "" ] ]
new_dataset
0.951189
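The encoding trade-off discussed in the record above, one-hot blow-up versus a dense embedding layer, can be made concrete with a small sketch; the cardinality and embedding width are arbitrary illustrative numbers.

```python
import numpy as np

# A 10k-category attribute one-hot encodes into 10k sparse columns
# (the "curse of dimensionality" the paper notes), while an embedding
# layer maps the same attribute to a small dense vector.
cardinality, emb_dim = 10_000, 16
ids = np.array([3, 4117, 9999])

one_hot = np.zeros((len(ids), cardinality))
one_hot[np.arange(len(ids)), ids] = 1.0        # shape (3, 10000)

emb_table = np.random.randn(cardinality, emb_dim) * 0.01  # learnable
embedded = emb_table[ids]                      # shape (3, 16)
print(one_hot.shape, embedded.shape)
```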
2210.14252
Dawei Liang
Dawei Liang, Hang Su, Tarun Singh, Jay Mahadeokar, Shanil Puri, Jiedan Zhu, Edison Thomaz, Mike Seltzer
Dynamic Speech Endpoint Detection with Regression Targets
Manuscript submitted to ICASSP 2023
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactive voice assistants have been widely used as input interfaces in various scenarios, e.g., on smart home devices, wearables, and AR devices. Detecting the end of a speech query, i.e., speech end-pointing, is an important task for voice assistants to interact with users. Traditionally, speech end-pointing is based on pure classification methods with arbitrary binary targets. In this paper, we propose a novel regression-based speech end-pointing model, which enables an end-pointer to adjust its detection behavior based on the context of user queries. Specifically, we present a pause modeling method and show its effectiveness for dynamic end-pointing. Based on our experiments with vendor-collected smartphone and wearable speech queries, our strategy shows a better trade-off between end-pointing latency and accuracy compared to the traditional classification-based method. We further discuss the benefits of this model and the generalization of the framework.
[ { "version": "v1", "created": "Tue, 25 Oct 2022 18:09:42 GMT" } ]
2022-10-27T00:00:00
[ [ "Liang", "Dawei", "" ], [ "Su", "Hang", "" ], [ "Singh", "Tarun", "" ], [ "Mahadeokar", "Jay", "" ], [ "Puri", "Shanil", "" ], [ "Zhu", "Jiedan", "" ], [ "Thomaz", "Edison", "" ], [ "Seltzer", "Mike", "" ] ]
new_dataset
0.990497
2210.14299
Hanzi Xu
Hanzi Xu, Slobodan Vucetic, Wenpeng Yin
OpenStance: Real-world Zero-shot Stance Detection
CoNLL 2022 Camera-ready version
null
null
null
cs.CL
http://creativecommons.org/publicdomain/zero/1.0/
Prior studies of zero-shot stance detection identify the attitude of texts towards unseen topics occurring in the same document corpus. Such a task formulation has three limitations: (i) single domain/dataset: a system is optimized on a particular dataset from a single domain, so the resulting system cannot work well on other datasets; (ii) the model is evaluated on a limited number of unseen topics; (iii) it is assumed that some of the topics have rich annotations, which might be impossible in real-world applications. These drawbacks lead to an impractical stance detection system that fails to generalize to open domains and open-form topics. This work defines OpenStance: open-domain zero-shot stance detection, aiming to handle stance detection in an open world with neither domain constraints nor topic-specific annotations. The key challenge of OpenStance lies in open-domain generalization: learning a system with fully unspecific supervision that is nevertheless capable of generalizing to any dataset. To solve OpenStance, we propose to combine indirect supervision, from textual entailment datasets, with weak supervision, from data generated automatically by pre-trained language models. Our single system, without any topic-specific supervision, outperforms the supervised method on three popular datasets. To our knowledge, this is the first work that studies stance detection under the open-domain zero-shot setting. All data and code are publicly released.
[ { "version": "v1", "created": "Tue, 25 Oct 2022 19:50:36 GMT" } ]
2022-10-27T00:00:00
[ [ "Xu", "Hanzi", "" ], [ "Vucetic", "Slobodan", "" ], [ "Yin", "Wenpeng", "" ] ]
new_dataset
0.999298
2210.14349
Menghe Zhang
Menghe Zhang, Weichen Liu, Nadir Weibel, Jurgen Schulze
A DirectX-Based DICOM Viewer for Multi-User Surgical Planning in Augmented Reality
null
ISVC 2022 symposium proceedings, to appear in the Lecture Notes in Computer Science (LNCS) series
null
null
cs.MM cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Preoperative medical imaging is an essential part of surgical planning. The data from medical imaging devices, such as CT and MRI scanners, consist of stacks of 2D images in DICOM format. Conversely, advances in 3D data visualization provide further information by assembling cross-sections into 3D volumetric datasets. When Microsoft unveiled the HoloLens 2 (HL2), considered one of the best mixed reality (XR) headsets on the market, it promised to enhance 3D visualization by providing an immersive experience to users. This paper introduces a prototype holographic XR DICOM viewer for the 3D visualization of DICOM image sets on the HL2 for surgical planning. We first developed a standalone graphical C++ engine using the native DirectX11 API and HLSL shaders. Building on that, the prototype further applies the OpenXR API for potential deployment on a wide range of devices from vendors across the XR spectrum. With native access to the device, our prototype exposes the limits of the HL2's hardware capabilities for 3D volume rendering and interaction. Moreover, smartphones can act as input devices, providing another user interaction method by connecting to our server. In this paper, we present a holographic DICOM viewer for the HoloLens 2 and contribute (i) a prototype that renders DICOM image stacks in real-time on the HL2, (ii) three types of user interactions in XR, and (iii) a preliminary qualitative evaluation of our prototype.
[ { "version": "v1", "created": "Tue, 25 Oct 2022 21:22:00 GMT" } ]
2022-10-27T00:00:00
[ [ "Zhang", "Menghe", "" ], [ "Liu", "Weichen", "" ], [ "Weibel", "Nadir", "" ], [ "Schulze", "Jurgen", "" ] ]
new_dataset
0.999099
2210.14363
Kishaloy Halder
Kishaloy Halder, Josip Krapac, Dmitry Goryunov, Anthony Brew, Matti Lyra, Alsida Dizdari, William Gillett, Adrien Renahy, Sinan Tang
Enhancing Product Safety in E-Commerce with NLP
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ensuring the safety of products offered to customers is of paramount importance to any e-commerce platform. Despite stringent quality and safety checks of products listed on these platforms, customers might occasionally receive a product that poses a safety issue arising from its use. In this paper, we present an innovative mechanism by which a large-scale multinational e-commerce platform, Zalando, uses Natural Language Processing techniques to assist in the timely investigation of potentially unsafe products mined directly from customer-written claims in unstructured plain text. We systematically describe the types of safety issues that concern Zalando customers. We demonstrate how we map this core business problem into a supervised text classification problem with highly imbalanced, noisy, multilingual data in an AI-in-the-loop setup with a focus on Key Performance Indicator (KPI) driven evaluation. Finally, we present detailed ablation studies to show a comprehensive comparison between different classification techniques. We conclude with how this NLP model was deployed.
[ { "version": "v1", "created": "Tue, 25 Oct 2022 22:10:30 GMT" } ]
2022-10-27T00:00:00
[ [ "Halder", "Kishaloy", "" ], [ "Krapac", "Josip", "" ], [ "Goryunov", "Dmitry", "" ], [ "Brew", "Anthony", "" ], [ "Lyra", "Matti", "" ], [ "Dizdari", "Alsida", "" ], [ "Gillett", "William", "" ], [ "Renahy", "Adrien", "" ], [ "Tang", "Sinan", "" ] ]
new_dataset
0.98927
2210.14373
Levent Guvenc
Karina Meneses-Cime, Bilin Aksun-Guvenc, Levent Guvenc
Shared Autonomous Vehicle Mobility for a Transportation Underserved City
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper proposes the use of an on-demand, ride-hailed and shared autonomous vehicle (SAV) service as a feasible solution to serve the mobility needs of a small city where fixed-route, circulator-type public transportation may be too expensive to operate. The presented work builds upon our earlier work that modeled the city of Marysville, Ohio as an example of such a city, with realistic traffic behavior and trip requests. A simple SAV dispatcher is implemented to model the behavior of the proposed on-demand mobility service. The goal of the service is to optimally distribute SAVs along the network to allocate passengers and shared rides. The pickup and drop-off locations are strategically placed along the network to provide mobility from affordable housing, which is also a transit desert, to locations corresponding to jobs and other opportunities. The study is carried out by varying the behavior of the SAV driving system from cautious to aggressive along with the size of the SAV fleet, and analyzing the corresponding performance. It is found that the size of the network and the behavior of the AV driving system result in an optimal number of SAVs, beyond which increasing the number of SAVs does not improve overall mobility. For the Marysville network, which is a 9-mile by 8-mile network, this happens with a fleet of 8 deployed SAVs. The results show that the introduction of the proposed SAV service with a simple optimal sharing scheme can provide access to services and jobs to hundreds of people in a small city.
[ { "version": "v1", "created": "Tue, 25 Oct 2022 22:44:59 GMT" } ]
2022-10-27T00:00:00
[ [ "Meneses-Cime", "Karina", "" ], [ "Aksun-Guvenc", "Bilin", "" ], [ "Guvenc", "Levent", "" ] ]
new_dataset
0.992244
2210.14395
Seungwhan Moon
Seungwhan Moon, Andrea Madotto, Zhaojiang Lin, Alireza Dirafzoon, Aparajita Saraf, Amy Bearman, Babak Damavandi
IMU2CLIP: Multimodal Contrastive Learning for IMU Motion Sensors from Egocentric Videos and Text
null
null
null
null
cs.CV cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present IMU2CLIP, a novel pre-training approach to align Inertial Measurement Unit (IMU) motion sensor recordings with video and text, by projecting them into the joint representation space of Contrastive Language-Image Pre-training (CLIP). The proposed approach allows IMU2CLIP to translate human motions (as measured by IMU sensors) into their corresponding textual descriptions and videos -- while preserving the transitivity across these modalities. We explore several new IMU-based applications that IMU2CLIP enables, such as motion-based media retrieval and natural language reasoning tasks with motion data. In addition, we show that IMU2CLIP can significantly improve the downstream performance when fine-tuned for each application (e.g. activity recognition), demonstrating the universal usage of IMU2CLIP as a new pre-trained resource. Our code will be made publicly available.
[ { "version": "v1", "created": "Wed, 26 Oct 2022 00:22:41 GMT" } ]
2022-10-27T00:00:00
[ [ "Moon", "Seungwhan", "" ], [ "Madotto", "Andrea", "" ], [ "Lin", "Zhaojiang", "" ], [ "Dirafzoon", "Alireza", "" ], [ "Saraf", "Aparajita", "" ], [ "Bearman", "Amy", "" ], [ "Damavandi", "Babak", "" ] ]
new_dataset
0.999587
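IMU2CLIP's alignment objective in the record above is CLIP-style contrastive pre-training; a minimal symmetric InfoNCE sketch is given below. The embedding width, batch size, and temperature are assumptions, and the modality encoders (IMU, video, text) are omitted.

```python
import torch
import torch.nn.functional as F

def clip_style_loss(imu_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE as used in CLIP-style pre-training: the i-th
    IMU window and the i-th caption are positives, every other pair in
    the batch is a negative."""
    imu = F.normalize(imu_emb, dim=-1)
    txt = F.normalize(text_emb, dim=-1)
    logits = imu @ txt.t() / temperature        # (B, B) similarities
    targets = torch.arange(logits.size(0))      # diagonal = positives
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

loss = clip_style_loss(torch.randn(8, 512), torch.randn(8, 512))
print(loss.item())
```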
2210.14408
Puyang Zhao
Puyang Zhao, Wei Tian, Lefu Xiao, Xinhui Liu, Jingjin Wu
An Attention-based Long Short-Term Memory Framework for Detection of Bitcoin Scams
null
null
null
null
cs.CR cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Bitcoin is the most common cryptocurrency involved in cyber scams. Cybercriminals often exploit the pseudonymity and privacy-protection mechanisms associated with Bitcoin transactions to make their scams virtually untraceable. The Ponzi scheme has attracted particularly significant attention among Bitcoin fraudulent activities. This paper considers a multi-class classification problem to determine whether a transaction is involved in Ponzi schemes, is involved in other cyber scams, or is a non-scam transaction. We build a dedicated crawler to collect data and propose a novel Attention-based Long Short-Term Memory (A-LSTM) method for the classification problem. The experimental results show that the proposed model has better efficiency and accuracy than existing approaches, including Random Forest, Extra Trees, Gradient Boosting, and classical LSTM. With correctly identified scam features, our proposed A-LSTM achieves an F1-score over 82% on the original data and outperforms the existing approaches.
[ { "version": "v1", "created": "Wed, 26 Oct 2022 01:20:21 GMT" } ]
2022-10-27T00:00:00
[ [ "Zhao", "Puyang", "" ], [ "Tian", "Wei", "" ], [ "Xiao", "Lefu", "" ], [ "Liu", "Xinhui", "" ], [ "Wu", "Jingjin", "" ] ]
new_dataset
0.995856
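The record above combines an LSTM with attention for three-way scam classification. A minimal attention-pooled LSTM sketch follows; apart from the three output classes named in the abstract, all sizes are illustrative guesses rather than the paper's configuration.

```python
import torch
import torch.nn as nn

class AttentionLSTM(nn.Module):
    """Minimal additive attention over LSTM states in the spirit of
    an A-LSTM classifier: attention weights pool the hidden states
    over time before a linear classification head."""
    def __init__(self, in_dim, hidden=64, n_classes=3):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                       # x: (B, T, in_dim)
        h, _ = self.lstm(x)                     # (B, T, hidden)
        w = torch.softmax(self.attn(h), dim=1)  # attention over time
        context = (w * h).sum(dim=1)            # weighted pooling
        return self.head(context)               # (B, n_classes)

model = AttentionLSTM(in_dim=10)
print(model(torch.randn(4, 20, 10)).shape)      # torch.Size([4, 3])
```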
2210.14424
Mukund Rungta
Mukund Rungta, Janvijay Singh, Saif M. Mohammad and Diyi Yang
Geographic Citation Gaps in NLP Research
EMNLP 2022 Main Conference
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In a fair world, people have equitable opportunities to education, to conduct scientific research, to publish, and to get credit for their work, regardless of where they live. However, it is common knowledge among researchers that a vast number of papers accepted at top NLP venues come from a handful of western countries and (lately) China; whereas, very few papers from Africa and South America get published. Similar disparities are also believed to exist for paper citation counts. In the spirit of "what we do not measure, we cannot improve", this work asks a series of questions on the relationship between geographical location and publication success (acceptance in top NLP venues and citation impact). We first created a dataset of 70,000 papers from the ACL Anthology, extracted their meta-information, and generated their citation network. We then show that not only are there substantial geographical disparities in paper acceptance and citation but also that these disparities persist even when controlling for a number of variables such as venue of publication and sub-field of NLP. Further, despite some steps taken by the NLP community to improve geographical diversity, we show that the disparity in publication metrics across locations is still on an increasing trend since the early 2000s. We release our code and dataset here: https://github.com/iamjanvijay/acl-cite-net
[ { "version": "v1", "created": "Wed, 26 Oct 2022 02:25:23 GMT" } ]
2022-10-27T00:00:00
[ [ "Rungta", "Mukund", "" ], [ "Singh", "Janvijay", "" ], [ "Mohammad", "Saif M.", "" ], [ "Yang", "Diyi", "" ] ]
new_dataset
0.988034
2210.14472
Gihan Weeraprameshwara
Gihan Weeraprameshwara, Vihanga Jayawickrama, Nisansa de Silva, Yudhanjaya Wijeratne
Sinhala Sentence Embedding: A Two-Tiered Structure for Low-Resource Languages
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In the process of numerically modeling natural languages, developing language embeddings is a vital step. However, it is challenging to develop functional embeddings for resource-poor languages such as Sinhala, for which sufficiently large corpora, effective language parsers, and other required resources are difficult to find. In such conditions, exploiting existing models to devise an efficacious embedding methodology for numerically representing text can be quite fruitful. This paper explores the effectiveness of several one-tiered and two-tiered embedding architectures in representing Sinhala text in the sentiment analysis domain. With our findings, the two-tiered embedding architecture, where the lower tier consists of a word embedding and the upper tier consists of a sentence embedding, has been shown to perform better than one-tiered word embeddings, achieving a maximum F1 score of 88.04% in contrast to the 83.76% achieved by word embedding models. Furthermore, embeddings in hyperbolic space are also developed and compared with Euclidean embeddings in terms of performance. A sentiment dataset consisting of Facebook posts and associated reactions has been used for this research. To effectively compare the performance of different embedding systems, the same deep neural network structure has been trained on the sentiment data with each of the embedding systems used to encode the associated text.
[ { "version": "v1", "created": "Wed, 26 Oct 2022 04:46:23 GMT" } ]
2022-10-27T00:00:00
[ [ "Weeraprameshwara", "Gihan", "" ], [ "Jayawickrama", "Vihanga", "" ], [ "de Silva", "Nisansa", "" ], [ "Wijeratne", "Yudhanjaya", "" ] ]
new_dataset
0.965605
2210.14494
Changyoon Lee
Changyoon Lee, Yeon Seonwoo, Alice Oh
CS1QA: A Dataset for Assisting Code-based Question Answering in an Introductory Programming Course
null
null
10.18653/v1/2022.naacl-main.148
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
We introduce CS1QA, a dataset for code-based question answering in the programming education domain. CS1QA consists of 9,237 question-answer pairs gathered from chat logs in an introductory programming class using Python, and 17,698 unannotated chat data with code. Each question is accompanied with the student's code, and the portion of the code relevant to answering the question. We carefully design the annotation process to construct CS1QA, and analyze the collected dataset in detail. The tasks for CS1QA are to predict the question type, the relevant code snippet given the question and the code and retrieving an answer from the annotated corpus. Results for the experiments on several baseline models are reported and thoroughly analyzed. The tasks for CS1QA challenge models to understand both the code and natural language. This unique dataset can be used as a benchmark for source code comprehension and question answering in the educational setting.
[ { "version": "v1", "created": "Wed, 26 Oct 2022 05:40:34 GMT" } ]
2022-10-27T00:00:00
[ [ "Lee", "Changyoon", "" ], [ "Seonwoo", "Yeon", "" ], [ "Oh", "Alice", "" ] ]
new_dataset
0.999706
2210.14505
Xiujing Zheng
Xiujing Zheng
Constructions of entanglement-assisted quantum MDS from generalized Reed-Solomon codes
17 pages, 6 tables
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Entanglement-assisted quantum error-correcting (EAQEC) codes are a generalization of standard stabilizer quantum error-correcting codes, which can possibly be constructed from any classical codes by relaxing the self-orthogonality condition with the help of pre-shared entanglement between the sender and the receiver. In this paper, by using generalized Reed-Solomon codes, we construct two families of entanglement-assisted quantum error-correcting MDS (EAQMDS) codes with parameters $[[\frac{b({q^2}-1)}{a}+\frac{{q^2} - 1}{a}, \frac{b({q^2}-1)}{a}+\frac{{q^2}-1}{a}-2d+c+2,d;c]]_q$, where $q$ is a prime power and $a| (q+1)$. Among our constructions, the EAQMDS codes have a much larger minimum distance than the known EAQMDS codes with the same length that consume the same number of ebits. Moreover, some of the lengths of our EAQMDS codes may not be divisors of $q^2\pm 1$, which are new and different from all previously known ones.
[ { "version": "v1", "created": "Wed, 26 Oct 2022 06:30:15 GMT" } ]
2022-10-27T00:00:00
[ [ "Zheng", "Xiujing", "" ] ]
new_dataset
0.961563
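To make the length formula in the record above concrete, here is a small worked instance. The choice $q=7$, $a=4$, $b=2$ is illustrative; the admissible values of $d$ and $c$ depend on conditions stated in the paper, so only the length is instantiated.

```latex
% Sample length for the family above with q = 7, a = 4 (a | q+1 = 8), b = 2:
\[
n \;=\; \frac{b(q^2-1)}{a} + \frac{q^2-1}{a}
  \;=\; (b+1)\,\frac{q^2-1}{a}
  \;=\; 3 \cdot \frac{48}{4}
  \;=\; 36,
\]
% and 36 divides neither q^2 - 1 = 48 nor q^2 + 1 = 50, matching the
% abstract's remark that some code lengths fall outside the divisors
% of q^2 +/- 1.
```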