Dataset schema (field, type, and observed value range):

| field          | dtype        | range             |
|----------------|--------------|-------------------|
| id             | string       | 9-10 chars        |
| submitter      | string       | 2-52 chars        |
| authors        | string       | 4-6.51k chars     |
| title          | string       | 4-246 chars       |
| comments       | string       | 1-523 chars       |
| journal-ref    | string       | 4-345 chars       |
| doi            | string       | 11-120 chars      |
| report-no      | string       | 2-243 chars       |
| categories     | string       | 5-98 chars        |
| license        | string       | 9 distinct values |
| abstract       | string       | 33-3.33k chars    |
| versions       | list         |                   |
| update_date    | timestamp[s] |                   |
| authors_parsed | list         |                   |
| prediction     | string       | 1 distinct value  |
| probability    | float64      | 0.95-1            |
2205.01159
Reza Hojjaty Saeedy
Reza Hojjaty Saeedy, Richard A. Messner
Saliency map using features derived from spiking neural networks of primate visual cortex
19 pages, 8 figures, 1 table
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We propose a framework inspired by biological vision systems to produce saliency maps of digital images. Well-known computational models for receptive fields of areas in the visual cortex that are specialized for color and orientation perception are used. To model the connectivity between these areas we use the CARLsim library, a spiking neural network (SNN) simulator. The spikes generated by CARLsim then serve as extracted features and input to our saliency detection algorithm. This new method of saliency detection is described and applied to benchmark images.
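A minimal sketch of how spike-derived features could feed a saliency map, assuming spike counts have already been exported from CARLsim as per-feature 2D maps; the center-surround combination and all names here are illustrative, not the paper's algorithm:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def saliency_from_spike_maps(spike_maps, sigma_center=2.0, sigma_surround=8.0):
    """Combine per-feature spike-count maps into one saliency map.

    spike_maps: array of shape (n_features, H, W) holding spike counts
    produced by the simulated color/orientation populations.
    """
    saliency = np.zeros(spike_maps.shape[1:])
    for fmap in spike_maps:
        # Center-surround contrast: fine-scale response minus coarse-scale response.
        center = gaussian_filter(fmap.astype(float), sigma_center)
        surround = gaussian_filter(fmap.astype(float), sigma_surround)
        saliency += np.maximum(center - surround, 0.0)
    # Normalize to [0, 1] for visualization.
    rng = saliency.max() - saliency.min()
    return (saliency - saliency.min()) / rng if rng > 0 else saliency

# Example: 4 random Poisson maps standing in for CARLsim spike counts.
maps = np.random.poisson(3.0, size=(4, 64, 64))
smap = saliency_from_spike_maps(maps)
```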
[ { "version": "v1", "created": "Mon, 2 May 2022 18:52:39 GMT" } ]
2022-05-04T00:00:00
[ [ "Saeedy", "Reza Hojjaty", "" ], [ "Messner", "Richard A.", "" ] ]
new_dataset
0.995267
2205.01167
Marcus Schwarting
Jim James, Nathan Pruyne, Tiberiu Stan, Marcus Schwarting, Jiwon Yeom, Seungbum Hong, Peter Voorhees, Ben Blaiszik, Ian Foster
3D Convolutional Neural Networks for Dendrite Segmentation Using Fine-Tuning and Hyperparameter Optimization
null
null
null
null
cs.CV cond-mat.mtrl-sci eess.IV
http://creativecommons.org/licenses/by/4.0/
Dendritic microstructures are ubiquitous in nature and are the primary solidification morphologies in metallic materials. Techniques such as x-ray computed tomography (XCT) have provided new insights into dendritic phase transformation phenomena. However, manual identification of dendritic morphologies in microscopy data can be both labor intensive and potentially ambiguous. The analysis of 3D datasets is particularly challenging due to their large sizes (terabytes) and the presence of artifacts scattered within the imaged volumes. In this study, we trained 3D convolutional neural networks (CNNs) to segment 3D datasets. Three CNN architectures were investigated, including a new 3D version of FCDense. We show that using hyperparameter optimization (HPO) and fine-tuning techniques, both 2D and 3D CNN architectures can be trained to outperform the previous state of the art. The 3D U-Net architecture trained in this study produced the best segmentations according to quantitative metrics (pixel-wise accuracy of 99.84% and a boundary displacement error of 0.58 pixels), while 3D FCDense produced the smoothest boundaries and best segmentations according to visual inspection. The trained 3D CNNs are able to segment entire 852 x 852 x 250 voxel 3D volumes in only ~60 seconds, thus hastening the progress towards a deeper understanding of phase transformation phenomena such as dendritic solidification.
[ { "version": "v1", "created": "Mon, 2 May 2022 19:20:05 GMT" } ]
2022-05-04T00:00:00
[ [ "James", "Jim", "" ], [ "Pruyne", "Nathan", "" ], [ "Stan", "Tiberiu", "" ], [ "Schwarting", "Marcus", "" ], [ "Yeom", "Jiwon", "" ], [ "Hong", "Seungbum", "" ], [ "Voorhees", "Peter", "" ], [ "Blaiszik", "Ben", "" ], [ "Foster", "Ian", "" ] ]
new_dataset
0.999007
2205.01183
Ben Titzer
Ben L. Titzer
A fast in-place interpreter for WebAssembly
null
null
null
null
cs.PL cs.PF
http://creativecommons.org/licenses/by/4.0/
WebAssembly (Wasm) is a compact, well-specified bytecode format that offers a portable compilation target with near-native execution speed. The bytecode format was specifically designed to be fast to parse, validate, and compile, positioning itself as a portable alternative to native code. It was pointedly not designed to be interpreted directly. Instead, design considerations at the time focused on competing with native code, utilizing optimizing compilers as the primary execution tier. Yet, in JIT scenarios, compilation time and memory consumption critically impact application startup, leading many Wasm engines to later deploy baseline (single-pass) compilers. Though faster, baseline compilers still take time and waste code space for infrequently executed code. A typical interpreter being infeasible, some engines resort to compiling Wasm not to machine code, but to a more compact internal format that is easy to interpret. This still takes time and wastes memory. Instead, we introduce in this article a fast in-place interpreter for WebAssembly, where no rewrite and no separate format is necessary. Our evaluation shows that in-place interpretation of Wasm code is space-efficient and fast, achieving performance on par with interpreting a custom-designed internal format. This fills a hole in the execution-tier space for Wasm, allowing for even faster startup and a lower memory footprint than previous engine configurations.
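To make the idea of in-place interpretation concrete, here is a toy sketch that dispatches directly on raw Wasm bytecodes (0x41 i32.const, 0x6A i32.add, 0x0B end) without rewriting them into a side format; a real engine adds validation, control flow, and the full opcode set:

```python
def read_leb128_signed(code, pc):
    """Decode a signed LEB128 integer starting at pc; return (value, new_pc)."""
    result, shift = 0, 0
    while True:
        byte = code[pc]
        pc += 1
        result |= (byte & 0x7F) << shift
        shift += 7
        if not (byte & 0x80):
            if byte & 0x40:           # sign-extend the final group
                result |= -(1 << shift)
            return result, pc

def interpret_in_place(code):
    """Interpret raw Wasm bytecode directly -- no rewrite, no side format."""
    stack, pc = [], 0
    while pc < len(code):
        op = code[pc]
        pc += 1
        if op == 0x41:                 # i32.const
            val, pc = read_leb128_signed(code, pc)
            stack.append(val)
        elif op == 0x6A:               # i32.add
            b, a = stack.pop(), stack.pop()
            stack.append((a + b) & 0xFFFFFFFF)
        elif op == 0x0B:               # end
            break
        else:
            raise NotImplementedError(f"opcode 0x{op:02x}")
    return stack

# (i32.const 2) (i32.const 40) i32.add end  ->  [42]
print(interpret_in_place(bytes([0x41, 0x02, 0x41, 0x28, 0x6A, 0x0B])))
```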
[ { "version": "v1", "created": "Mon, 2 May 2022 20:01:32 GMT" } ]
2022-05-04T00:00:00
[ [ "Titzer", "Ben L.", "" ] ]
new_dataset
0.997459
2205.01198
Zhening Huang
Zhening Huang, Weiwei Chen, Abir Al-Tabbaa, Ioannis Brilakis
NHA12D: A New Pavement Crack Dataset and a Comparison Study Of Crack Detection Algorithms
Accepted at EC3 2022
null
null
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Crack detection plays a key role in automated pavement inspection. Although a large number of algorithms have been developed in recent years to further boost performance, there are still remaining challenges in practice, due to the complexity of pavement images. To further accelerate development and identify the remaining challenges, this paper conducts a comparison study to evaluate the performance of state-of-the-art crack detection algorithms quantitatively and objectively. A more comprehensive annotated pavement crack dataset (NHA12D) that contains images with different viewpoints and pavement types is proposed. In the comparison study, crack detection algorithms were trained equally on the largest public crack dataset collected and evaluated on the proposed dataset (NHA12D). Overall, the U-Net model with VGG-16 as backbone had the best all-around performance, but models generally failed to distinguish cracks from concrete joints, leading to a high false-positive rate. It was also found that detecting cracks in concrete pavement images still has huge room for improvement, and a dataset of concrete pavement images is missing from the literature. Future directions in this area include filling this gap and using domain adaptation techniques to enhance detection results on unseen datasets.
[ { "version": "v1", "created": "Mon, 2 May 2022 20:22:50 GMT" } ]
2022-05-04T00:00:00
[ [ "Huang", "Zhening", "" ], [ "Chen", "Weiwei", "" ], [ "Al-Tabbaa", "Abir", "" ], [ "Brilakis", "Ioannis", "" ] ]
new_dataset
0.999781
2205.01213
Andrea Pizzo
Andrea Pizzo, Angel Lozano, Sundeep Rangan, Thomas Marzetta
Line-of-Sight MIMO via Reflection From a Smooth Surface
5 pages, 5 figures, accepted for presentation at the 2022 IEEE Veh. Techn. Conf. (VTC)
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
We provide a deterministic channel model for a scenario where wireless connectivity is established through a reflection from a planar smooth surface of infinite extent. The developed model is rigorously built upon the physics of wave propagation, and is as precise as the unboundedness and smoothness assumptions on the surface are tight. This model allows establishing that line-of-sight spatial multiplexing can take place via reflection off an electrically large surface, a situation of high interest for mmWave and terahertz frequencies.
[ { "version": "v1", "created": "Mon, 2 May 2022 21:09:28 GMT" } ]
2022-05-04T00:00:00
[ [ "Pizzo", "Andrea", "" ], [ "Lozano", "Angel", "" ], [ "Rangan", "Sundeep", "" ], [ "Marzetta", "Thomas", "" ] ]
new_dataset
0.996079
2205.01235
Edward Staley
Edward W. Staley and Jared Markowitz
Triangular Dropout: Variable Network Width without Retraining
null
null
null
null
cs.LG cs.NE
http://creativecommons.org/licenses/by-nc-nd/4.0/
One of the most fundamental design choices in neural networks is layer width: it affects the capacity of what a network can learn and determines the complexity of the solution. This latter property is often exploited when introducing information bottlenecks, forcing a network to learn compressed representations. However, such an architecture decision is typically immutable once training begins; switching to a more compressed architecture requires retraining. In this paper we present a new layer design, called Triangular Dropout, which does not have this limitation. After training, the layer can be arbitrarily reduced in width to exchange performance for narrowness. We demonstrate the construction and potential use cases of such a mechanism in three areas. Firstly, we describe the formulation of Triangular Dropout in autoencoders, creating models with selectable compression after training. Secondly, we add Triangular Dropout to VGG19 on ImageNet, creating a powerful network which, without retraining, can be significantly reduced in parameters. Lastly, we explore the application of Triangular Dropout to reinforcement learning (RL) policies on selected control problems.
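A minimal sketch of the layer's idea, sampling one random prefix width per forward pass for brevity (the paper's layer varies the kept width across examples, hence "triangular"; this simplification and all names are illustrative):

```python
import torch
import torch.nn as nn

class TriangularDropout(nn.Module):
    """During training, keep only a random prefix of the layer's units, so every
    prefix width is trained to work on its own. After training, the layer can
    be truncated to any width without retraining."""

    def __init__(self, width):
        super().__init__()
        self.width = width
        self.eval_width = width   # reduce after training to trade accuracy for size

    def forward(self, x):
        if self.training:
            # Sample a prefix width; zero out all later units.
            w = torch.randint(1, self.width + 1, (1,)).item()
        else:
            w = self.eval_width
        mask = torch.zeros(self.width, device=x.device)
        mask[:w] = 1.0
        return x * mask

layer = nn.Sequential(nn.Linear(32, 64), TriangularDropout(64))
layer.train()
y = layer(torch.randn(8, 32))   # a random prefix of the 64 units is kept
```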
[ { "version": "v1", "created": "Mon, 2 May 2022 21:58:16 GMT" } ]
2022-05-04T00:00:00
[ [ "Staley", "Edward W.", "" ], [ "Markowitz", "Jared", "" ] ]
new_dataset
0.987784
2205.01253
Takahiro Miura Mr.
Takahiro Miura, Ichiro Sakata
Storyteller: The papers co-citing Sleeping Beauty and Prince before awakening
preprint, submitted to ASIS&T SIG-MET Workshop, extended abstract
null
null
null
cs.DL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the Cumulative Advantage (CA) model, one of the most fundamental approaches to understanding the mechanism of citation dynamics, papers receive citations depending on how much they have already been cited. A substantial effect not captured by CA, however, is that some surprising discoveries suddenly acquire citations long after publication. This phenomenon is known as a Sleeping Beauty (SB). Since disruptive discoveries need long discussion in the research community before being accepted, SBs can capture innovative findings and reveal the nature of disruptive scientific knowledge production. To study the citation-burst mechanism of SBs, bibliometricians consider, for each SB, the existence of a Prince (PR) that can be the trigger of the SB's awakening. For example, the discovery of Green Fluorescent Protein (GFP), which received the Nobel Prize in Chemistry, had been overlooked for 30 years until Chalfie and Tsien, who also received the prize, developed a method to use GFP as a marker protein in genetic engineering. But how did Chalfie's and Tsien's research relight this hidden knowledge in the research community? If we can clarify such a mechanism of rediscovery from near oblivion, it can be helpful for science support and policy decision-making. This study proposes the Storyteller, which focuses on the connection between an SB and its PR established by co-citation before the SB's citation burst. A PR is found to be the paper awakening an SB in retrospect, but it is not easy to detect it as the trigger of the SB's awakening at the time the PR is submitted. We name the papers that co-cite an SB and its PR before the SB's citation burst Storytellers (ST), and analyze (1) how STs contribute to broadening the novelty of SB-PR connections and (2) how much STs lead the citation burst after awakening.
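A sketch of how Storytellers could be identified from citation data, given an SB, its PR, and the burst year; the data layout is illustrative:

```python
def find_storytellers(citations, sb, pr, burst_year):
    """Return papers that co-cite the Sleeping Beauty (sb) and its Prince (pr)
    before the SB's citation burst.

    citations: dict mapping paper id -> (publication_year, set of cited ids)
    """
    return [
        pid for pid, (year, refs) in citations.items()
        if sb in refs and pr in refs and year < burst_year
    ]

corpus = {
    "p1": (1995, {"SB", "PR"}),   # co-cites both before the burst -> Storyteller
    "p2": (2001, {"SB"}),         # cites only the SB
    "p3": (2010, {"SB", "PR"}),   # co-cites both, but after the burst
}
print(find_storytellers(corpus, "SB", "PR", burst_year=2005))  # ['p1']
```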
[ { "version": "v1", "created": "Tue, 3 May 2022 00:35:33 GMT" } ]
2022-05-04T00:00:00
[ [ "Miura", "Takahiro", "" ], [ "Sakata", "Ichiro", "" ] ]
new_dataset
0.990516
2205.01290
Jayetri Bardhan
Jayetri Bardhan, Anthony Colas, Kirk Roberts, Daisy Zhe Wang
DrugEHRQA: A Question Answering Dataset on Structured and Unstructured Electronic Health Records For Medicine Related Queries
15 pages (including Appendix section), 7 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper develops the first question answering dataset (DrugEHRQA) containing question-answer pairs from both structured tables and unstructured notes from a publicly available Electronic Health Record (EHR). EHRs contain patient records, stored in structured tables and unstructured clinical notes. The information in structured and unstructured EHRs is not strictly disjoint: information may be duplicated, contradictory, or provide additional context between these sources. Our dataset has medication-related queries, containing over 70,000 question-answer pairs. To provide a baseline model and help analyze the dataset, we have used a simple model (MultimodalEHRQA) which uses the predictions of a modality selection network to choose between EHR tables and clinical notes to answer the questions. This is used to direct the questions to a table-based or text-based state-of-the-art QA model. To address the problem arising from complex, nested queries, this is the first time Relation-Aware Schema Encoding and Linking for Text-to-SQL Parsers (RAT-SQL) has been used to test the structure of query templates on EHR data. Our goal is to provide a benchmark dataset for multi-modal QA systems, and to open up new avenues of research in improving question answering over EHR structured data by using context from unstructured clinical data.
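A routing sketch in the spirit of the MultimodalEHRQA baseline: a modality selection network decides whether the answer lives in EHR tables or in clinical notes, and the question is forwarded accordingly. All callables are placeholders for trained models:

```python
def multimodal_ehr_qa(question, selector, table_qa, text_qa):
    """Route a question to the table-QA or text-QA model based on the
    modality selector's prediction (names are illustrative)."""
    if selector(question) == "table":
        return table_qa(question)   # e.g., a Text-to-SQL model such as RAT-SQL
    return text_qa(question)        # e.g., a span-extraction reading model
```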
[ { "version": "v1", "created": "Tue, 3 May 2022 03:50:50 GMT" } ]
2022-05-04T00:00:00
[ [ "Bardhan", "Jayetri", "" ], [ "Colas", "Anthony", "" ], [ "Roberts", "Kirk", "" ], [ "Wang", "Daisy Zhe", "" ] ]
new_dataset
0.998066
2205.01381
Mike Zhang
Mike Zhang, Kristian Nørgaard Jensen, Barbara Plank
Kompetencer: Fine-grained Skill Classification in Danish Job Postings via Distant Supervision and Transfer Learning
7 pages, accepted to LREC 2022. arXiv admin note: text overlap with arXiv:2204.12811
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Skill Classification (SC) is the task of classifying job competences from job postings. This work is the first in SC applied to Danish job vacancy data. We release the first Danish job posting dataset: Kompetencer (en: competences), annotated for nested spans of competences. To improve upon coarse-grained annotations, we make use of the European Skills, Competences, Qualifications and Occupations (ESCO; le Vrang et al., 2014) taxonomy API to obtain fine-grained labels via distant supervision. We study two setups: zero-shot and few-shot classification. We fine-tune English-based models and RemBERT (Chung et al., 2020) and compare them to in-language Danish models. Our results show that RemBERT significantly outperforms all other models in both the zero-shot and the few-shot setting.
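A sketch of what such a distant-supervision lookup could look like against the public ESCO search API; the endpoint and parameter names here are assumptions for illustration, not taken from the paper:

```python
import requests

# Public ESCO API; endpoint and parameters are illustrative assumptions.
ESCO_API = "https://ec.europa.eu/esco/api/search"

def fine_grained_label(span, language="en"):
    """Look up a coarse competence span in the ESCO taxonomy and return the
    top-ranked fine-grained skill label, or None if nothing matches."""
    resp = requests.get(ESCO_API, params={
        "text": span, "type": "skill", "language": language, "limit": 1,
    })
    resp.raise_for_status()
    results = resp.json().get("_embedded", {}).get("results", [])
    return results[0]["title"] if results else None
```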
[ { "version": "v1", "created": "Tue, 3 May 2022 09:13:55 GMT" } ]
2022-05-04T00:00:00
[ [ "Zhang", "Mike", "" ], [ "Jensen", "Kristian Nørgaard", "" ], [ "Plank", "Barbara", "" ] ]
new_dataset
0.999024
2205.01506
Nayla Escribano
Nayla Escribano, Jon Ander González, Julen Orbegozo-Terradillos, Ainara Larrondo-Ureta, Simón Peña-Fernández, Olatz Perez-de-Viñaspre and Rodrigo Agerri
BasqueParl: A Bilingual Corpus of Basque Parliamentary Transcriptions
9 pages, 14 figures, 4 tables. To be published in LREC 2022
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Parliamentary transcripts provide a valuable resource to understand the reality and know about the most important facts that occur over time in our societies. Furthermore, the political debates captured in these transcripts facilitate research on political discourse from a computational social science perspective. In this paper we release the first version of a newly compiled corpus from Basque parliamentary transcripts. The corpus is characterized by heavy Basque-Spanish code-switching, and represents an interesting resource to study political discourse in contrasting languages such as Basque and Spanish. We enrich the corpus with metadata related to relevant attributes of the speakers and speeches (language, gender, party...) and process the text to obtain named entities and lemmas. The obtained metadata is then used to perform a detailed corpus analysis which provides interesting insights about the language use of the Basque political representatives across time, parties and gender.
[ { "version": "v1", "created": "Tue, 3 May 2022 14:02:24 GMT" } ]
2022-05-04T00:00:00
[ [ "Escribano", "Nayla", "" ], [ "González", "Jon Ander", "" ], [ "Orbegozo-Terradillos", "Julen", "" ], [ "Larrondo-Ureta", "Ainara", "" ], [ "Peña-Fernández", "Simón", "" ], [ "Perez-de-Viñaspre", "Olatz", "" ], [ "Agerri", "Rodrigo", "" ] ]
new_dataset
0.998181
2205.01515
Nikolas Ebert
Nikolas Ebert, Patrick Mangat, Oliver Wasenmüller
Multitask Network for Joint Object Detection, Semantic Segmentation and Human Pose Estimation in Vehicle Occupancy Monitoring
This paper has been accepted at IEEE Intelligent Vehicles Symposium (IV), 2022 (ORAL)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to ensure safe autonomous driving, precise information about the conditions in and around the vehicle must be available. Accordingly, the monitoring of occupants and objects inside the vehicle is crucial. In the state of the art, single or multiple deep neural networks are used for either object recognition, semantic segmentation, or human pose estimation. In contrast, we propose our Multitask Detection, Segmentation and Pose Estimation Network (MDSP) -- the first multitask network solving all three of these tasks jointly in the area of occupancy monitoring. Due to the shared architecture, memory and computing costs can be saved while achieving higher accuracy. Furthermore, our architecture allows a flexible combination of the three mentioned tasks during simple end-to-end training. We perform comprehensive evaluations on the public datasets SVIRO and TiCaM in order to demonstrate the superior performance.
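An illustrative shared-backbone layout for such a multitask network; the layer sizes and head designs below are placeholders, not the MDSP architecture:

```python
import torch
import torch.nn as nn

class MultitaskHeadsSketch(nn.Module):
    """One shared encoder feeding three task heads (detection, segmentation,
    pose), so feature computation is paid for once across all tasks."""

    def __init__(self, n_classes=4, n_keypoints=17):
        super().__init__()
        self.backbone = nn.Sequential(                 # shared features
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.det_head = nn.Conv2d(64, n_classes + 4, 1)   # class scores + box offsets
        self.seg_head = nn.Conv2d(64, n_classes, 1)       # per-pixel class logits
        self.pose_head = nn.Conv2d(64, n_keypoints, 1)    # keypoint heatmaps

    def forward(self, x):
        f = self.backbone(x)
        return self.det_head(f), self.seg_head(f), self.pose_head(f)

det, seg, pose = MultitaskHeadsSketch()(torch.randn(1, 3, 128, 128))
```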
[ { "version": "v1", "created": "Tue, 3 May 2022 14:11:18 GMT" } ]
2022-05-04T00:00:00
[ [ "Ebert", "Nikolas", "" ], [ "Mangat", "Patrick", "" ], [ "Wasenmüller", "Oliver", "" ] ]
new_dataset
0.999574
2205.01569
TianSheuan Chang
Shu-Hung Kuo and Tian-Sheuan Chang
PSCNN: A 885.86 TOPS/W Programmable SRAM-based Computing-In-Memory Processor for Keyword Spotting
5 pages, 7 figures, published in IEEE ISCAS 2022
null
null
null
cs.AR cs.LG eess.AS
http://creativecommons.org/licenses/by-sa/4.0/
Computing-in-memory (CIM) has attracted significant attention in recent years due to its massive parallelism and low power consumption. However, current CIM designs suffer from the large area overhead of small CIM macros and poor programmability for model execution. This paper proposes a programmable CIM processor with a single large CIM macro instead of multiple smaller ones for power-efficient computation, along with a flexible instruction set to support various binary 1-D convolutional neural network (CNN) models in an easy way. Furthermore, the proposed architecture adopts a pooling write-back method to support fused or independent convolution/pooling operations, reducing latency by 35.9%, and a flexible ping-pong feature SRAM to fit different feature map sizes during layer-by-layer execution. The design, fabricated in TSMC 28nm technology, achieves 150.8 GOPS throughput and 885.86 TOPS/W power efficiency at 10 MHz when executing our binary keyword spotting model, offering higher power efficiency and flexibility than previous designs.
[ { "version": "v1", "created": "Mon, 2 May 2022 09:58:18 GMT" } ]
2022-05-04T00:00:00
[ [ "Kuo", "Shu-Hung", "" ], [ "Chang", "Tian-Sheuan", "" ] ]
new_dataset
0.99672
2205.01571
TianSheuan Chang
Kuo-Wei Chang, Hsu-Tung Shih, Tian-Sheuan Chang, Shang-Hong Tsai, Chih-Chyau Yang, Chien-Ming Wu, Chun-Ming Huang
A Real Time 1280x720 Object Detection Chip With 585MB/s Memory Traffic
11 pages, 14 figures, to be published IEEE Transactions on Very Large Scale Integration (VLSI) Systems
null
10.1109/TVLSI.2022.3149768
null
cs.AR cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Memory bandwidth has become the real-time bottleneck of current deep learning accelerators (DLA), particularly for high-definition (HD) object detection. Under resource constraints, this paper proposes a low-memory-traffic DLA chip with joint hardware and software optimization. To maximize hardware utilization under limited memory bandwidth, we morph and fuse the object detection model into a group fusion-ready model to reduce intermediate data access. This reduces YOLOv2's feature memory traffic from 2.9 GB/s to 0.15 GB/s. To support group fusion, our previous DLA-based hardware employs a unified buffer with write-masking for simple layer-by-layer processing within a fusion group. Compared to our previous DLA with the same number of PEs, the chip implemented in a TSMC 40nm process supports 1280x720@30FPS object detection and consumes 7.9X less external DRAM access energy, from 2607 mJ to 327.6 mJ.
[ { "version": "v1", "created": "Mon, 2 May 2022 09:58:39 GMT" } ]
2022-05-04T00:00:00
[ [ "Chang", "Kuo-Wei", "" ], [ "Shih", "Hsu-Tung", "" ], [ "Chang", "Tian-Sheuan", "" ], [ "Tsai", "Shang-Hong", "" ], [ "Yang", "Chih-Chyau", "" ], [ "Wu", "Chien-Ming", "" ], [ "Huang", "Chun-Ming", "" ] ]
new_dataset
0.996534
1901.10736
Christian Schilling
Sergiy Bogomolov, Marcelo Forets, Goran Frehse, Kostiantyn Potomkin, Christian Schilling
JuliaReach: a Toolbox for Set-Based Reachability
Accepted in Proceedings of HSCC'19: 22nd ACM International Conference on Hybrid Systems: Computation and Control (HSCC'19)
HSCC 2019
10.1145/3302504.3311804
null
cs.SY math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present JuliaReach, a toolbox for set-based reachability analysis of dynamical systems. JuliaReach consists of two main packages: Reachability, containing implementations of reachability algorithms for continuous and hybrid systems, and LazySets, a standalone library that implements state-of-the-art algorithms for calculus with convex sets. The library offers both concrete and lazy set representations, where the latter stands for the ability to delay set computations until they are needed. The choice of the programming language Julia and the accompanying documentation of our toolbox allow researchers to easily translate set-based algorithms from mathematics to software in a platform-independent way, while achieving runtime performance that is comparable to statically compiled languages. Combining lazy operations in high dimensions and explicit computations in low dimensions, JuliaReach can be applied to solve complex, large-scale problems.
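The lazy-set idea can be sketched in a few lines: set operations return lightweight wrapper objects, and queries such as support functions are evaluated on demand. This Python stand-in mirrors the LazySets concepts only loosely:

```python
import numpy as np

class Ball:
    """Euclidean ball; support function rho(d) = <c, d> + r * ||d||."""
    def __init__(self, center, radius):
        self.center, self.radius = np.asarray(center, float), radius
    def support(self, d):
        return self.center @ d + self.radius * np.linalg.norm(d)

class MinkowskiSum:
    """Lazy Minkowski sum: never materialized as a set of points, since
    rho_{X+Y}(d) = rho_X(d) + rho_Y(d) can be evaluated on demand."""
    def __init__(self, x, y):
        self.x, self.y = x, y
    def support(self, d):
        return self.x.support(d) + self.y.support(d)

s = MinkowskiSum(Ball([0, 0], 1.0), Ball([3, 0], 0.5))  # O(1) to construct
print(s.support(np.array([1.0, 0.0])))                  # 4.5, computed on query
```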
[ { "version": "v1", "created": "Wed, 30 Jan 2019 10:02:35 GMT" }, { "version": "v2", "created": "Tue, 5 Mar 2019 09:17:47 GMT" } ]
2022-05-03T00:00:00
[ [ "Bogomolov", "Sergiy", "" ], [ "Forets", "Marcelo", "" ], [ "Frehse", "Goran", "" ], [ "Potomkin", "Kostiantyn", "" ], [ "Schilling", "Christian", "" ] ]
new_dataset
0.995809
1902.00815
Bjørn Kjos-Hanssen
Bjørn Kjos-Hanssen and Lei Liu
The number of languages with maximum state complexity
Algebra Universalis, accepted for publication. Preliminary version in: Theory and Applications of Models of Computation (TAMC) 2019. Lecture Notes in Computer Science 11436 (2019)
null
null
null
cs.FL math.CO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Câmpeanu and Ho (2004) determined the maximum finite state complexity of finite languages, building on work of Champarnaud and Pin (1989). They stated that it is very difficult to determine the number of maximum-complexity languages. Here we give a formula for this number. We also generalize their work from languages to functions on finite sets.
[ { "version": "v1", "created": "Sat, 2 Feb 2019 23:44:04 GMT" }, { "version": "v2", "created": "Fri, 29 Apr 2022 23:25:28 GMT" } ]
2022-05-03T00:00:00
[ [ "Kjos-Hanssen", "Bjørn", "" ], [ "Liu", "Lei", "" ] ]
new_dataset
0.996006
1911.00889
Dimitrios Stathis
Dimitrios Stathis, Chirag Sudarshan, Yu Yang, Matthias Jung, Syed Asad Mohamad Hasan Jafri, Christian Weis, Ahmed Hemani, Anders Lansner, Norbert Wehn
eBrainII: A 3 kW Realtime Custom 3D DRAM integrated ASIC implementation of a Biologically Plausible Model of a Human Scale Cortex
null
null
10.1007/s11265-020-01562-x
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Artificial Neural Networks (ANNs) like CNNs/DNNs and LSTMs are not biologically plausible, and in spite of their initial success, they cannot attain the cognitive capabilities enabled by the dynamic hierarchical associative memory systems of biological brains. Biologically plausible spiking brain models, e.g. of the cortex, basal ganglia, and amygdala, have greater potential to achieve biological-brain-like cognitive capabilities. The Bayesian Confidence Propagation Neural Network (BCPNN) is a biologically plausible spiking model of the cortex. A human-scale model of BCPNN in real time requires 162 TFlop/s and 50 TB of synaptic weight storage accessed with a bandwidth of 200 TB/s, while the spiking bandwidth is relatively modest at 250 GB/s. A hand-optimized implementation of a rodent-scale BCPNN on Tesla K80 GPUs requires 3 kW; we extrapolate from that that a human-scale network would require 3 MW. These power numbers rule out such implementations for field deployment as advanced cognition engines in embedded systems. The key innovation this paper reports is that it is feasible and affordable to implement real-time BCPNN as a custom tiled ASIC in 28 nm technology with custom 3D DRAM - eBrain II - that consumes 3 kW for a human-scale and 12 W for a rodent-scale cortex model. Such implementations eminently fulfill the demands of field deployment.
[ { "version": "v1", "created": "Sun, 3 Nov 2019 14:02:58 GMT" } ]
2022-05-03T00:00:00
[ [ "Stathis", "Dimitrios", "" ], [ "Sudarshan", "Chirag", "" ], [ "Yang", "Yu", "" ], [ "Jung", "Matthias", "" ], [ "Jafri", "Syed Asad Mohamad Hasan", "" ], [ "Weis", "Christian", "" ], [ "Hemani", "Ahmed", "" ], [ "Lansner", "Anders", "" ], [ "Wehn", "Norbert", "" ] ]
new_dataset
0.998567
2007.03180
Chuang Yang
Chuang Yang (1), Zhiwen Zhang (1), Zipei Fan (1 and 2), Renhe Jiang (1 and 2), Quanjun Chen (1 and 2), Xuan Song (1 and 2), Ryosuke Shibasaki (1 and 2) ((1) Center for Spatial Information Science, The University of Tokyo, (2) SUSTech-UTokyo Joint Research Center on Super Smart City, Southern University of Science and Technology)
EpiMob: Interactive Visual Analytics of Citywide Human Mobility Restrictions for Epidemic Control
null
IEEE Transactions on Visualization and Computer Graphics, 2022
10.1109/TVCG.2022.3165385
null
cs.HC cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The outbreak of coronavirus disease (COVID-19) has swept across more than 180 countries and territories since late January 2020. As a worldwide emergency response, governments have implemented various measures and policies, such as self-quarantine, travel restrictions, work from home, and regional lockdown, to control the spread of the epidemic. These countermeasures seek to restrict human mobility because COVID-19 is a highly contagious disease that is spread by human-to-human transmission. Medical experts and policymakers have expressed the urgency to effectively evaluate the outcome of human restriction policies with the aid of big data and information technology. Thus, based on big human mobility data and city POI data, an interactive visual analytics system called Epidemic Mobility (EpiMob) was designed in this study. The system interactively simulates the changes in human mobility and infection status in response to the implementation of a certain restriction policy or a combination of policies (e.g., regional lockdown, telecommuting, screening). Users can conveniently designate the spatial and temporal ranges for different mobility restriction policies. Then, the results reflecting the infection situation under different policies are dynamically displayed and can be flexibly compared and analyzed in depth. Multiple case studies consisting of interviews with domain experts were conducted in the largest metropolitan area of Japan (i.e., Greater Tokyo Area) to demonstrate that the system can provide insight into the effects of different human mobility restriction policies for epidemic control, through measurements and comparisons.
[ { "version": "v1", "created": "Tue, 7 Jul 2020 03:01:59 GMT" }, { "version": "v2", "created": "Tue, 26 Jan 2021 08:02:21 GMT" }, { "version": "v3", "created": "Mon, 29 Nov 2021 16:39:24 GMT" } ]
2022-05-03T00:00:00
[ [ "Yang", "Chuang", "", "1 and 2" ], [ "Zhang", "Zhiwen", "", "1 and 2" ], [ "Fan", "Zipei", "", "1 and 2" ], [ "Jiang", "Renhe", "", "1\n and 2" ], [ "Chen", "Quanjun", "", "1 and 2" ], [ "Song", "Xuan", "", "1 and 2" ], [ "Shibasaki", "Ryosuke", "", "1 and\n 2" ] ]
new_dataset
0.998216
2012.02087
Mohamed Sayed
Mohamed Sayed, Robert Cinca, Enrico Costanza, Gabriel Brostow
LookOut! Interactive Camera Gimbal Controller for Filming Long Takes
V3: ToG version with final edits
ACM Trans. Graph. 41, 3, Article 30 (March 2022), 22 pages
10.1145/3506693
null
cs.GR cs.HC cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
The job of a camera operator is challenging, and potentially dangerous, when filming long moving camera shots. Broadly, the operator must keep the actors in-frame while safely navigating around obstacles, and while fulfilling an artistic vision. We propose a unified hardware and software system that distributes some of the camera operator's burden, freeing them up to focus on safety and aesthetics during a take. Our real-time system provides a solo operator with end-to-end control, so they can balance on-set responsiveness to action vs planned storyboards and framing, while looking where they're going. By default, we film without a field monitor. Our LookOut system is built around a lightweight commodity camera gimbal mechanism, with heavy modifications to the controller, which would normally just provide active stabilization. Our control algorithm reacts to speech commands, video, and a pre-made script. Specifically, our automatic monitoring of the live video feed saves the operator from distractions. In pre-production, an artist uses our GUI to design a sequence of high-level camera "behaviors." Those can be specific, based on a storyboard, or looser objectives, such as "frame both actors." Then during filming, a machine-readable script, exported from the GUI, ties together with the sensor readings to drive the gimbal. To validate our algorithm, we compared tracking strategies, interfaces, and hardware protocols, and collected impressions from a) film-makers who used all aspects of our system, and b) film-makers who watched footage filmed using LookOut.
[ { "version": "v1", "created": "Thu, 3 Dec 2020 17:20:45 GMT" }, { "version": "v2", "created": "Wed, 30 Dec 2020 22:04:49 GMT" }, { "version": "v3", "created": "Sat, 30 Apr 2022 21:38:45 GMT" } ]
2022-05-03T00:00:00
[ [ "Sayed", "Mohamed", "" ], [ "Cinca", "Robert", "" ], [ "Costanza", "Enrico", "" ], [ "Brostow", "Gabriel", "" ] ]
new_dataset
0.996168
2103.12827
Cat Le
Cat P. Le, Mohammadreza Soltani, Juncheng Dong, Vahid Tarokh
Fisher Task Distance and Its Application in Neural Architecture Search
Published in IEEE Access, Volume 10, 2022
null
10.1109/ACCESS.2022.3171741
null
cs.LG eess.IV stat.ML
http://creativecommons.org/licenses/by/4.0/
We formulate an asymmetric (or non-commutative) distance between tasks based on Fisher Information Matrices, called the Fisher task distance. This distance represents the complexity of transferring the knowledge from one task to another. We provide a proof of consistency for our distance through theorems and experiments on various classification tasks from the MNIST, CIFAR-10, CIFAR-100, ImageNet, and Taskonomy datasets. Next, we construct an online neural architecture search framework using the Fisher task distance, in which we have access to previously learned tasks. Using the Fisher task distance, we can identify the learned tasks closest to the target task and utilize the knowledge learned from these related tasks on the target task. Here, we show how the proposed distance between a target task and a set of learned tasks can be used to reduce the neural architecture search space for the target task. The complexity reduction in the search space for task-specific architectures is achieved by building on the optimized architectures for similar tasks instead of performing a full search that does not use this side information. Experimental results for tasks on the MNIST, CIFAR-10, CIFAR-100, and ImageNet datasets demonstrate the efficacy of the proposed approach and its improvements, in terms of performance and number of parameters, over other gradient-based search methods such as ENAS, DARTS, and PC-DARTS.
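One plausible instantiation of a Fisher-based task distance, using diagonal empirical FIMs and a Fréchet-style formula; the paper's exact definition may differ, and its asymmetry comes from which task's trained network the FIMs are evaluated on, which this sketch does not model:

```python
import numpy as np

def diag_fisher(per_example_grads):
    """Diagonal empirical Fisher: mean of squared per-example loss gradients.
    per_example_grads: array of shape (n_examples, n_params)."""
    f = np.mean(per_example_grads ** 2, axis=0)
    return f / f.sum()   # normalize so tasks of different scales are comparable

def fisher_task_distance(f_a, f_b):
    """Frechet-style distance between diagonal FIMs:
    d = (1/sqrt(2)) * ||F_a^(1/2) - F_b^(1/2)||_F  (diagonal case)."""
    return np.sqrt(np.sum((np.sqrt(f_a) - np.sqrt(f_b)) ** 2)) / np.sqrt(2)

g_a = np.random.randn(128, 10)          # stand-in gradients for task A
g_b = np.random.randn(128, 10) * 2.0    # stand-in gradients for task B
print(fisher_task_distance(diag_fisher(g_a), diag_fisher(g_b)))
```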
[ { "version": "v1", "created": "Tue, 23 Mar 2021 20:43:31 GMT" }, { "version": "v2", "created": "Thu, 25 Mar 2021 14:13:56 GMT" }, { "version": "v3", "created": "Wed, 8 Sep 2021 19:48:59 GMT" }, { "version": "v4", "created": "Fri, 21 Jan 2022 22:23:26 GMT" }, { "version": "v5", "created": "Sat, 30 Apr 2022 04:40:37 GMT" } ]
2022-05-03T00:00:00
[ [ "Le", "Cat P.", "" ], [ "Soltani", "Mohammadreza", "" ], [ "Dong", "Juncheng", "" ], [ "Tarokh", "Vahid", "" ] ]
new_dataset
0.999519
2107.03761
Akhila Sri Manasa Venigalla
Akhila Sri Manasa Venigalla, Kowndinya Boyalakunta and Sridhar Chimalakonda
GitQ- Towards Using Badges as Visual Cues for GitHub Projects
5 pages, 3 figures
null
10.1145/3524610.3527876
null
cs.SE cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
GitHub hosts millions of software repositories, facilitating developers to contribute to many projects in multiple ways. Most of the information about the repositories is text-based, in the form of stars, forks, commits, and so on. However, developers willing to contribute to projects on GitHub often find it challenging to select appropriate projects to contribute to or reuse, due to the large number of repositories present on GitHub. Further, obtaining the required information often becomes a tedious process, as one has to carefully mine information hidden inside the repository. To alleviate these effort-intensive mining procedures, researchers have proposed npm-badges to outline information relating to the build status of a project. However, these badges are static, which limits their usage to package dependency and build details. Adding visual cues such as badges to repositories could reduce the search space for developers. Hence, we present GitQ, which automatically augments GitHub repositories with badges representing information about source code and project maintenance. Presenting GitQ as a browser plugin for GitHub makes it easily accessible to developers using GitHub. GitQ was evaluated with 15 developers based on the UTAUT model to understand developer perception of its usefulness. We observed that 11 out of 15 developers perceived GitQ to be useful in identifying the right set of repositories using visual cues such as those generated by GitQ. The source code and tool are available for download on GitHub at https://github.com/gitq-for-github/plugin, and the demo can be found at https://youtu.be/c0yohmIat3A.
[ { "version": "v1", "created": "Thu, 8 Jul 2021 11:11:48 GMT" }, { "version": "v2", "created": "Mon, 2 May 2022 17:26:47 GMT" } ]
2022-05-03T00:00:00
[ [ "Venigalla", "Akhila Sri Manasa", "" ], [ "Boyalakunta", "Kowndinya", "" ], [ "Chimalakonda", "Sridhar", "" ] ]
new_dataset
0.978282
2107.11041
Yoonsik Kim
Junyeop Lee, Yoonsik Kim, Seonghyeon Kim, Moonbin Yim, Seung Shin, Gayoung Lee, Sungrae Park
RewriteNet: Reliable Scene Text Editing with Implicit Decomposition of Text Contents and Styles
CVPRW 2022 - AI for Content Creation Workshop
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Scene text editing (STE), which converts the text in a scene image into desired text while preserving the original style, is a challenging task due to the complex interaction between text and style. In this paper, we propose a novel STE model, referred to as RewriteNet, that decomposes text images into content and style features and re-writes the text in the original image. Specifically, RewriteNet implicitly distinguishes content from style by introducing scene text recognition. Additionally, going beyond exact supervision with synthetic examples, we propose a self-supervised training scheme for unlabeled real-world images, which bridges the domain gap between synthetic and real data. Our experiments show that RewriteNet achieves better generation performance than other methods. Further analysis confirms the feature decomposition of RewriteNet and demonstrates its reliability and robustness through diverse experiments. Our implementation is publicly available at https://github.com/clovaai/rewritenet
[ { "version": "v1", "created": "Fri, 23 Jul 2021 06:32:58 GMT" }, { "version": "v2", "created": "Mon, 2 May 2022 11:30:26 GMT" } ]
2022-05-03T00:00:00
[ [ "Lee", "Junyeop", "" ], [ "Kim", "Yoonsik", "" ], [ "Kim", "Seonghyeon", "" ], [ "Yim", "Moonbin", "" ], [ "Shin", "Seung", "" ], [ "Lee", "Gayoung", "" ], [ "Park", "Sungrae", "" ] ]
new_dataset
0.986409
2108.07154
Yinhe Zheng Dr.
Yinhe Zheng, Guanyi Chen, Xin Liu, Jian Sun
MMChat: Multi-Modal Chat Dataset on Social Media
Accepted by LREC2022. Dataset available in https://github.com/silverriver/MMChat
null
null
null
cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
Incorporating multi-modal contexts in conversation is important for developing more engaging dialogue systems. In this work, we explore this direction by introducing MMChat: a large-scale Chinese multi-modal dialogue corpus (32.4M raw dialogues and 120.84K filtered dialogues). Unlike previous corpora that are crowd-sourced or collected from fictitious movies, MMChat contains image-grounded dialogues collected from real conversations on social media, in which the sparsity issue is observed. Specifically, image-initiated dialogues in common communications may deviate to some non-image-grounded topics as the conversation proceeds. To better investigate this issue, we manually annotate 100K dialogues from MMChat and further filter the corpus accordingly, which yields MMChat-hf. We develop a benchmark model to address the sparsity issue in dialogue generation tasks by adapting the attention routing mechanism on image features. Experiments demonstrate the usefulness of incorporating image features and the effectiveness of handling the sparsity of image features.
[ { "version": "v1", "created": "Mon, 16 Aug 2021 15:27:49 GMT" }, { "version": "v2", "created": "Sat, 9 Apr 2022 02:04:48 GMT" }, { "version": "v3", "created": "Sun, 1 May 2022 09:51:17 GMT" } ]
2022-05-03T00:00:00
[ [ "Zheng", "Yinhe", "" ], [ "Chen", "Guanyi", "" ], [ "Liu", "Xin", "" ], [ "Sun", "Jian", "" ] ]
new_dataset
0.999839
2109.04919
Shutong Feng
Shutong Feng, Nurul Lubis, Christian Geishauser, Hsien-chin Lin, Michael Heck, Carel van Niekerk and Milica Gašić
EmoWOZ: A Large-Scale Corpus and Labelling Scheme for Emotion Recognition in Task-Oriented Dialogue Systems
Accepted for publication at LREC 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The ability to recognise emotions lends a conversational artificial intelligence a human touch. While emotions in chit-chat dialogues have received substantial attention, emotions in task-oriented dialogues remain largely unaddressed. This is despite emotions and dialogue success having equally important roles in a natural system. Existing emotion-annotated task-oriented corpora are limited in size, label richness, and public availability, creating a bottleneck for downstream tasks. To lay a foundation for studies on emotions in task-oriented dialogues, we introduce EmoWOZ, a large-scale manually emotion-annotated corpus of task-oriented dialogues. EmoWOZ is based on MultiWOZ, a multi-domain task-oriented dialogue dataset. It contains more than 11K dialogues with more than 83K emotion annotations of user utterances. In addition to Wizard-of-Oz dialogues from MultiWOZ, we collect human-machine dialogues within the same set of domains to sufficiently cover the space of various emotions that can happen during the lifetime of a data-driven dialogue system. To the best of our knowledge, this is the first large-scale open-source corpus of its kind. We propose a novel emotion labelling scheme, which is tailored to task-oriented dialogues. We report a set of experimental results to show the usability of this corpus for emotion recognition and state tracking in task-oriented dialogues.
[ { "version": "v1", "created": "Fri, 10 Sep 2021 15:00:01 GMT" }, { "version": "v2", "created": "Mon, 2 May 2022 08:34:11 GMT" } ]
2022-05-03T00:00:00
[ [ "Feng", "Shutong", "" ], [ "Lubis", "Nurul", "" ], [ "Geishauser", "Christian", "" ], [ "Lin", "Hsien-chin", "" ], [ "Heck", "Michael", "" ], [ "van Niekerk", "Carel", "" ], [ "Gašić", "Milica", "" ] ]
new_dataset
0.999782
2110.07731
Patrick Huber
Patrick Huber, Armen Aghajanyan, Barlas Oğuz, Dmytro Okhonko, Wen-tau Yih, Sonal Gupta, Xilun Chen
CCQA: A New Web-Scale Question Answering Dataset for Model Pre-Training
9 pages, Findings of NAACL 2022
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rise of large-scale pre-trained language models, open-domain question answering (ODQA) has become an important research topic in NLP. Based on the popular pre-training/fine-tuning approach, we posit that an additional in-domain pre-training stage using a large-scale, natural, and diverse question-answering (QA) dataset can be beneficial for ODQA. Consequently, we propose a novel QA dataset based on the Common Crawl project in this paper. Using the readily available schema.org annotation, we extract around 130 million multilingual question-answer pairs, including about 60 million English data points. With this previously unseen number of natural QA pairs, we pre-train popular language models to show the potential of large-scale in-domain pre-training for the task of question answering. In our experiments, we find that pre-training question-answering models on our Common Crawl Question Answering dataset (CCQA) achieves promising results in zero-shot, low-resource, and fine-tuned settings across multiple tasks, models, and benchmarks.
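A sketch of harvesting QA pairs from schema.org markup, here via JSON-LD QAPage blocks; the paper's Common Crawl pipeline is far more involved, and the field handling below is illustrative:

```python
import json
from bs4 import BeautifulSoup

def extract_qa_pairs(html):
    """Pull (question, answer) pairs from a page's schema.org JSON-LD
    annotations (QAPage -> mainEntity Question -> acceptedAnswer)."""
    pairs = []
    soup = BeautifulSoup(html, "html.parser")
    for tag in soup.find_all("script", type="application/ld+json"):
        try:
            data = json.loads(tag.string or "")
        except json.JSONDecodeError:
            continue                       # skip malformed markup
        for item in data if isinstance(data, list) else [data]:
            if item.get("@type") == "QAPage":
                q = item.get("mainEntity", {})
                answers = q.get("acceptedAnswer")
                answers = answers if isinstance(answers, list) else [answers]
                for a in filter(None, answers):
                    pairs.append((q.get("name"), a.get("text")))
    return pairs
```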
[ { "version": "v1", "created": "Thu, 14 Oct 2021 21:23:01 GMT" }, { "version": "v2", "created": "Mon, 2 May 2022 17:43:22 GMT" } ]
2022-05-03T00:00:00
[ [ "Huber", "Patrick", "" ], [ "Aghajanyan", "Armen", "" ], [ "Oğuz", "Barlas", "" ], [ "Okhonko", "Dmytro", "" ], [ "Yih", "Wen-tau", "" ], [ "Gupta", "Sonal", "" ], [ "Chen", "Xilun", "" ] ]
new_dataset
0.997999
2111.12358
Binhui Xie
Binhui Xie, Mingjia Li and Shuang Li
SPCL: A New Framework for Domain Adaptive Semantic Segmentation via Semantic Prototype-based Contrastive Learning
23 pages, 9 figures; The code is publicly available at https://github.com/BinhuiXie/SPCL
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although there is significant progress in supervised semantic segmentation, it remains challenging to deploy the segmentation models to unseen domains due to domain biases. Domain adaptation can help in this regard by transferring knowledge from a labeled source domain to an unlabeled target domain. Previous methods typically attempt to perform the adaptation on global features, however, the local semantic affiliations accounting for each pixel in the feature space are often ignored, resulting in less discriminability. To solve this issue, we propose a novel semantic prototype-based contrastive learning framework for fine-grained class alignment. Specifically, the semantic prototypes provide supervisory signals for per-pixel discriminative representation learning and each pixel of source and target domains in the feature space is required to reflect the content of the corresponding semantic prototype. In this way, our framework is able to explicitly make intra-class pixel representations closer and inter-class pixel representations further apart to improve the robustness of the segmentation model as well as alleviate the domain shift problem. Our method is easy to implement and attains superior results compared to state-of-the-art approaches, as is demonstrated with a number of experiments. The code is publicly available at https://github.com/BinhuiXie/SPCL.
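The prototype-based contrastive idea can be sketched as an InfoNCE-style loss over class prototypes; this is a generic formulation, not necessarily the paper's exact loss:

```python
import torch
import torch.nn.functional as F

def prototype_contrastive_loss(feats, labels, prototypes, tau=0.1):
    """Pull each pixel embedding toward its class's semantic prototype and push
    it away from the other prototypes.

    feats:      (n_pixels, d) pixel embeddings (source or target domain)
    labels:     (n_pixels,) class index per pixel (pseudo-labels on target)
    prototypes: (n_classes, d) running class prototypes
    """
    feats = F.normalize(feats, dim=1)
    protos = F.normalize(prototypes, dim=1)
    logits = feats @ protos.t() / tau   # cosine similarity to every prototype
    return F.cross_entropy(logits, labels)

loss = prototype_contrastive_loss(torch.randn(512, 64),
                                  torch.randint(0, 19, (512,)),
                                  torch.randn(19, 64))
```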
[ { "version": "v1", "created": "Wed, 24 Nov 2021 09:26:07 GMT" }, { "version": "v2", "created": "Sat, 30 Apr 2022 08:02:22 GMT" } ]
2022-05-03T00:00:00
[ [ "Xie", "Binhui", "" ], [ "Li", "Mingjia", "" ], [ "Li", "Shuang", "" ] ]
new_dataset
0.980052
2112.02732
Yueqing Sun
Yueqing Sun, Qi Shi, Le Qi, Yu Zhang
JointLK: Joint Reasoning with Language Models and Knowledge Graphs for Commonsense Question Answering
Accepted by NAACL 2022 main conference (Long paper)
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing KG-augmented models for commonsense question answering primarily focus on designing elaborate Graph Neural Networks (GNNs) to model knowledge graphs (KGs). However, they ignore (i) effective fusion and reasoning over the question context representations and the KG representations, and (ii) automatic selection of relevant nodes from noisy KGs during reasoning. In this paper, we propose a novel model, JointLK, which addresses these limitations through joint reasoning of the LM and GNN and a dynamic KG pruning mechanism. Specifically, JointLK performs joint reasoning between the LM and the GNN through a novel dense bidirectional attention module, in which each question token attends to KG nodes and each KG node attends to question tokens, and the two modal representations fuse and update each other through multi-step interactions. Then, the dynamic pruning module uses the attention weights generated by joint reasoning to recursively prune irrelevant KG nodes. We evaluate JointLK on the CommonsenseQA and OpenBookQA datasets, and demonstrate its improvements over existing LM and LM+KG models, as well as its capability to perform interpretable reasoning.
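A sketch of one dense bidirectional attention round as the abstract describes it, with the token-to-node attention weights doubling as a node-relevance signal for pruning; dimensions and scaling are illustrative:

```python
import torch

def dense_bidirectional_attention(text, nodes):
    """One fuse-and-update round: every question token attends over KG nodes
    and every KG node attends over question tokens.

    text:  (n_tokens, d) LM token representations
    nodes: (n_nodes, d) GNN node representations
    """
    scores = text @ nodes.t() / text.shape[1] ** 0.5            # (n_tokens, n_nodes)
    text_upd = text + torch.softmax(scores, dim=1) @ nodes      # tokens read from nodes
    node_upd = nodes + torch.softmax(scores.t(), dim=1) @ text  # nodes read from tokens
    node_relevance = torch.softmax(scores, dim=1).mean(0)       # feeds dynamic pruning
    return text_upd, node_upd, node_relevance
```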
[ { "version": "v1", "created": "Mon, 6 Dec 2021 01:46:46 GMT" }, { "version": "v2", "created": "Mon, 2 May 2022 08:28:35 GMT" } ]
2022-05-03T00:00:00
[ [ "Sun", "Yueqing", "" ], [ "Shi", "Qi", "" ], [ "Qi", "Le", "" ], [ "Zhang", "Yu", "" ] ]
new_dataset
0.985124
2112.04532
Jiang Liu
Jiang Liu, Alexander Levine, Chun Pong Lau, Rama Chellappa, Soheil Feizi
Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection
CVPR 2022 camera ready
null
null
null
cs.CV cs.CR eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object detection plays a key role in many security-critical systems. Adversarial patch attacks, which are easy to implement in the physical world, pose a serious threat to state-of-the-art object detectors. Developing reliable defenses for object detectors against patch attacks is critical but severely understudied. In this paper, we propose Segment and Complete defense (SAC), a general framework for defending object detectors against patch attacks through detection and removal of adversarial patches. We first train a patch segmenter that outputs patch masks which provide pixel-level localization of adversarial patches. We then propose a self-adversarial training algorithm to robustify the patch segmenter. In addition, we design a robust shape completion algorithm, which is guaranteed to remove the entire patch from the images if the outputs of the patch segmenter are within a certain Hamming distance of the ground-truth patch masks. Our experiments on COCO and xView datasets demonstrate that SAC achieves superior robustness even under strong adaptive attacks with no reduction in performance on clean images, and generalizes well to unseen patch shapes, attack budgets, and unseen attack methods. Furthermore, we present the APRICOT-Mask dataset, which augments the APRICOT dataset with pixel-level annotations of adversarial patches. We show SAC can significantly reduce the targeted attack success rate of physical patch attacks. Our code is available at https://github.com/joellliu/SegmentAndComplete.
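A sketch of the segment-then-complete pipeline; plain morphological dilation stands in for the paper's Hamming-distance-based shape completion, and the segmenter is a placeholder:

```python
import numpy as np
from scipy.ndimage import binary_dilation

def segment_and_complete(image, patch_segmenter, dilation_iters=8):
    """Predict a pixel mask of the adversarial patch, grow ("complete") it to
    absorb localization error, then blank the region before the image reaches
    the object detector."""
    mask = patch_segmenter(image) > 0.5                    # (H, W) patch mask
    completed = binary_dilation(mask, iterations=dilation_iters)
    cleaned = image.copy()
    cleaned[completed] = 0                                 # remove suspected patch
    return cleaned
```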
[ { "version": "v1", "created": "Wed, 8 Dec 2021 19:18:48 GMT" }, { "version": "v2", "created": "Mon, 2 May 2022 14:59:39 GMT" } ]
2022-05-03T00:00:00
[ [ "Liu", "Jiang", "" ], [ "Levine", "Alexander", "" ], [ "Lau", "Chun Pong", "" ], [ "Chellappa", "Rama", "" ], [ "Feizi", "Soheil", "" ] ]
new_dataset
0.999641
2112.04960
Siddhartha Srivastava
X. Zhang, G.H. Teichert, Z. Wang, M. Duschenes, S. Srivastava, E. Livingston, J. Holber, M. Faghih Shojaei, A. Sundararajan and K. Garikipati
mechanoChemML: A software library for machine learning in computational materials physics
null
null
null
null
cs.CE
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present mechanoChemML, a machine learning software library for computational materials physics. mechanoChemML is designed to function as an interface between platforms that are widely used for machine learning on the one hand, and platforms for the solution of partial differential equation-based models of physics on the other. Of special interest here, and the focus of mechanoChemML, are applications to computational materials physics. These typically feature the coupled solution of material transport, reaction, phase transformation, mechanics, heat transport, and electrochemistry. Central to the organization of mechanoChemML are machine learning workflows that arise in the context of data-driven computational materials physics. The mechanoChemML code structure is described, the machine learning workflows are laid out, and their application to the solution of several problems in materials physics is outlined.
[ { "version": "v1", "created": "Thu, 9 Dec 2021 14:42:04 GMT" }, { "version": "v2", "created": "Sat, 30 Apr 2022 19:45:07 GMT" } ]
2022-05-03T00:00:00
[ [ "Zhang", "X.", "" ], [ "Teichert", "G. H.", "" ], [ "Wang", "Z.", "" ], [ "Duschenes", "M.", "" ], [ "Srivastava", "S.", "" ], [ "Livingston", "E.", "" ], [ "Holber", "J.", "" ], [ "Shojaei", "M. Faghih", "" ], [ "Sundararajan", "A.", "" ], [ "Garikipati", "K.", "" ] ]
new_dataset
0.999432
2112.05637
Yang Hong
Yang Hong, Bo Peng, Haiyao Xiao, Ligang Liu, Juyong Zhang
HeadNeRF: A Real-time NeRF-based Parametric Head Model
Accepted by CVPR2022. Project page: https://crishy1995.github.io/HeadNeRF-Project/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose HeadNeRF, a novel NeRF-based parametric head model that integrates the neural radiance field into the parametric representation of the human head. It can render high-fidelity head images in real time on modern GPUs, and supports direct control of the generated images' rendering pose and various semantic attributes. Unlike existing related parametric models, we use neural radiance fields as a novel 3D proxy instead of the traditional 3D textured mesh, which enables HeadNeRF to generate high-fidelity images. However, the computationally expensive rendering process of the original NeRF hinders the construction of a parametric NeRF model. To address this issue, we adopt the strategy of integrating 2D neural rendering into the rendering process of NeRF and design novel loss terms. As a result, the rendering speed of HeadNeRF can be significantly accelerated, and the rendering time of one frame is reduced from 5 s to 25 ms. The well-designed loss terms also improve the rendering accuracy, and fine-level details of the human head, such as gaps between teeth, wrinkles, and beards, can be represented and synthesized by HeadNeRF. Extensive experimental results and several applications demonstrate its effectiveness. The trained parametric model is available at https://github.com/CrisHY1995/headnerf.
[ { "version": "v1", "created": "Fri, 10 Dec 2021 16:10:13 GMT" }, { "version": "v2", "created": "Mon, 13 Dec 2021 03:05:45 GMT" }, { "version": "v3", "created": "Sat, 30 Apr 2022 13:57:53 GMT" } ]
2022-05-03T00:00:00
[ [ "Hong", "Yang", "" ], [ "Peng", "Bo", "" ], [ "Xiao", "Haiyao", "" ], [ "Liu", "Ligang", "" ], [ "Zhang", "Juyong", "" ] ]
new_dataset
0.991708
2112.07522
Mengjie Zhao
Mengjie Zhao, Fei Mi, Yasheng Wang, Minglei Li, Xin Jiang, Qun Liu, Hinrich Schütze
LMTurk: Few-Shot Learners as Crowdsourcing Workers in a Language-Model-as-a-Service Framework
Findings of ACL: NAACL 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Vast efforts have been devoted to creating high-performance few-shot learners, i.e., large-scale pretrained language models (PLMs) that perform well with little downstream task training data. Training PLMs has incurred significant cost, but utilizing the few-shot learners is still challenging due to their enormous size. This work focuses on a crucial question: How to make effective use of these few-shot learners? We propose LMTurk, a novel approach that treats few-shot learners as crowdsourcing workers. The rationale is that crowdsourcing workers are in fact few-shot learners: They are shown a few illustrative examples to learn about a task and then start annotating. LMTurk employs few-shot learners built upon PLMs as workers. We show that the resulting annotations can be utilized to train models that solve the task well and are small enough to be deployable in practical scenarios. Active learning is integrated into LMTurk to reduce the amount of queries made to PLMs, minimizing the computational cost of running PLM inference passes. Altogether, LMTurk is an important step towards making effective use of current PLMs.
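A sketch of the worker metaphor: each "worker" is a few-shot prompt around a PLM call, and workers' majority vote labels data for a small downstream model. `plm_generate` is a hypothetical text-in/text-out call, not an API from the paper:

```python
from collections import Counter

def lm_worker(task_description, demonstrations, item, plm_generate):
    """Treat a few-shot PLM as one crowdsourcing worker: a task description
    plus a few labeled examples form the prompt; the completion is the label."""
    prompt = task_description + "\n"
    for text, label in demonstrations:
        prompt += f"Text: {text}\nLabel: {label}\n"
    prompt += f"Text: {item}\nLabel:"
    return plm_generate(prompt).strip()

def annotate(unlabeled, workers):
    """Aggregate several PLM workers by majority vote; the resulting labels
    can then train a small, deployable task model."""
    return [(x, Counter(w(x) for w in workers).most_common(1)[0][0])
            for x in unlabeled]
```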
[ { "version": "v1", "created": "Tue, 14 Dec 2021 16:34:22 GMT" }, { "version": "v2", "created": "Mon, 2 May 2022 09:20:46 GMT" } ]
2022-05-03T00:00:00
[ [ "Zhao", "Mengjie", "" ], [ "Mi", "Fei", "" ], [ "Wang", "Yasheng", "" ], [ "Li", "Minglei", "" ], [ "Jiang", "Xin", "" ], [ "Liu", "Qun", "" ], [ "Schütze", "Hinrich", "" ] ]
new_dataset
0.997075
2201.05281
Yaxiong Xie
Yaxiong Xie, Kyle Jamieson
NG-Scope: Fine-Grained Telemetry for NextG Cellular Networks
null
null
10.1145/3508032
null
cs.NI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Accurate and highly granular channel capacity telemetry of the cellular last hop is crucial for the effective operation of transport layer protocols and cutting-edge applications, such as video on demand and videotelephony. This paper presents the design, implementation, and experimental performance evaluation of NG-Scope, the first such telemetry tool able to fuse physical-layer channel occupancy readings from the cellular control channel with higher-layer packet arrival statistics and make accurate capacity estimates. NG-Scope handles the latest cellular innovations, such as when multiple base stations aggregate their signals together to serve mobile users. End-to-end experiments in a commercial cellular network demonstrate that wireless capacity varies significantly with channel quality, mobility, competing traffic within each cell, and the number of aggregated cells. Our experiments demonstrate significantly improved cell load estimation accuracy, missing the detection of less than 1% of data capacity overall, a reduction of 82% compared to OWL, the state of the art in cellular monitoring. Further experiments show that MobileInsight-based CLAW has a root-mean-squared capacity error of 30.5 Mbit/s, which is 3.3x larger than that of NG-Scope (9.2 Mbit/s).
[ { "version": "v1", "created": "Fri, 14 Jan 2022 02:47:59 GMT" }, { "version": "v2", "created": "Tue, 18 Jan 2022 21:50:45 GMT" } ]
2022-05-03T00:00:00
[ [ "Xie", "Yaxiong", "" ], [ "Jamieson", "Kyle", "" ] ]
new_dataset
0.999558
2201.06223
Jooyoung Choi
Changwook Jun, Jooyoung Choi, Myoseop Sim, Hyun Kim, Hansol Jang, Kyungkoo Min
Korean-Specific Dataset for Table Question Answering
7 pages including references and 4 figures
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
Existing question answering systems mainly focus on dealing with text data. However, much of the data produced daily is stored in the form of tables, which can be found in documents and relational databases, or on the web. To solve the task of question answering over tables, many datasets for table question answering exist in English, but few in Korean. In this paper, we demonstrate how we construct Korean-specific datasets for table question answering: the Korean tabular dataset is a collection of 1.4M tables with corresponding descriptions for unsupervised pre-training of language models, and the Korean table question answering corpus consists of 70k pairs of questions and answers created by crowd-sourced workers. We then build a pre-trained language model based on the Transformer and fine-tune the model for table question answering with these datasets, and report the evaluation results of our model. We make our datasets publicly available via our GitHub repository and hope that they will help further studies on question answering over tables and on the transformation of table formats.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 05:47:44 GMT" }, { "version": "v2", "created": "Sun, 1 May 2022 12:35:19 GMT" } ]
2022-05-03T00:00:00
[ [ "Jun", "Changwook", "" ], [ "Choi", "Jooyoung", "" ], [ "Sim", "Myoseop", "" ], [ "Kim", "Hyun", "" ], [ "Jang", "Hansol", "" ], [ "Min", "Kyungkoo", "" ] ]
new_dataset
0.999664
2201.10248
Prabhat Kumar
Prabhat Kumar
HoneyTop90: A 90-line MATLAB code for topology optimization using honeycomb tessellation
null
Optimization and Engineering, 2022
10.1007/s11081-022-09715-6
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper provides a simple, compact and efficient 90-line pedagogical MATLAB code for topology optimization using hexagonal elements (honeycomb tessellation). Hexagonal elements provide nonsingular connectivity between two juxtaposed elements and, thus, subdue checkerboard patterns and point connections inherently from the optimized designs. A novel approach to generate honeycomb tessellation is proposed. The element connectivity matrix and corresponding nodal coordinates array are determined in 5 (7) and 4 (6) lines, respectively. Two additional lines for the meshgrid generation are required for an even number of elements in the vertical direction. The code takes a fraction of a second to generate meshgrid information for millions of hexagonal elements. Wachspress shape functions are employed for the finite element analysis, and compliance minimization is performed using the optimality criteria method. The provided MATLAB code and its extensions are explained in detail. Options to run the optimization with and without filtering techniques are provided. Steps to include different boundary conditions, multiple load cases, active and passive regions, and a Heaviside projection filter are also discussed. The code is provided in Appendix A, and it can also be downloaded along with supplementary materials from \url{https://github.com/PrabhatIn/HoneyTop90}.
[ { "version": "v1", "created": "Tue, 25 Jan 2022 11:41:43 GMT" }, { "version": "v2", "created": "Tue, 15 Mar 2022 05:42:52 GMT" }, { "version": "v3", "created": "Wed, 16 Mar 2022 01:06:30 GMT" }, { "version": "v4", "created": "Sun, 24 Apr 2022 17:09:19 GMT" }, { "version": "v5", "created": "Sat, 30 Apr 2022 23:25:09 GMT" } ]
2022-05-03T00:00:00
[ [ "Kumar", "Prabhat", "" ] ]
new_dataset
0.999305
2202.07265
Massimo Battaglioni Dr.
Massimo Battaglioni and Paolo Santini and Giulia Rafaiani and Franco Chiaraluce and Marco Baldi
Analysis of a blockchain protocol based on LDPC codes
null
null
null
null
cs.CR cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a blockchain Data Availability Attack (DAA), a malicious node publishes a block header but withholds part of the block, which contains invalid transactions. Honest full nodes, which can download and store the full blockchain, are aware that some data are not available, but they have no formal way to prove it to light nodes, i.e., nodes that have limited resources and are not able to access the whole blockchain data. A common solution to counter these attacks exploits linear error correcting codes to encode the block content. A recent protocol, called SPAR, employs coded Merkle trees and low-density parity-check codes to counter DAAs. In this paper, we show that the protocol is less secure than claimed, owing to a redefinition of the adversarial success probability. As a consequence, we show that, for some realistic choices of the parameters, the total amount of data downloaded by light nodes is larger than that obtainable with competitor solutions.
[ { "version": "v1", "created": "Tue, 15 Feb 2022 09:20:56 GMT" }, { "version": "v2", "created": "Mon, 11 Apr 2022 15:33:37 GMT" }, { "version": "v3", "created": "Sat, 30 Apr 2022 18:16:41 GMT" } ]
2022-05-03T00:00:00
[ [ "Battaglioni", "Massimo", "" ], [ "Santini", "Paolo", "" ], [ "Rafaiani", "Giulia", "" ], [ "Chiaraluce", "Franco", "" ], [ "Baldi", "Marco", "" ] ]
new_dataset
0.996411
2203.00101
Hossein Keshavarz
Hossein Keshavarz and Meiyappan Nagappan
ApacheJIT: A Large Dataset for Just-In-Time Defect Prediction
null
null
null
null
cs.SE cs.AI cs.LG cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present ApacheJIT, a large dataset for Just-In-Time defect prediction. ApacheJIT consists of clean and bug-inducing software changes in popular Apache projects. ApacheJIT has a total of 106,674 commits (28,239 bug-inducing and 78,435 clean commits). Having a large number of commits makes ApacheJIT a suitable dataset for machine learning models, especially deep learning models that require large training sets to effectively generalize the patterns present in the historical data to future data.
[ { "version": "v1", "created": "Mon, 28 Feb 2022 21:26:14 GMT" }, { "version": "v2", "created": "Sat, 30 Apr 2022 01:42:25 GMT" } ]
2022-05-03T00:00:00
[ [ "Keshavarz", "Hossein", "" ], [ "Nagappan", "Meiyappan", "" ] ]
new_dataset
0.999846
2203.00757
Ben Greenspan PhD
Ben Greenspan, Eric M. Gallo, Andreea Danielescu
FlexKeys: Rapidly Customizable 3D Printed Tactile Input Devices with No Assembly Required
Abstract accepted, paper in review for 13th International Conference on Applied Human Factors and Ergonomics (AHFE 2022). July 24-28, 2022
null
null
null
cs.HC
http://creativecommons.org/licenses/by-sa/4.0/
Physical input devices serve as a tactile interface between users and computing systems. These devices are often complex assemblies that consist of both electrical and mechanical components making customization difficult and out of reach for non-engineers. While these components can now be 3D printed on demand, they must still be independently designed and assembled. We present FlexKeys, an approach in which devices that include both electrical and deformable components can be created in a single print on a multi-material 3D printer, requiring no assembly. Designers can customize devices including the input type, travel distance and layout of keys, textures of surfaces, and route all electrical signals directly to a microcontroller socket. In many instances, these devices require no support material, producing a functional device the moment a print finishes. We demonstrate this approach by creating a customized keyboard and report on validation measurements of individual input keys as well as highlighting additional designs. This work provides the first step towards lowering the barrier to entry for non-engineers to design custom tactile inputs, enabling occupational and physical therapists, clinicians, and educators to design and create devices directly based on their assessments of individual user needs.
[ { "version": "v1", "created": "Tue, 1 Mar 2022 21:51:53 GMT" }, { "version": "v2", "created": "Fri, 29 Apr 2022 18:57:47 GMT" } ]
2022-05-03T00:00:00
[ [ "Greenspan", "Ben", "" ], [ "Gallo", "Eric M.", "" ], [ "Danielescu", "Andreea", "" ] ]
new_dataset
0.998635
2203.11480
Shuai Zhao
Sha Yuan, Shuai Zhao, Jiahong Leng, Zhao Xue, Hanyu Zhao, Peiyu Liu, Zheng Gong, Wayne Xin Zhao, Junyi Li and Jie Tang
WuDaoMM: A large-scale Multi-Modal Dataset for Pre-training models
Some data problems cannot be obtained
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by/4.0/
Compared with domain-specific models, vision-language pre-training models (VLPMs) have shown superior performance on downstream tasks with a fast fine-tuning process. For example, ERNIE-ViL, Oscar and UNIMO trained VLPMs with a uniform Transformer stack architecture and large amounts of image-text paired data, achieving remarkable results on downstream tasks such as image-text retrieval (IR and TR), visual question answering (VQA) and image captioning (IC). During the training phase, VLPMs are always fed with a combination of multiple public datasets to meet the demand for large-scale training data. However, due to the unevenness of data distribution, including size, task type and quality, using a mixture of multiple datasets for model training can be problematic. In this work, we introduce a large-scale multi-modal corpus named WuDaoMM, containing more than 650M image-text pairs in total. Specifically, about 600 million pairs are collected from web pages in which the image and caption are only weakly correlated, and the other 50 million strongly correlated image-text pairs are collected from high-quality graphic websites. We also release a base version of WuDaoMM with 5 million strongly correlated image-text pairs, which is sufficient to support common cross-modal model pre-training. Besides, we trained both an understanding and a generation vision-language (VL) model to test the dataset's effectiveness. The results show that WuDaoMM can serve as an efficient dataset for VLPMs, especially for models on the text-to-image generation task. The data is released at https://data.wudaoai.cn
[ { "version": "v1", "created": "Tue, 22 Mar 2022 06:12:20 GMT" }, { "version": "v2", "created": "Tue, 29 Mar 2022 12:44:43 GMT" }, { "version": "v3", "created": "Wed, 30 Mar 2022 00:35:53 GMT" }, { "version": "v4", "created": "Tue, 19 Apr 2022 00:49:22 GMT" }, { "version": "v5", "created": "Sun, 1 May 2022 02:34:42 GMT" } ]
2022-05-03T00:00:00
[ [ "Yuan", "Sha", "" ], [ "Zhao", "Shuai", "" ], [ "Leng", "Jiahong", "" ], [ "Xue", "Zhao", "" ], [ "Zhao", "Hanyu", "" ], [ "Liu", "Peiyu", "" ], [ "Gong", "Zheng", "" ], [ "Zhao", "Wayne Xin", "" ], [ "Li", "Junyi", "" ], [ "Tang", "Jie", "" ] ]
new_dataset
0.99973
2203.14712
Fadime Sener
Fadime Sener and Dibyadip Chatterjee and Daniel Shelepov and Kun He and Dipika Singhania and Robert Wang and Angela Yao
Assembly101: A Large-Scale Multi-View Video Dataset for Understanding Procedural Activities
CVPR 2022, https://assembly-101.github.io/
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Assembly101 is a new procedural activity dataset featuring 4321 videos of people assembling and disassembling 101 "take-apart" toy vehicles. Participants work without fixed instructions, and the sequences feature rich and natural variations in action ordering, mistakes, and corrections. Assembly101 is the first multi-view action dataset, with simultaneous static (8) and egocentric (4) recordings. Sequences are annotated with more than 100K coarse and 1M fine-grained action segments, and 18M 3D hand poses. We benchmark on three action understanding tasks: recognition, anticipation and temporal segmentation. Additionally, we propose a novel task of detecting mistakes. The unique recording format and rich set of annotations allow us to investigate generalization to new toys, cross-view transfer, long-tailed distributions, and pose vs. appearance. We envision that Assembly101 will serve as a new challenge to investigate various activity understanding problems.
[ { "version": "v1", "created": "Mon, 28 Mar 2022 12:59:50 GMT" }, { "version": "v2", "created": "Sun, 1 May 2022 14:49:02 GMT" } ]
2022-05-03T00:00:00
[ [ "Sener", "Fadime", "" ], [ "Chatterjee", "Dibyadip", "" ], [ "Shelepov", "Daniel", "" ], [ "He", "Kun", "" ], [ "Singhania", "Dipika", "" ], [ "Wang", "Robert", "" ], [ "Yao", "Angela", "" ] ]
new_dataset
0.999911
2204.09033
Oded Padon
Mingkuan Xu and Zikun Li and Oded Padon and Sina Lin and Jessica Pointing and Auguste Hirth and Henry Ma and Jens Palsberg and Alex Aiken and Umut A. Acar and Zhihao Jia
Quartz: Superoptimization of Quantum Circuits (Extended Version)
28 pages. Extended version of the paper presented in PLDI 2022. Typos corrected and artifact reference updated
null
null
null
cs.PL quant-ph
http://creativecommons.org/licenses/by/4.0/
Existing quantum compilers optimize quantum circuits by applying circuit transformations designed by experts. This approach requires significant manual effort to design and implement circuit transformations for different quantum devices, which use different gate sets, and can miss optimizations that are hard to find manually. We propose Quartz, a quantum circuit superoptimizer that automatically generates and verifies circuit transformations for arbitrary quantum gate sets. For a given gate set, Quartz generates candidate circuit transformations by systematically exploring small circuits and verifies the discovered transformations using an automated theorem prover. To optimize a quantum circuit, Quartz uses a cost-based backtracking search that applies the verified transformations to the circuit. Our evaluation on three popular gate sets shows that Quartz can effectively generate and verify transformations for different gate sets. The generated transformations cover manually designed transformations used by existing optimizers and also include new transformations. Quartz is therefore able to optimize a broad range of circuits for diverse gate sets, outperforming or matching the performance of hand-tuned circuit optimizers.
[ { "version": "v1", "created": "Tue, 19 Apr 2022 17:52:59 GMT" }, { "version": "v2", "created": "Mon, 2 May 2022 07:13:21 GMT" } ]
2022-05-03T00:00:00
[ [ "Xu", "Mingkuan", "" ], [ "Li", "Zikun", "" ], [ "Padon", "Oded", "" ], [ "Lin", "Sina", "" ], [ "Pointing", "Jessica", "" ], [ "Hirth", "Auguste", "" ], [ "Ma", "Henry", "" ], [ "Palsberg", "Jens", "" ], [ "Aiken", "Alex", "" ], [ "Acar", "Umut A.", "" ], [ "Jia", "Zhihao", "" ] ]
new_dataset
0.999385
2204.10685
Harikumar Kandath
Tanuja Joshi, Hariprasad Kodamana, Harikumar Kandath, and Niket Kaisare
TASAC: a twin-actor reinforcement learning framework with stochastic policy for batch process control
11 pages
null
null
null
cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Due to their complex nonlinear dynamics and batch-to-batch variability, batch processes pose a challenge for process control. In the absence of accurate models, and with the resulting plant-model mismatch, these problems become harder to address with advanced model-based control strategies. Reinforcement Learning (RL), wherein an agent learns the policy by directly interacting with the environment, offers a potential alternative in this context. RL frameworks with an actor-critic architecture have recently become popular for controlling systems where state and action spaces are continuous. It has been shown that an ensemble of actor and critic networks further helps the agent learn better policies, owing to the enhanced exploration afforded by simultaneous policy learning. To this end, the current study proposes a stochastic actor-critic RL algorithm, termed Twin Actor Soft Actor-Critic (TASAC), that incorporates an ensemble of actors for learning, in a maximum entropy framework, for batch process control.
[ { "version": "v1", "created": "Fri, 22 Apr 2022 13:00:51 GMT" }, { "version": "v2", "created": "Mon, 2 May 2022 09:31:58 GMT" } ]
2022-05-03T00:00:00
[ [ "Joshi", "Tanuja", "" ], [ "Kodamana", "Hariprasad", "" ], [ "Kandath", "Harikumar", "" ], [ "Kaisare", "Niket", "" ] ]
new_dataset
0.998474
2204.12261
Djordje Jevdjic
Dehui Lin, Yasamin Tabatabaee, Yash Pote, and Djordje Jevdjic
Managing Reliability Skew in DNA Storage
In Proceedings of the International Symposium on Computer Architecture (ISCA 2022)
null
10.1145/3470496.3527441
null
cs.ET cs.AR cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
DNA is emerging as an increasingly attractive medium for data storage due to a number of important and unique advantages it offers, most notably the unprecedented durability and density. While the technology is evolving rapidly, the prohibitive cost of reads and writes, the high frequency and the peculiar nature of errors occurring in DNA storage pose a significant challenge to its adoption. In this work, we make a novel observation that the probability of successful recovery of a given bit from any type of a DNA-based storage system highly depends on its physical location within the DNA molecule. In other words, when used as a storage medium, some parts of DNA molecules appear significantly more reliable than others. We show that large differences in reliability between different parts of DNA molecules lead to highly inefficient use of error-correction resources, and that commonly used techniques such as unequal error-correction cannot be used to bridge the reliability gap between different locations in the context of DNA storage. We then propose two approaches to address the problem. The first approach is general and applies to any type of data; it stripes the data and ECC codewords across DNA molecules in a particular fashion such that the effects of errors are spread out evenly across different codewords and molecules, effectively de-biasing the underlying storage medium (a toy illustration of this striping follows this record). The second approach is application-specific, and seeks to leverage the underlying reliability bias by using application-aware mapping of data onto DNA molecules such that data that requires higher reliability is stored in more reliable locations, whereas data that needs lower reliability is stored in less reliable parts of DNA molecules. We show that the proposed data mapping can be used to achieve graceful degradation in the presence of high error rates, or to implement the concept of approximate storage in DNA.
[ { "version": "v1", "created": "Tue, 26 Apr 2022 12:34:46 GMT" }, { "version": "v2", "created": "Fri, 29 Apr 2022 22:09:56 GMT" } ]
2022-05-03T00:00:00
[ [ "Lin", "Dehui", "" ], [ "Tabatabaee", "Yasamin", "" ], [ "Pote", "Yash", "" ], [ "Jevdjic", "Djordje", "" ] ]
new_dataset
0.967348
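A toy illustration of the striping idea from the abstract above, with invented sizes and a plain rotation in place of the paper's particular interleaving scheme:

```python
import numpy as np

n_molecules, length = 4, 8            # toy sizes; real parameters differ
codewords = np.arange(n_molecules * length).reshape(n_molecules, length)
# Column k of `codewords` is ECC codeword k; stored naively, all of its
# symbols would sit at the same (equally unreliable) intra-molecule position.

# Diagonal striping: rotate molecule m by m symbols, so codeword k is spread
# over positions (k + m) mod length and samples the whole reliability profile.
striped = np.stack([np.roll(codewords[m], m) for m in range(n_molecules)])

# De-striping on read is the inverse rotation.
recovered = np.stack([np.roll(striped[m], -m) for m in range(n_molecules)])
assert (recovered == codewords).all()
```

After striping, each codeword's symbols occupy a different intra-molecule position in every molecule, so no codeword is stuck with only the least reliable locations.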
2204.12433
Minjia Shi
Minjia Shi, Haodong Lu, Shuang Zhou, Jiarui Xu, Yuhang Zhu
Equivalence and Duality of Polycyclic Codes Associated with Trinomials over Finite Fields
null
null
null
null
cs.IT math.IT
http://creativecommons.org/publicdomain/zero/1.0/
In this paper, several conjectures proposed in [2] are studied, involving the equivalence and duality of polycyclic codes associated with trinomials. Based on these results, we give methods to construct isodual and self-dual polycyclic codes, and we study self-orthogonal and dual-containing polycyclic codes over F2.
[ { "version": "v1", "created": "Wed, 6 Apr 2022 10:02:49 GMT" }, { "version": "v2", "created": "Wed, 27 Apr 2022 01:14:00 GMT" }, { "version": "v3", "created": "Sun, 1 May 2022 02:29:42 GMT" } ]
2022-05-03T00:00:00
[ [ "Shi", "Minjia", "" ], [ "Lu", "Haodong", "" ], [ "Zhou", "Shuang", "" ], [ "Xu", "Jiarui", "" ], [ "Zhu", "Yuhang", "" ] ]
new_dataset
0.995854
2205.00211
Hong-Shuo Chen
Hong-Shuo Chen, Shuowen Hu, Suya You and C.-C. Jay Kuo
DefakeHop++: An Enhanced Lightweight Deepfake Detector
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
On the basis of DefakeHop, an enhanced lightweight Deepfake detector called DefakeHop++ is proposed in this work. The improvements lie in two areas. First, DefakeHop examines three facial regions (i.e., two eyes and mouth) while DefakeHop++ includes eight more landmarks for broader coverage. Second, for discriminant feature selection, DefakeHop uses an unsupervised approach while DefakeHop++ adopts a more effective approach with supervision, called the Discriminant Feature Test (DFT). In DefakeHop++, rich spatial and spectral features are first derived from facial regions and landmarks automatically. Then, DFT is used to select a subset of discriminant features for classifier training. As compared with MobileNet v3 (a lightweight CNN model of 1.5M parameters targeting mobile applications), DefakeHop++ has a model of 238K parameters, which is 16% of the size of MobileNet v3. Furthermore, DefakeHop++ outperforms MobileNet v3 in Deepfake image detection performance in a weakly-supervised setting.
[ { "version": "v1", "created": "Sat, 30 Apr 2022 08:50:25 GMT" } ]
2022-05-03T00:00:00
[ [ "Chen", "Hong-Shuo", "" ], [ "Hu", "Shuowen", "" ], [ "You", "Suya", "" ], [ "Kuo", "C. -C. Jay", "" ] ]
new_dataset
0.98182
2205.00220
Yi Chen
Yi Chen, Chong Han, Ziming Yu, Guangjian Wang
Channel Measurement, Characterization and Modeling for Terahertz Indoor Communications Above 200 GHz
30 pages
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Terahertz (THz) communications are envisioned as a promising technology for sixth-generation (6G) and beyond systems, owing to their unprecedented multi-gigahertz (GHz) bandwidth. In this paper, channel measurement campaigns in indoor scenarios at 201-209 GHz are reported. Two measurement campaigns, conducted in a meeting room and an office room, cover four communication scenarios with 90 transmitter-receiver pairs in total: the meeting room, a cubicle area, a hallway, and a non-line-of-sight (NLoS) case. The propagation of multi-path components (MPCs) in the four scenarios is characterized by power-delay-angular profiles. Based on them, the temporal and spatial consistency for varying receiver locations in the complex hallway and NLoS scenarios is verified. The large-scale best-direction and omni-directional path losses in indoor scenarios are separately analyzed and modeled by the close-in (CI) model. Furthermore, small-scale channel parameters, e.g., the number of clusters, delay spread, angular spread, and cluster time-of-arrival, are analyzed and modeled by appropriate distributions. As a general framework, a ray-tracing-statistical hybrid model is proposed for wireless propagation at 201-209 GHz, although, admittedly, the measurement results and analysis reveal that the channel characteristics in various indoor scenarios exhibit noticeable differences that require tailored parameter settings.
[ { "version": "v1", "created": "Sat, 30 Apr 2022 09:51:33 GMT" } ]
2022-05-03T00:00:00
[ [ "Chen", "Yi", "" ], [ "Han", "Chong", "" ], [ "Yu", "Ziming", "" ], [ "Wang", "Guangjian", "" ] ]
new_dataset
0.994382
2205.00254
Yifan Gao
Yifan Gao
PGD: A Large-scale Professional Go Dataset for Data-driven Analytics
IEEE Conference on Games 2022. Dataset is available at https://github.com/Gifanan/Professional-Go-Dataset
null
null
null
cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Lee Sedol is on a winning streak--will this legend rise again after the competition with AlphaGo? Ke Jie is invincible in the world championship--can he still win the title this time? Go is one of the most popular board games in East Asia, with a stable professional sports system that has lasted for decades in China, Japan, and Korea. There are mature data-driven analysis technologies for many sports, such as soccer, basketball, and esports. However, developing such technology for Go remains nontrivial and challenging due to the lack of datasets, meta-information, and in-game statistics. This paper creates the Professional Go Dataset (PGD), containing 98,043 games played by 2,148 professional players from 1950 to 2021. After manual cleaning and labeling, we provide detailed meta-information for each player, game, and tournament. Moreover, the dataset includes analysis results for each move in the match evaluated by advanced AlphaZero-based AI. To establish a benchmark for PGD, we further analyze the data and extract meaningful in-game features based on prior knowledge related to Go that can indicate the game status. With the help of complete meta-information and constructed in-game features, our result prediction system achieves an accuracy of 75.30%, much higher than several state-of-the-art approaches (64%-65%). As far as we know, PGD is the first dataset for data-driven analytics in Go and even in board games. Beyond this promising result, we provide more examples of tasks that benefit from our dataset. The ultimate goal of this paper is to bridge this ancient game and the modern data science community. It will advance research on Go-related analytics to enhance the fan experience, help players improve their ability, and facilitate other promising aspects. The dataset will be made publicly available.
[ { "version": "v1", "created": "Sat, 30 Apr 2022 12:53:04 GMT" } ]
2022-05-03T00:00:00
[ [ "Gao", "Yifan", "" ] ]
new_dataset
0.999753
2205.00257
Hui Kong
Yubin Guo, Haobo Jiang, Xinlei Qi, Jin Xie, Cheng-Zhong Xu and Hui Kong
Unsupervised Visible-light Images Guided Cross-Spectrum Depth Estimation from Dual-Modality Cameras
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-spectrum depth estimation aims to provide a depth map in all illumination conditions with a pair of dual-spectrum images. It is valuable for autonomous vehicle applications when the vehicle is equipped with two cameras of different modalities. However, images captured by different-modality cameras can be photometrically quite different. Therefore, cross-spectrum depth estimation is a very challenging problem. Moreover, the shortage of large-scale open-source datasets also hinders further research in this field. In this paper, we propose an unsupervised visible-light image guided cross-spectrum (i.e., thermal and visible-light, TIR-VIS in short) depth estimation framework given a pair of RGB and thermal images captured from a visible-light camera and a thermal one. We first adopt a base depth estimation network using RGB-image pairs. Then we propose a multi-scale feature transfer network to transfer features from the TIR-VIS domain to the VIS domain at the feature level to fit the trained depth estimation network. Finally, we propose a cross-spectrum depth cycle consistency to improve the depth result of dual-spectrum image pairs. Meanwhile, we release a large dual-spectrum depth estimation dataset with visible-light and far-infrared stereo images captured in different scenes to the research community. The experiment results show that our method achieves better performance than the compared existing methods. Our dataset is available at https://github.com/whitecrow1027/VIS-TIR-Datasets.
[ { "version": "v1", "created": "Sat, 30 Apr 2022 12:58:35 GMT" } ]
2022-05-03T00:00:00
[ [ "Guo", "Yubin", "" ], [ "Jiang", "Haobo", "" ], [ "Qi", "Xinlei", "" ], [ "Xie", "Jin", "" ], [ "Xu", "Cheng-Zhong", "" ], [ "Kong", "Hui", "" ] ]
new_dataset
0.995456
2205.00323
Nadya Peek
Blair Subbaraman, Nadya Peek
p5.fab: Direct Control of Digital Fabrication Machines from a Creative Coding Environment
Submitted to DIS 2022, 12 pages plus references
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Machine settings and tuning are critical for digital fabrication outcomes. However, exploring these parameters is non-trivial. We seek to enable exploration of the full design space of digital fabrication. To identify where we might intervene, we studied how practitioners approach 3D printing. We found that beyond using CAD/CAM, they create bespoke routines and workflows to explore interdependent material and machine settings. We seek to provide a system that supports this workflow development. We identified design goals around material exploration, fine-tuned control, and iteration. Based on these, we present p5.fab, a system for controlling digital fabrication machines from the creative coding environment p5.js. We demonstrate p5.fab with examples of 3D prints that cannot be made with traditional 3D printing software. We evaluate p5.fab in workshops and find that it encourages novel printing workflows and artifacts. Finally, we discuss implications for future digital fabrication systems.
[ { "version": "v1", "created": "Sat, 30 Apr 2022 18:52:55 GMT" } ]
2022-05-03T00:00:00
[ [ "Subbaraman", "Blair", "" ], [ "Peek", "Nadya", "" ] ]
new_dataset
0.999373
2205.00331
Wei Jiang
Wei Jiang and Hans Dieter Schotten
Dual-Beam Intelligent Reflecting Surface for Millimeter and THz Communications
2022 IEEE 95th Vehicular Technology Conference (VTC2022-Spring)
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Intelligent reflecting surface (IRS) is a cost-efficient technique to improve power efficiency and spectral efficiency. However, IRS-aided multi-antenna transmission needs to jointly optimize the passive and active beamforming, imposing a high computational burden and high latency due to its iterative optimization process. Making use of hybrid analog-digital beamforming in high-frequency transmission systems, a novel technique, coined dual-beam IRS, is proposed in this paper. The key idea is to form a pair of beams towards the IRS and user, respectively. Then, the optimization of passive and active beamforming can be decoupled, resulting in a simplified system design. Simulation results corroborate that it achieves a good balance between the cell-edge and cell-center performance. Compared with the performance bound, the gap is moderate, but it remarkably outperforms other sub-optimal schemes.
[ { "version": "v1", "created": "Sat, 30 Apr 2022 19:39:23 GMT" } ]
2022-05-03T00:00:00
[ [ "Jiang", "Wei", "" ], [ "Schotten", "Hans Dieter", "" ] ]
new_dataset
0.984209
2205.00347
Kerem Turgutlu
Kerem Turgutlu, Sanat Sharma and Jayant Kumar
LayoutBERT: Masked Language Layout Model for Object Insertion
8 pages main paper, 6 pages supplemental material. Accepted to AI4CC Workshop @CVPR 2022
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Image compositing is one of the most fundamental steps in creative workflows. It involves taking objects/parts of several images to create a new image, called a composite. Currently, this process is done manually by creating accurate masks of objects to be inserted and carefully blending them with the target scene or images, usually with the help of tools such as Photoshop or GIMP. While there have been several works on automatic selection of objects for creating masks, the problem of object placement within an image with the correct position, scale, and harmony remains a difficult problem with limited exploration. Automatic object insertion in images or designs is a difficult problem as it requires understanding of the scene geometry and the color harmony between objects. We propose LayoutBERT for the object insertion task. It uses a novel self-supervised masked language model objective and bidirectional multi-head self-attention. It outperforms previous layout-based likelihood models and shows favorable properties in terms of model capacity. We demonstrate the effectiveness of our approach for object insertion in the image compositing setting and other settings like documents and design templates. We further demonstrate the usefulness of the learned representations for layout-based retrieval tasks. We provide both qualitative and quantitative evaluations on datasets from diverse domains like COCO, PublayNet, and two new datasets which we call Image Layouts and Template Layouts. Image Layouts which consists of 5.8 million images with layout annotations is the largest image layout dataset to our knowledge. We also share ablation study results on the effect of dataset size, model size and class sample size for this task.
[ { "version": "v1", "created": "Sat, 30 Apr 2022 21:35:38 GMT" } ]
2022-05-03T00:00:00
[ [ "Turgutlu", "Kerem", "" ], [ "Sharma", "Sanat", "" ], [ "Kumar", "Jayant", "" ] ]
new_dataset
0.999359
2205.00377
Haoming Guo
Haoming Guo, Tianyi Huang, Huixuan Huang, Mingyue Fan, Gerald Friedland
Detecting COVID-19 Conspiracy Theories with Transformers and TF-IDF
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The sharing of fake news and conspiracy theories on social media has widespread negative effects. By designing and applying different machine learning models, researchers have made progress in detecting fake news from text. However, existing research places a heavy emphasis on general, common-sense fake news, while in reality fake news often involves rapidly changing topics and domain-specific vocabulary. In this paper, we present our methods and results for three fake news detection tasks at the MediaEval benchmark 2021 that specifically involve COVID-19 related topics. We experiment with a group of text-based models including Support Vector Machines, Random Forest, BERT, and RoBERTa (a minimal TF-IDF pipeline in this spirit is sketched after this record). We find that a pre-trained transformer yields the best validation results, but a randomly initialized transformer with smart design can also be trained to reach accuracies close to those of the pre-trained transformer.
[ { "version": "v1", "created": "Sun, 1 May 2022 01:48:48 GMT" } ]
2022-05-03T00:00:00
[ [ "Guo", "Haoming", "" ], [ "Huang", "Tianyi", "" ], [ "Huang", "Huixuan", "" ], [ "Fan", "Mingyue", "" ], [ "Friedland", "Gerald", "" ] ]
new_dataset
0.998556
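A minimal sketch of a TF-IDF feature pipeline of the kind listed in the abstract above, with invented toy posts standing in for the MediaEval 2021 data (not the authors' exact setup):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# Toy stand-ins for MediaEval 2021 posts; the real task uses labeled tweets.
texts = ["5g towers spread the virus", "vaccines passed clinical trials",
         "the outbreak was planned", "masks reduce droplet transmission"]
labels = [1, 0, 1, 0]  # 1 = conspiracy-supporting, 0 = other

# TF-IDF unigrams and bigrams feeding a linear SVM, one common baseline shape.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
model.fit(texts, labels)
print(model.predict(["the virus was planned in a lab"]))
```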
2205.00440
Rajdeep Mukherjee
Raghav R, Adarsh Vemali, Rajdeep Mukherjee
ETMS@IITKGP at SemEval-2022 Task 10: Structured Sentiment Analysis Using A Generative Approach
9 pages, accepted at SemEval 2022 (collocated with NAACL 2022)
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Structured Sentiment Analysis (SSA) deals with extracting opinion tuples in a text, where each tuple (h, e, t, p) consists of h, the holder, who expresses a sentiment polarity p towards a target t through a sentiment expression e. While prior works explore graph-based or sequence labeling-based approaches for the task, in this paper we present a novel unified generative method to solve SSA, a SemEval-2022 shared task. We leverage a BART-based encoder-decoder architecture and suitably modify it to generate, given a sentence, a sequence of opinion tuples. Each generated tuple consists of seven integers: the indices of the start and end positions of the holder, target, and expression spans, followed by the sentiment polarity class linking the target and the sentiment expression (decoding one such tuple back into text spans is sketched after this record). We perform rigorous experiments for both the Monolingual and Cross-lingual subtasks, and achieve competitive Sentiment F1 scores on the leaderboard in both settings.
[ { "version": "v1", "created": "Sun, 1 May 2022 10:39:53 GMT" } ]
2022-05-03T00:00:00
[ [ "R", "Raghav", "" ], [ "Vemali", "Adarsh", "" ], [ "Mukherjee", "Rajdeep", "" ] ]
new_dataset
0.997566
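A sketch of how a seven-integer tuple of the shape described above can be decoded back into spans. The token indexing, inclusive boundaries, and polarity mapping here are assumptions for illustration, not the paper's exact convention:

```python
tokens = "John praised the camera for its great battery life".split()
POLARITY = {0: "negative", 1: "neutral", 2: "positive"}  # assumed mapping

def decode(tuple7):
    # Unpack holder, target, and expression span boundaries plus polarity id.
    hs, he, ts, te, es, ee, pol = tuple7
    span = lambda s, e: " ".join(tokens[s:e + 1])  # inclusive boundaries
    return {"holder": span(hs, he), "target": span(ts, te),
            "expression": span(es, ee), "polarity": POLARITY[pol]}

# holder "John", target "camera", expression "great battery life", positive
print(decode((0, 0, 3, 3, 6, 8, 2)))
```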
2205.00445
Ehud Karpas Dr.
Ehud Karpas, Omri Abend, Yonatan Belinkov, Barak Lenz, Opher Lieber, Nir Ratner, Yoav Shoham, Hofit Bata, Yoav Levine, Kevin Leyton-Brown, Dor Muhlgay, Noam Rozen, Erez Schwartz, Gal Shachaf, Shai Shalev-Shwartz, Amnon Shashua, Moshe Tenenholtz
MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Huge language models (LMs) have ushered in a new era for AI, serving as a gateway to natural-language-based knowledge tasks. Although an essential element of modern AI, LMs are also inherently limited in a number of ways. We discuss these limitations and how they can be avoided by adopting a systems approach. Conceptualizing the challenge as one that involves knowledge and reasoning in addition to linguistic processing, we define a flexible architecture with multiple neural models, complemented by discrete knowledge and reasoning modules. We describe this neuro-symbolic architecture, dubbed the Modular Reasoning, Knowledge and Language (MRKL, pronounced "miracle") system, some of the technical challenges in implementing it, and Jurassic-X, AI21 Labs' MRKL system implementation.
[ { "version": "v1", "created": "Sun, 1 May 2022 11:01:28 GMT" } ]
2022-05-03T00:00:00
[ [ "Karpas", "Ehud", "" ], [ "Abend", "Omri", "" ], [ "Belinkov", "Yonatan", "" ], [ "Lenz", "Barak", "" ], [ "Lieber", "Opher", "" ], [ "Ratner", "Nir", "" ], [ "Shoham", "Yoav", "" ], [ "Bata", "Hofit", "" ], [ "Levine", "Yoav", "" ], [ "Leyton-Brown", "Kevin", "" ], [ "Muhlgay", "Dor", "" ], [ "Rozen", "Noam", "" ], [ "Schwartz", "Erez", "" ], [ "Shachaf", "Gal", "" ], [ "Shalev-Shwartz", "Shai", "" ], [ "Shashua", "Amnon", "" ], [ "Tenenholtz", "Moshe", "" ] ]
new_dataset
0.99913
2205.00467
Federico Pigozzi Mr
Federico Pigozzi
Shape Change and Control of Pressure-based Soft Agents
Accepted at ALife'22 conference as full paper
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Biological agents possess bodies that are mostly of soft tissues. Researchers have resorted to soft bodies to investigate Artificial Life (ALife)-related questions; similarly, a new era of soft-bodied robots has just begun. Nevertheless, because of their infinite degrees of freedom, soft bodies pose unique challenges in terms of simulation, control, and optimization. Here we propose a novel soft-bodied agents formalism, namely Pressure-based Soft Agents (PSAs): they are bodies of gas enveloped by a chain of springs and masses, with pressure pushing on the masses from inside the body. Pressure endows the agents with structure, while springs and masses simulate softness and allow the agents to assume a large gamut of shapes. Actuation takes place by changing the length of springs or modulating global pressure (a numerical sketch of the pressure force follows this record). We optimize the controller of PSAs for a locomotion task on hilly terrain and an escape task from a cage; the latter is particularly suitable for soft-bodied agents, as it requires the agent to contort itself to squeeze through a small aperture. Our results suggest that PSAs are indeed effective at those tasks and that controlling pressure is fundamental for shape-changing. Looking forward, we envision PSAs to play a role in the modeling of soft-bodied agents, including soft robots and biological cells. Videos of evolved agents are available at https://pressuresoftagents.github.io.
[ { "version": "v1", "created": "Sun, 1 May 2022 13:36:27 GMT" } ]
2022-05-03T00:00:00
[ [ "Pigozzi", "Federico", "" ] ]
new_dataset
0.999292
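A numerical sketch of the pressure mechanism described above: a closed chain of point masses with an internal pressure force applied along each edge's outward normal, where pressure falls as the enclosed area grows. The 2D gas-law constant and the even force split are invented for illustration, and the spring terms are omitted:

```python
import numpy as np

def pressure_forces(pos, nRT=1.0):
    """Internal pressure forces on a CCW closed chain of 2D point masses."""
    nxt = np.roll(pos, -1, axis=0)
    # Shoelace formula gives the enclosed (gas) area, positive for CCW order.
    area = 0.5 * np.sum(pos[:, 0] * nxt[:, 1] - nxt[:, 0] * pos[:, 1])
    P = nRT / area                    # toy 2D gas law: pressure * area = nRT
    edges = nxt - pos
    normals = np.stack([edges[:, 1], -edges[:, 0]], axis=1)  # outward (CCW)
    edge_force = P * normals          # |normal| already equals edge length
    # Split each edge's push equally between its two endpoint masses.
    return 0.5 * (edge_force + np.roll(edge_force, 1, axis=0))

square = np.array([[0., 0.], [1., 0.], [1., 1.], [0., 1.]])  # CCW unit square
print(pressure_forces(square))  # corners pushed outwards; net force is zero
```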
2205.00485
Liuhui Deng
Liuhui Deng, Roger Hsiao, Arnab Ghoshal
Bilingual End-to-End ASR with Byte-Level Subwords
5 pages, to be published in IEEE ICASSP 2022
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we investigate how the output representation of an end-to-end neural network affects multilingual automatic speech recognition (ASR). We study different representations including character-level, byte-level, byte pair encoding (BPE), and byte-level byte pair encoding (BBPE) representations, and analyze their strengths and weaknesses (the raw byte-level view is illustrated after this record). We focus on developing a single end-to-end model to support utterance-based bilingual ASR, where speakers do not alternate between two languages in a single utterance but may change languages across utterances. We conduct our experiments on English and Mandarin dictation tasks, and we find that BBPE with penalty schemes can improve utterance-based bilingual ASR performance by 2% to 5% relative, even with a smaller number of outputs and fewer parameters. We conclude with analysis that indicates directions for further improving multilingual ASR.
[ { "version": "v1", "created": "Sun, 1 May 2022 15:01:01 GMT" } ]
2022-05-03T00:00:00
[ [ "Deng", "Liuhui", "" ], [ "Hsiao", "Roger", "" ], [ "Ghoshal", "Arnab", "" ] ]
new_dataset
0.983512
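A short illustration of the byte-level representation discussed above: UTF-8 bytes give a single 256-symbol output alphabet that covers both English and Mandarin, at the cost of several symbols per Chinese character (BPE merges and penalty schemes omitted):

```python
def to_bytes(text):
    # UTF-8 bytes: one fixed 256-symbol alphabet covers both languages.
    return list(text.encode("utf-8"))

print(to_bytes("hello"))   # [104, 101, 108, 108, 111] -- one byte per letter
print(to_bytes("你好"))    # [228, 189, 160, 229, 165, 189] -- three bytes each
print(bytes(to_bytes("你好")).decode("utf-8"))  # lossless round trip
```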
2205.00613
Tianyuan Zhang
Tianyuan Zhang, Xuanyao Chen, Yue Wang, Yilun Wang, Hang Zhao
MUTR3D: A Multi-camera Tracking Framework via 3D-to-2D Queries
Appear on CVPR 2022 Workshop on Autonomous Driving
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate and consistent 3D tracking from multiple cameras is a key component in a vision-based autonomous driving system. It involves modeling 3D dynamic objects in complex scenes across multiple cameras. This problem is inherently challenging due to depth estimation, visual occlusions, appearance ambiguity, etc. Moreover, objects are not consistently associated across time and cameras. To address that, we propose an end-to-end \textbf{MU}lti-camera \textbf{TR}acking framework called MUTR3D. In contrast to prior works, MUTR3D does not explicitly rely on the spatial and appearance similarity of objects. Instead, our method introduces \textit{3D track query} to model spatial and appearance coherent track for each object that appears in multiple cameras and multiple frames. We use camera transformations to link 3D trackers with their observations in 2D images. Each tracker is further refined according to the features that are obtained from camera images. MUTR3D uses a set-to-set loss to measure the difference between the predicted tracking results and the ground truths. Therefore, it does not require any post-processing such as non-maximum suppression and/or bounding box association. MUTR3D outperforms state-of-the-art methods by 5.3 AMOTA on the nuScenes dataset. Code is available at: \url{https://github.com/a1600012888/MUTR3D}.
[ { "version": "v1", "created": "Mon, 2 May 2022 01:45:41 GMT" } ]
2022-05-03T00:00:00
[ [ "Zhang", "Tianyuan", "" ], [ "Chen", "Xuanyao", "" ], [ "Wang", "Yue", "" ], [ "Wang", "Yilun", "" ], [ "Zhao", "Hang", "" ] ]
new_dataset
0.995177
2205.00618
Aleksandar Zlateski
Bram Wasti, Jos\'e Pablo Cambronero, Benoit Steiner, Hugh Leather and Aleksandar Zlateski
LoopStack: a Lightweight Tensor Algebra Compiler Stack
null
null
null
null
cs.LG cs.PF cs.SC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present LoopStack, a domain specific compiler stack for tensor operations, composed of a frontend, LoopTool, and an efficient optimizing code generator, LoopNest. This stack enables us to compile entire neural networks and generate code targeting the AVX2, AVX512, NEON, and NEONfp16 instruction sets while incorporating optimizations often missing from other machine learning compiler backends. We evaluate our stack on a collection of full neural networks and commonly used network blocks as well as individual operators, and show that LoopStack generates machine code that matches and frequently exceeds the performance of state-of-the-art machine learning frameworks in both cases. We also show that for a large collection of schedules LoopNest's compilation is orders of magnitude faster than LLVM, while resulting in equal or improved run time performance. Additionally, LoopStack has a very small memory footprint: a binary size of 245KB and under 30K lines of effective code make it ideal for use on mobile and embedded devices.
[ { "version": "v1", "created": "Mon, 2 May 2022 01:57:58 GMT" } ]
2022-05-03T00:00:00
[ [ "Wasti", "Bram", "" ], [ "Cambronero", "José Pablo", "" ], [ "Steiner", "Benoit", "" ], [ "Leather", "Hugh", "" ], [ "Zlateski", "Aleksandar", "" ] ]
new_dataset
0.99878
2205.00661
Runzhou Tao
Runzhou Tao, Yunong Shi, Jianan Yao, Xupeng Li, Ali Javadi-Abhari, Andrew W. Cross, Frederic T. Chong, Ronghui Gu
Giallar: Push-Button Verification for the Qiskit Quantum Compiler
PLDI 2022; Improves arXiv:1908.08963
null
null
null
cs.PL quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents Giallar, a fully-automated verification toolkit for quantum compilers. Giallar requires no manual specifications, invariants, or proofs, and can automatically verify that a compiler pass preserves the semantics of quantum circuits. To deal with unbounded loops in quantum compilers, Giallar abstracts three loop templates, whose loop invariants can be automatically inferred. To efficiently check the equivalence of arbitrary input and output circuits that have complicated matrix semantics representation, Giallar introduces a symbolic representation for quantum circuits and a set of rewrite rules for showing the equivalence of symbolic quantum circuits. With Giallar, we implemented and verified 44 (out of 56) compiler passes in 13 versions of the Qiskit compiler, the open-source quantum compiler standard, during which three bugs in Qiskit were detected and confirmed. Our evaluation shows that most Qiskit compiler passes can be automatically verified in seconds and that verification imposes only a modest overhead on compilation performance.
[ { "version": "v1", "created": "Mon, 2 May 2022 05:37:18 GMT" } ]
2022-05-03T00:00:00
[ [ "Tao", "Runzhou", "" ], [ "Shi", "Yunong", "" ], [ "Yao", "Jianan", "" ], [ "Li", "Xupeng", "" ], [ "Javadi-Abhari", "Ali", "" ], [ "Cross", "Andrew W.", "" ], [ "Chong", "Frederic T.", "" ], [ "Gu", "Ronghui", "" ] ]
new_dataset
0.986824
2205.00777
TianSheuan Chang
Dun-Hao Yang, and Tian-Sheuan Chang
BSRA: Block-based Super Resolution Accelerator with Hardware Efficient Pixel Attention
5 pages, 5 figures, published in IEEE ISCAS 2022
null
null
null
cs.AR cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by-sa/4.0/
Increasingly, convolutional neural network (CNN) based super resolution models have been proposed for better reconstruction results, but their large model size and complicated structure inhibit real-time hardware implementation. Current hardware designs are limited to plain networks and suffer from lower quality and high memory bandwidth requirements. This paper proposes a super resolution hardware accelerator with hardware-efficient pixel attention that needs just 25.9K parameters and a simple structure, yet achieves reconstructed images 0.38 dB better than those of the widely used FSRCNN. The accelerator adopts full-model block-wise convolution for full-model layer fusion, reducing external memory access to model input and output only. In addition, CNN and pixel attention are well supported by PE arrays with distributed weights. The final implementation can support full HD image reconstruction at 30 frames per second with the TSMC 40nm CMOS process.
[ { "version": "v1", "created": "Mon, 2 May 2022 09:56:29 GMT" } ]
2022-05-03T00:00:00
[ [ "Yang", "Dun-Hao", "" ], [ "Chang", "Tian-Sheuan", "" ] ]
new_dataset
0.999467
2205.00806
Tharindu Ranasinghe Dr
Alistair Plum, Tharindu Ranasinghe, Spencer Jones, Constantin Orasan, Ruslan Mitkov
Biographical: A Semi-Supervised Relation Extraction Dataset
Accepted to ACM SIGIR 2022
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Extracting biographical information from online documents is a popular research topic among the information extraction (IE) community. Various natural language processing (NLP) techniques such as text classification, text summarisation and relation extraction (RE) are commonly used to achieve this. Among these techniques, RE is the most common since it can be directly used to build biographical knowledge graphs. RE is usually framed as a supervised machine learning (ML) problem, where ML models are trained on annotated datasets. However, there are few annotated datasets for RE since the annotation process can be costly and time-consuming. To address this, we developed Biographical, the first semi-supervised dataset for RE. The dataset, which is aimed towards digital humanities (DH) and historical research, is automatically compiled by aligning sentences from Wikipedia articles with matching structured data from sources including Pantheon and Wikidata. By exploiting the structure of Wikipedia articles and robust named entity recognition (NER), we match information with relatively high precision in order to compile annotated relation pairs for ten different relations that are important in the DH domain. Furthermore, we demonstrate the effectiveness of the dataset by training a state-of-the-art neural model to classify relation pairs, and evaluate it on a manually annotated gold standard set. Biographical is primarily aimed at training neural models for RE within the domain of digital humanities and history, but as we discuss at the end of this paper, it can be useful for other purposes as well.
[ { "version": "v1", "created": "Mon, 2 May 2022 10:48:23 GMT" } ]
2022-05-03T00:00:00
[ [ "Plum", "Alistair", "" ], [ "Ranasinghe", "Tharindu", "" ], [ "Jones", "Spencer", "" ], [ "Orasan", "Constantin", "" ], [ "Mitkov", "Ruslan", "" ] ]
new_dataset
0.999822
2205.00868
Jeanine Treffers-Daller Professor
Jeanine Treffers-Daller and Ozlem \c{C}etino\u{g}lu
TuGeBiC: A Turkish German Bilingual Code-Switching Corpus
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In this paper we describe the process of collection, transcription, and annotation of recordings of spontaneous speech samples from Turkish-German bilinguals, and the compilation of a corpus called TuGeBiC. Participants in the study were adult Turkish-German bilinguals living in Germany or Turkey at the time of recording in the first half of the 1990s. The data were manually tokenised and normalised, and all proper names (names of participants and places mentioned in the conversations) were replaced with pseudonyms. Token-level automatic language identification was performed, which made it possible to establish the proportions of words from each language. The corpus is roughly balanced between both languages. We also present quantitative information about the number of code-switches, and give examples of different types of code-switching found in the data. The resulting corpus has been made freely available to the research community.
[ { "version": "v1", "created": "Mon, 2 May 2022 12:53:05 GMT" } ]
2022-05-03T00:00:00
[ [ "and", "Jeanine Treffers-Daller", "" ], [ "Çetinoğlu", "Ozlem", "" ] ]
new_dataset
0.996744
2205.00871
Alexandra Buchmann
Alexandra Buchmann, Bernadett Kiss, Alexander Badri-Sprowitz and Daniel Renjewski
Power to the springs: Passive elements are sufficient to drive push-off in human walking
12 pages, 4 figures
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For the impulsive ankle push-off (APO) observed in human walking, two muscle-tendon units (MTUs) spanning the ankle joint play an important role: the gastrocnemius (GAS) and the soleus (SOL). GAS and SOL load the Achilles tendon to store elastic energy during stance, followed by a rapid energy release during APO. We use a neuromuscular simulation (NMS) and a bipedal robot to investigate the role of GAS and SOL in the APO. We optimize the simulation for a robust gait and then sequentially replace the MTUs of (1) GAS, (2) SOL, and (3) both GAS and SOL by linear springs. To validate the simulation, we implement NMS-3 on a bipedal robot. Simulation and robot walk steadily in all trials and show an impulsive APO. Our results imply that the elastic MTU properties shape the impulsive APO. For prosthesis or robot design this means that no complex ankle actuation is needed to obtain an impulsive APO if more mechanical intelligence is incorporated in the design.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 15:05:38 GMT" } ]
2022-05-03T00:00:00
[ [ "Buchmann", "Alexandra", "" ], [ "Kiss", "Bernadett", "" ], [ "Badri-Sprowitz", "Alexander", "" ], [ "Renjewski", "Daniel", "" ] ]
new_dataset
0.999349
2205.00889
Vera Traub
Jannis Blauth, Stephan Held, Dirk M\"uller, Niklas Schlomberg, Vera Traub, Thorben Tr\"obst, Jens Vygen
Vehicle Routing with Time-Dependent Travel Times: Theory, Practice, and Benchmarks
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We develop theoretical foundations and practical algorithms for vehicle routing with time-dependent travel times. We also provide new benchmark instances and experimental results. First, we study basic operations on piecewise linear arrival time functions. In particular, we devise a faster algorithm to compute the pointwise minimum of a set of piecewise linear functions (a naive baseline for this operation is sketched after this record) and a monotonicity-preserving variant of the Imai-Iri algorithm to approximate an arrival time function with fewer breakpoints. Next, we show how to evaluate insertion and deletion operations in tours efficiently and update the underlying data structure faster than previously known when a tour changes. Evaluating a tour also requires a scheduling step which is non-trivial in the presence of time windows and time-dependent travel times. We show how to perform this in linear time. Based on these results, we develop a local search heuristic to solve real-world vehicle routing problems with various constraints efficiently and report experimental results on classical benchmarks. Since most of these do not have time-dependent travel times, we generate and publish new benchmark instances that are based on real-world data. This data also demonstrates the importance of considering time-dependent travel times in instances with tight time windows.
[ { "version": "v1", "created": "Mon, 2 May 2022 13:01:55 GMT" } ]
2022-05-03T00:00:00
[ [ "Blauth", "Jannis", "" ], [ "Held", "Stephan", "" ], [ "Müller", "Dirk", "" ], [ "Schlomberg", "Niklas", "" ], [ "Traub", "Vera", "" ], [ "Tröbst", "Thorben", "" ], [ "Vygen", "Jens", "" ] ]
new_dataset
0.994945
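A naive baseline for the pointwise-minimum operation named above, folding two piecewise linear functions given as breakpoint arrays; the paper's contribution is a faster algorithm, which this sketch is not:

```python
import numpy as np

def pwl_min(f, g):
    """Pointwise minimum of two piecewise linear functions on a shared
    interval, each given as (xs, ys) breakpoint arrays; simple baseline."""
    xs = np.union1d(f[0], g[0])               # merged breakpoints
    fy, gy = np.interp(xs, *f), np.interp(xs, *g)
    out_x, out_y = [], []
    for i in range(len(xs) - 1):
        out_x.append(xs[i]); out_y.append(min(fy[i], gy[i]))
        d0, d1 = fy[i] - gy[i], fy[i + 1] - gy[i + 1]
        if d0 * d1 < 0:                       # segments cross inside the cell
            t = d0 / (d0 - d1)
            xc = xs[i] + t * (xs[i + 1] - xs[i])
            out_x.append(xc); out_y.append(np.interp(xc, *f))
    out_x.append(xs[-1]); out_y.append(min(fy[-1], gy[-1]))
    return np.array(out_x), np.array(out_y)

f = (np.array([0., 10.]), np.array([0., 10.]))  # arrival time t -> t
g = (np.array([0., 10.]), np.array([6., 6.]))   # constant 6
print(pwl_min(f, g))  # a kink appears at x = 6 where the two lines cross
```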
2205.00911
Emil H\"aglund
Emil H\"aglund and Johanna Bj\"orklund
AI-Driven Contextual Advertising: A Technology Report and Implication Analysis
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Programmatic advertising consists in automated auctioning of digital ad space. Every time a user requests a web page, placeholders on the page are populated with ads from the highest-bidding advertisers. The bids are typically based on information about the user, and to an increasing extent, on information about the surrounding media context. The growing interest in contextual advertising is in part a counterreaction to the current dependency on personal data, which is problematic from legal and ethical standpoints. The transition is further accelerated by developments in Artificial Intelligence (AI), which allow for a deeper semantic understanding of context and, by extension, more effective ad placement. In this article, we begin by identifying context factors that have been shown in previous research to positively influence how ads are received. We then continue to discuss applications of AI in contextual advertising, where it adds value by, e.g., extracting high-level information about media context and optimising bidding strategies. However, left unchecked, these new practices can lead to unfair ad delivery and manipulative use of context. We summarize these and other concerns for consumers, publishers and advertisers in an implication analysis.
[ { "version": "v1", "created": "Mon, 2 May 2022 13:44:58 GMT" } ]
2022-05-03T00:00:00
[ [ "Häglund", "Emil", "" ], [ "Björklund", "Johanna", "" ] ]
new_dataset
0.990936
2205.00916
Kai Wang
Xiaohong Li, Xiang Wang, Kai Wang, Shiguo Lian
A Novel Speech-Driven Lip-Sync Model with CNN and LSTM
This paper has been published on CISP-BMEI 2021. See https://ieeexplore.ieee.org/document/9624360
null
10.1109/CISP-BMEI53629.2021.9624360
null
cs.SD cs.AI cs.CV cs.GR eess.AS
http://creativecommons.org/licenses/by/4.0/
Generating synchronized and natural lip movement with speech is one of the most important tasks in creating realistic virtual characters. In this paper, we present a combined deep neural network of one-dimensional convolutions and LSTM to generate vertex displacement of a 3D template face model from variable-length speech input. The motion of the lower part of the face, which is represented by the vertex movement of 3D lip shapes, is consistent with the input speech. In order to enhance the robustness of the network to different sound signals, we adapt a trained speech recognition model to extract speech features, and a velocity loss term is adopted to reduce the jitter of generated facial animation. We recorded a series of videos of a Chinese adult speaking Mandarin and created a new speech-animation dataset to compensate for the lack of such public data. Qualitative and quantitative evaluations indicate that our model is able to generate smooth and natural lip movements synchronized with speech.
[ { "version": "v1", "created": "Mon, 2 May 2022 13:57:50 GMT" } ]
2022-05-03T00:00:00
[ [ "Li", "Xiaohong", "" ], [ "Wang", "Xiang", "" ], [ "Wang", "Kai", "" ], [ "Lian", "Shiguo", "" ] ]
new_dataset
0.965512
2205.00952
Sriram Baireddy
Sriram Baireddy and Da-Young Lee and Carlos Gongora-Canul and Christian D. Cruz and Edward J. Delp
Leaf Tar Spot Detection Using RGB Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tar spot disease is a fungal disease that appears as a series of black circular spots containing spores on corn leaves. Tar spot has proven to be an impactful disease in terms of reducing crop yield. To quantify disease progression, experts usually have to visually phenotype leaves from the plant. This process is very time-consuming and is difficult to incorporate in any high-throughput phenotyping system. Deep neural networks could provide quick, automated tar spot detection with sufficient ground truth. However, manually labeling tar spots in images to serve as ground truth is also tedious and time-consuming. In this paper we first describe an approach that uses automated image analysis tools to generate ground truth images that are then used for training a Mask R-CNN. We show that a Mask R-CNN can be used effectively to detect tar spots in close-up images of leaf surfaces. We additionally show that the Mask R-CNN can also be used for in-field images of whole leaves to capture the number of tar spots and area of the leaf infected by the disease.
[ { "version": "v1", "created": "Mon, 2 May 2022 14:56:06 GMT" } ]
2022-05-03T00:00:00
[ [ "Baireddy", "Sriram", "" ], [ "Lee", "Da-Young", "" ], [ "Gongora-Canul", "Carlos", "" ], [ "Cruz", "Christian D.", "" ], [ "Delp", "Edward J.", "" ] ]
new_dataset
0.997603
2205.00973
Stefano Savazzi
Marco Santoboni, Riccardo Bersan, Stefano Savazzi, Alberto Zecchin, Vittorio Rampa, Daniele Piazza
Wireless LAN sensing with smart antennas
Accepted for publication in EuCAP 2022, https://www.eucap2022.org/
null
null
null
cs.NI cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper targets the problem of human motion detection using Wireless Local Area Network (WiFi) devices equipped with pattern-reconfigurable antennas. Motion sensing is obtained by monitoring the body-induced alterations of ambient WiFi signals originating from smart antennas that support beam-steering technology, thus allowing the antenna radiation pattern to be channelled toward pre-defined spots of interest. We first discuss signal and Channel State Information (CSI) processing and sanitization. Next, we describe the motion detection algorithm based on Angle-of-Arrival (AoA) monitoring. The proposed algorithms are validated experimentally inside a large smart-home environment.
[ { "version": "v1", "created": "Wed, 27 Apr 2022 17:29:24 GMT" } ]
2022-05-03T00:00:00
[ [ "Santoboni", "Marco", "" ], [ "Bersan", "Riccardo", "" ], [ "Savazzi", "Stefano", "" ], [ "Zecchin", "Alberto", "" ], [ "Rampa", "Vittorio", "" ], [ "Piazza", "Daniele", "" ] ]
new_dataset
0.996927
2205.01044
Adrianus Vinck
A.J. Han Vinck
Coding Concepts and Reed-Solomon Codes
null
null
null
null
cs.IT math.IT
http://creativecommons.org/publicdomain/zero/1.0/
The material in this book is presented to graduate students in Information and Communication theory. The idea is to give an introduction to particular applications of information theory and coding in digital communications. The goal is to bring understanding of the underlying concepts, both in theory and in practice. We mainly concentrate on our own research results. After showing the obtainable performance, we give a specific implementation using Reed-Solomon (RS) codes. The reason for using RS codes is that they can be seen as optimal codes with the maximum obtainable minimum distance. Furthermore, the structure of RS codes enables specific applications that fit perfectly into the developed concepts. We do not intend to develop the theory of error-correcting codes.
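As background to this choice of codes: an RS code with n - k parity symbols has minimum distance n - k + 1 (it is MDS) and corrects up to floor((n - k) / 2) symbol errors. A tiny demonstration with the third-party `reedsolo` package (pip install reedsolo); this illustrates the general concept, not any construction specific to the book.

```python
from reedsolo import RSCodec

rsc = RSCodec(10)                     # 10 parity bytes -> corrects up to 5 byte errors
codeword = rsc.encode(b"information theory")

corrupted = bytearray(codeword)
corrupted[0] ^= 0xFF                  # corrupt two bytes
corrupted[5] ^= 0x42

decoded = rsc.decode(bytes(corrupted))[0]   # reedsolo >= 1.0 returns (msg, msg+ecc, errata)
print(decoded)                              # bytearray(b'information theory')
```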
[ { "version": "v1", "created": "Mon, 2 May 2022 17:26:43 GMT" } ]
2022-05-03T00:00:00
[ [ "Vinck", "A. J. Han", "" ] ]
new_dataset
0.992334
2205.01048
Xiaobao Wei
Shang Liu, Xiaobao Wei, Lulu Wang, Jing Zhang, Boyu Li and Haosong Yue
Center-of-Mass-based Robust Grasp Pose Adaptation Using RGBD Camera and Force/Torque Sensing
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Object dropping may occur when a robotic arm grasps objects with uneven mass distribution, due to the additional moments generated by the objects' gravity. To solve this problem, we present a novel method that requires neither extra wrist or tactile sensors nor large numbers of learning experiments. First, we obtain the center-of-mass position of a rod-shaped object using the joint torque sensors commonly fitted to robot arms together with an RGBD camera. We then give a grasping strategy that improves grasp stability. Simulation experiments are performed in MuJoCo. Results demonstrate that our method is effective in enhancing grasping robustness.
[ { "version": "v1", "created": "Mon, 2 May 2022 17:32:17 GMT" } ]
2022-05-03T00:00:00
[ [ "Liu", "Shang", "" ], [ "Wei", "Xiaobao", "" ], [ "Wang", "Lulu", "" ], [ "Zhang", "Jing", "" ], [ "Li", "Boyu", "" ], [ "Yue", "Haosong", "" ] ]
new_dataset
0.974453
2205.01086
Felix Wu
Felix Wu, Kwangyoun Kim, Shinji Watanabe, Kyu Han, Ryan McDonald, Kilian Q. Weinberger, Yoav Artzi
Wav2Seq: Pre-training Speech-to-Text Encoder-Decoder Models Using Pseudo Languages
Code available at https://github.com/asappresearch/wav2seq
null
null
null
cs.CL cs.LG cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
We introduce Wav2Seq, the first self-supervised approach to pre-train both parts of encoder-decoder models for speech data. We induce a pseudo language as a compact discrete representation, and formulate a self-supervised pseudo speech recognition task -- transcribing audio inputs into pseudo subword sequences. This process stands on its own, or can be applied as low-cost second-stage pre-training. We experiment with automatic speech recognition (ASR), spoken named entity recognition, and speech-to-text translation. We set new state-of-the-art results for end-to-end spoken named entity recognition, and show consistent improvements on 20 language pairs for speech-to-text translation, even when competing methods use additional text data for training. Finally, on ASR, our approach enables encoder-decoder methods to benefit from pre-training for all parts of the network, and shows comparable performance to highly optimized recent methods.
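One way to picture the pseudo-language induction: cluster frame-level acoustic features into discrete unit IDs and collapse repeats, after which a subword model can be learned over the unit sequences. The sketch below uses MFCCs and k-means as simplifying stand-ins for the learned features and full pipeline described above.

```python
import librosa
import numpy as np
from sklearn.cluster import KMeans

def pseudo_tokens(wav_paths, k=100):
    # Frame-level MFCCs per utterance; 16 kHz and k=100 clusters are assumptions.
    feats = [librosa.feature.mfcc(y=librosa.load(p, sr=16000)[0], sr=16000).T
             for p in wav_paths]
    km = KMeans(n_clusters=k, n_init=10).fit(np.vstack(feats))
    token_seqs = []
    for f in feats:
        ids = km.predict(f)
        # Collapse consecutive duplicates to get a compact discrete unit sequence.
        prev = np.r_[-1, ids[:-1]]
        token_seqs.append([int(i) for i, p in zip(ids, prev) if i != p])
    return token_seqs   # unit sequences a subword (e.g. BPE) step could consume
```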
[ { "version": "v1", "created": "Mon, 2 May 2022 17:59:02 GMT" } ]
2022-05-03T00:00:00
[ [ "Wu", "Felix", "" ], [ "Kim", "Kwangyoun", "" ], [ "Watanabe", "Shinji", "" ], [ "Han", "Kyu", "" ], [ "McDonald", "Ryan", "" ], [ "Weinberger", "Kilian Q.", "" ], [ "Artzi", "Yoav", "" ] ]
new_dataset
0.994491
2205.01089
Chuang Gan
Zhenfang Chen, Kexin Yi, Yunzhu Li, Mingyu Ding, Antonio Torralba, Joshua B. Tenenbaum, Chuang Gan
ComPhy: Compositional Physical Reasoning of Objects and Events from Videos
ICLR 2022. Project page: https://comphyreasoning.github.io/
null
null
null
cs.CV cs.AI cs.LG cs.RO
http://creativecommons.org/publicdomain/zero/1.0/
Objects' motions in nature are governed by complex interactions and their properties. While some properties, such as shape and material, can be identified via the object's visual appearance, others, like mass and electric charge, are not directly visible. The compositionality between the visible and hidden properties poses unique challenges for AI models reasoning about the physical world, whereas humans can effortlessly infer these properties from limited observations. Existing studies on video reasoning mainly focus on visually observable elements such as object appearance, movement, and contact interaction. In this paper, we take an initial step toward highlighting the importance of inferring hidden physical properties not directly observable from visual appearance, by introducing the Compositional Physical Reasoning (ComPhy) dataset. For a given set of objects, ComPhy includes a few videos of them moving and interacting under different initial conditions. The model is evaluated on its capability to unravel the compositional hidden properties, such as mass and charge, and use this knowledge to answer a set of questions posed about one of the videos. Evaluation results of several state-of-the-art video reasoning models on ComPhy show unsatisfactory performance, as they fail to capture these hidden properties. We further propose an oracle neural-symbolic framework named Compositional Physics Learner (CPL), combining visual perception, physical property learning, dynamic prediction, and symbolic execution into a unified framework. CPL can effectively identify objects' physical properties from their interactions and predict their dynamics to answer questions.
[ { "version": "v1", "created": "Mon, 2 May 2022 17:59:13 GMT" } ]
2022-05-03T00:00:00
[ [ "Chen", "Zhenfang", "" ], [ "Yi", "Kexin", "" ], [ "Li", "Yunzhu", "" ], [ "Ding", "Mingyu", "" ], [ "Torralba", "Antonio", "" ], [ "Tenenbaum", "Joshua B.", "" ], [ "Gan", "Chuang", "" ] ]
new_dataset
0.999662
1508.07593
Remi Ronfard
Remi Ronfard and Vineet Gandhi and Laurent Boiron and Vaishnavi Ameya Murukutla
The Prose Storyboard Language: A Tool for Annotating and Directing Movies
20 pages, extended version includes new figures and references
null
null
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The prose storyboard language is a formal language for describing movies shot by shot, where each shot is described with a unique sentence. The language uses a simple syntax and limited vocabulary borrowed from working practices in traditional movie-making, and is intended to be readable both by machines and humans. The language is designed to serve as a high-level user interface for intelligent cinematography and editing systems.
[ { "version": "v1", "created": "Sun, 30 Aug 2015 16:12:59 GMT" }, { "version": "v2", "created": "Fri, 6 Dec 2019 07:56:06 GMT" }, { "version": "v3", "created": "Fri, 13 Dec 2019 22:48:24 GMT" }, { "version": "v4", "created": "Fri, 30 Oct 2020 11:55:13 GMT" }, { "version": "v5", "created": "Fri, 29 Apr 2022 07:02:49 GMT" } ]
2022-05-02T00:00:00
[ [ "Ronfard", "Remi", "" ], [ "Gandhi", "Vineet", "" ], [ "Boiron", "Laurent", "" ], [ "Murukutla", "Vaishnavi Ameya", "" ] ]
new_dataset
0.985775
2101.06838
Jaime Arias
Jaime Arias, {\L}ukasz Ma\'sko, Wojciech Penczek, Laure Petrucci and Teofil Sidoruk
Minimal Schedule with Minimal Number of Agents in Attack-Defence Trees
null
null
null
null
cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Expressing attack-defence trees in a multi-agent setting allows for studying a new aspect of security scenarios, namely how the number of agents and their task assignment impact the performance, e.g. attack time, of strategies executed by opposing coalitions. Optimal scheduling of agents' actions, a non-trivial problem, is thus vital. We discuss associated caveats and propose an algorithm that synthesises such an assignment, targeting minimal attack time and using a minimal number of agents for a given attack-defence tree.
[ { "version": "v1", "created": "Mon, 18 Jan 2021 02:08:53 GMT" }, { "version": "v2", "created": "Mon, 1 Feb 2021 18:38:37 GMT" }, { "version": "v3", "created": "Sun, 14 Feb 2021 09:49:51 GMT" }, { "version": "v4", "created": "Mon, 26 Apr 2021 07:35:59 GMT" }, { "version": "v5", "created": "Fri, 29 Apr 2022 13:19:09 GMT" } ]
2022-05-02T00:00:00
[ [ "Arias", "Jaime", "" ], [ "Maśko", "Łukasz", "" ], [ "Penczek", "Wojciech", "" ], [ "Petrucci", "Laure", "" ], [ "Sidoruk", "Teofil", "" ] ]
new_dataset
0.996518
2101.08750
Emiliano De Cristofaro
Antonis Papasavva, Max Aliapoulios, Cameron Ballard, Emiliano De Cristofaro, Gianluca Stringhini, Savvas Zannettou, and Jeremy Blackburn
The Gospel According to Q: Understanding the QAnon Conspiracy from the Perspective of Canonical Information
null
Published in the Proceedings of the 16th International AAAI Conference on Web and Social Media (ICWSM 2022). Please cite accordingly
null
null
cs.CY cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The QAnon conspiracy theory claims that a cabal of (literally) blood-thirsty politicians and media personalities are engaged in a war to destroy society. By interpreting cryptic "drops" of information from an anonymous insider calling themself Q, adherents of the conspiracy theory believe that Donald Trump is leading them in an active fight against this cabal. QAnon has been covered extensively by the media, as its adherents have been involved in multiple violent acts, including the January 6th, 2021 seditious storming of the US Capitol building. Nevertheless, we still have relatively little understanding of how the theory evolved and spread on the Web, and the role played in that by multiple platforms. To address this gap, we study QAnon from the perspective of "Q" themself. We build a dataset of 4,949 canonical Q drops collected from six "aggregation sites," which curate and archive them from their original posting to anonymous and ephemeral image boards. We show that these sites have relatively low (overall) agreement, and thus at least some Q drops should probably be considered apocryphal. We then analyze the Q drops' contents to identify topics of discussion and find statistically significant indications that drops were not authored by a single individual. Finally, we look at how posts on Reddit are used to disseminate Q drops to wider audiences. We find that dissemination was (initially) limited to a few sub-communities and that, while heavy-handed moderation decisions have reduced the overall issue, the "gospel" of Q persists on the Web.
[ { "version": "v1", "created": "Thu, 21 Jan 2021 18:03:24 GMT" }, { "version": "v2", "created": "Thu, 20 May 2021 17:57:49 GMT" }, { "version": "v3", "created": "Fri, 29 Apr 2022 10:32:00 GMT" } ]
2022-05-02T00:00:00
[ [ "Papasavva", "Antonis", "" ], [ "Aliapoulios", "Max", "" ], [ "Ballard", "Cameron", "" ], [ "De Cristofaro", "Emiliano", "" ], [ "Stringhini", "Gianluca", "" ], [ "Zannettou", "Savvas", "" ], [ "Blackburn", "Jeremy", "" ] ]
new_dataset
0.999474
2106.07560
Marios Papachristou
Marios Papachristou, Jon Kleinberg
Allocating Stimulus Checks in Times of Crisis
Accepted at WWW 2022 (Proceedings of the Web Conference)
null
10.1145/3485447.3512047
null
cs.SI cs.GT
http://creativecommons.org/licenses/by-sa/4.0/
We study the problem of allocating bailouts (stimulus, subsidy allocations) to people participating in a financial network subject to income shocks. We build on the financial clearing framework of Eisenberg and Noe, which allows the incorporation of a bailout policy based on discrete bailouts motivated by the types of stimulus checks people receive around the world as part of COVID-19 economic relief plans. We show that optimally allocating such bailouts on a financial network in order to maximize a variety of social welfare objectives of this form is a computationally intractable problem. We develop approximation algorithms to optimize these objectives and establish guarantees for their approximation ratios. Then, we incorporate multiple fairness constraints in the optimization problems and establish relative bounds on the solutions with versus without these constraints. Finally, we apply our methodology to a variety of data, both in the context of a system of large financial institutions with real-world data, as well as in a realistic societal context with financial interactions between people and businesses, for which we use semi-artificial data derived from mobility patterns. Our results suggest that the algorithms we develop and study perform well in practice and outperform other network-based heuristics. We argue that the presented problem, viewed through a societal-level lens, could assist policymakers in making informed decisions on issuing subsidies.
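For readers unfamiliar with the Eisenberg-Noe framework this builds on: clearing payments form a fixed point where each node pays the minimum of its nominal liabilities and its external assets plus incoming payments. A NumPy sketch, with a made-up 3-bank example:

```python
import numpy as np

def clearing_vector(L, c, tol=1e-10, max_iter=1000):
    """L[i, j]: nominal liability of node i to node j; c[i]: external assets of i."""
    p_bar = L.sum(axis=1)                          # total nominal liabilities per node
    with np.errstate(divide="ignore", invalid="ignore"):
        Pi = np.where(p_bar[:, None] > 0, L / p_bar[:, None], 0.0)  # relative liabilities
    p = p_bar.copy()
    for _ in range(max_iter):
        # Each node pays min(what it owes, external assets + incoming payments).
        p_new = np.minimum(p_bar, c + Pi.T @ p)
        if np.abs(p_new - p).max() < tol:
            break
        p = p_new
    return p

L = np.array([[0, 2, 1], [1, 0, 1], [0, 1, 0]], float)   # made-up liabilities
c = np.array([0.5, 0.3, 0.2])                            # made-up external assets
print(clearing_vector(L, c))   # nodes paying less than p_bar are in default
```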
[ { "version": "v1", "created": "Mon, 14 Jun 2021 16:17:50 GMT" }, { "version": "v2", "created": "Tue, 31 Aug 2021 15:03:31 GMT" }, { "version": "v3", "created": "Mon, 20 Sep 2021 19:54:09 GMT" }, { "version": "v4", "created": "Fri, 29 Apr 2022 13:50:06 GMT" } ]
2022-05-02T00:00:00
[ [ "Papachristou", "Marios", "" ], [ "Kleinberg", "Jon", "" ] ]
new_dataset
0.994723
2109.07703
Guanxiong Chen
Guanxiong Chen, Haoyu Yang and Ian M. Mitchell
ROS-X-Habitat: Bridging the ROS Ecosystem with Embodied AI
Camera-ready version submitted to Canadian Conference on Computer and Robot Vision (CRV) 2022
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
We introduce ROS-X-Habitat, a software interface that bridges the AI Habitat platform for embodied learning-based agents with other robotics resources via ROS. This interface not only offers standardized communication protocols between embodied agents and simulators, but also enables physically realistic and photorealistic simulation that benefits the training and/or testing of vision-based embodied agents. With this interface, roboticists can evaluate their own Habitat RL agents in another ROS-based simulator, or use Habitat Sim v2 as the test bed for their own robotic algorithms. Through in silico experiments, we demonstrate that ROS-X-Habitat has minimal impact on the navigation performance and simulation speed of a Habitat RGBD agent; that a standard set of ROS mapping, planning and navigation tools can run in Habitat Sim v2; and that a Habitat agent can run in the standard ROS simulator Gazebo.
[ { "version": "v1", "created": "Thu, 16 Sep 2021 03:53:52 GMT" }, { "version": "v2", "created": "Fri, 17 Sep 2021 04:27:25 GMT" }, { "version": "v3", "created": "Fri, 29 Apr 2022 06:11:42 GMT" } ]
2022-05-02T00:00:00
[ [ "Chen", "Guanxiong", "" ], [ "Yang", "Haoyu", "" ], [ "Mitchell", "Ian M.", "" ] ]
new_dataset
0.973579
2111.04576
Malintha Fernando
Malintha Fernando, Ransalu Senanayake, Martin Swany
CoCo Games: Graphical Game-Theoretic Swarm Control for Communication-Aware Coverage
8 pages, 7 figures
2022 - IEEE Robotics and Automation Letters
10.1109/LRA.2022.3160968
null
cs.RO cs.AI cs.SY eess.SY
http://creativecommons.org/licenses/by-sa/4.0/
We propose a novel framework for real-time communication-aware coverage control in networked robot swarms. Our framework unifies the robot dynamics with network-level message-routing to reach consensus on swarm formations in the presence of communication uncertainties by leveraging local information. Specifically, we formulate the communication-aware coverage as a cooperative graphical game, and use variational inference to reach mixed strategy Nash equilibria of the stage games. We experimentally validate the proposed approach in a mobile ad-hoc wireless network scenario using teams of aerial vehicles and terrestrial user equipment (UE) operating over a large geographic region of interest. We show that our approach can provide wireless coverage to stationary and mobile UEs under realistic network conditions.
[ { "version": "v1", "created": "Mon, 8 Nov 2021 15:37:15 GMT" }, { "version": "v2", "created": "Fri, 8 Apr 2022 15:08:38 GMT" }, { "version": "v3", "created": "Tue, 12 Apr 2022 15:17:25 GMT" }, { "version": "v4", "created": "Mon, 25 Apr 2022 16:10:58 GMT" }, { "version": "v5", "created": "Thu, 28 Apr 2022 20:29:40 GMT" } ]
2022-05-02T00:00:00
[ [ "Fernando", "Malintha", "" ], [ "Senanayake", "Ransalu", "" ], [ "Swany", "Martin", "" ] ]
new_dataset
0.997076
2111.11535
Kanav Vats
Kanav Vats, William McNally, Pascale Walters, David A. Clausi, John S. Zelek
Ice hockey player identification via transformers and weakly supervised learning
CVSports 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Identifying players in video is a foundational step in computer vision-based sports analytics. Obtaining player identities is essential for analyzing the game and is used in downstream tasks such as game event recognition. Transformers are the existing standard in Natural Language Processing (NLP) and are swiftly gaining traction in computer vision. Motivated by the increasing success of transformers in computer vision, in this paper, we introduce a transformer network for recognizing players through their jersey numbers in broadcast National Hockey League (NHL) videos. The transformer takes temporal sequences of player frames (also called player tracklets) as input and outputs the probabilities of jersey numbers present in the frames. The proposed network performs better than the previous benchmark on the dataset used. We implement a weakly-supervised training approach by generating approximate frame-level labels for jersey number presence and use the frame-level labels for faster training. We also utilize player shifts available in the NHL play-by-play data by reading the game time using optical character recognition (OCR) to get the players on the ice rink at a certain game time. Using player shifts improved the player identification accuracy by 6%.
[ { "version": "v1", "created": "Mon, 22 Nov 2021 21:10:26 GMT" }, { "version": "v2", "created": "Thu, 28 Apr 2022 18:35:01 GMT" } ]
2022-05-02T00:00:00
[ [ "Vats", "Kanav", "" ], [ "McNally", "William", "" ], [ "Walters", "Pascale", "" ], [ "Clausi", "David A.", "" ], [ "Zelek", "John S.", "" ] ]
new_dataset
0.998557
2201.05510
Youde Liu
Youde Liu, Jian Guan, Qiaoxi Zhu and Wenwu Wang
Anomalous Sound Detection using Spectral-Temporal Information Fusion
To appear at ICASSP 2022
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unsupervised anomalous sound detection aims to detect unknown abnormal sounds of machines from normal sounds. However, the state-of-the-art approaches are not always stable and perform dramatically differently even for machines of the same type, making them impractical for general applications. This paper proposes a spectral-temporal fusion based self-supervised method to model the features of normal sound, which improves the stability and performance consistency in the detection of anomalous sounds from individual machines, even of the same type. Experiments on the DCASE 2020 Challenge Task 2 dataset show that the proposed method achieved 81.39\%, 83.48\%, 98.22\% and 98.83\% in terms of the minimum AUC (worst-case detection performance amongst individuals) on four types of real machines (fan, pump, slider and valve), respectively, giving 31.79\%, 17.78\%, 10.42\% and 21.13\% improvement compared to the state-of-the-art method, i.e., Glow\_Aff. Moreover, the proposed method improves the AUC (average performance of individuals) for all the types of machines in the dataset. The source code is available at https://github.com/liuyoude/STgram_MFN
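For orientation, a baseline-shaped sketch of self-supervised anomalous sound detection: log-mel features plus an autoencoder whose reconstruction error serves as the anomaly score. This mirrors the common DCASE baseline rather than the spectral-temporal fusion model above; all sizes are illustrative.

```python
import torch
import torch.nn as nn
import torchaudio

melspec = torchaudio.transforms.MelSpectrogram(sample_rate=16000, n_mels=128)

def log_mel(wave):                       # wave: (1, samples) at 16 kHz
    return torch.log(melspec(wave) + 1e-8).squeeze(0).T   # (frames, 128)

# Trained (not shown) on normal machine sounds only, so anomalies reconstruct poorly.
ae = nn.Sequential(
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 8), nn.ReLU(),         # bottleneck
    nn.Linear(8, 64), nn.ReLU(),
    nn.Linear(64, 128),
)

def anomaly_score(wave):
    x = log_mel(wave)
    return nn.functional.mse_loss(ae(x), x).item()   # high error => anomalous
```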
[ { "version": "v1", "created": "Fri, 14 Jan 2022 15:29:47 GMT" }, { "version": "v2", "created": "Wed, 27 Apr 2022 08:40:47 GMT" }, { "version": "v3", "created": "Fri, 29 Apr 2022 02:30:24 GMT" } ]
2022-05-02T00:00:00
[ [ "Liu", "Youde", "" ], [ "Guan", "Jian", "" ], [ "Zhu", "Qiaoxi", "" ], [ "Wang", "Wenwu", "" ] ]
new_dataset
0.976236
2202.01284
Wenzel Jakob
Wenzel Jakob, S\'ebastien Speierer, Nicolas Roussel, Delio Vicini
Dr.Jit: A Just-In-Time Compiler for Differentiable Rendering
To appear at SIGGRAPH 2022
ACM Transactions on Graphics (Proceedings of SIGGRAPH 2022)
10.1145/3528223.3530099
null
cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Dr.Jit is a new just-in-time compiler for physically based rendering and its derivative. Dr.Jit expedites research on these topics in two ways: first, it traces high-level simulation code (e.g., written in Python) and aggressively simplifies and specializes the resulting program representation, producing data-parallel kernels with state-of-the-art performance on CPUs and GPUs. Second, it simplifies the development of differentiable rendering algorithms. Efficient methods in this area turn the derivative of a simulation into a simulation of the derivative. Dr.Jit provides fine-grained control over the process of automatic differentiation to help with this transformation. Specialization is particularly helpful in the context of differentiation, since large parts of the simulation ultimately do not influence the computed gradients. Dr.Jit tracks data dependencies globally to find and remove redundant computation.
[ { "version": "v1", "created": "Wed, 2 Feb 2022 21:13:42 GMT" }, { "version": "v2", "created": "Thu, 28 Apr 2022 18:39:27 GMT" } ]
2022-05-02T00:00:00
[ [ "Jakob", "Wenzel", "" ], [ "Speierer", "Sébastien", "" ], [ "Roussel", "Nicolas", "" ], [ "Vicini", "Delio", "" ] ]
new_dataset
0.994198
2202.02013
Vijini Pilana Liyanage
Vijini Liyanage, Davide Buscaldi, Adeline Nazarenko
A Benchmark Corpus for the Detection of Automatically Generated Text in Academic Publications
9 pages including references, submitted to LREC 2022. arXiv admin note: text overlap with arXiv:2110.10577 by other authors
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Automatic text generation based on neural language models has achieved performance levels that make the generated text almost indistinguishable from text written by humans. Despite the value that text generation can have in various applications, it can also be employed for malicious tasks. The diffusion of such practices represents a threat to the quality of academic publishing. To address these problems, we propose in this paper two datasets of artificially generated research content: a completely synthetic dataset and a partial text substitution dataset. In the first case, the content is completely generated by the GPT-2 model after a short prompt extracted from original papers. The partial or hybrid dataset is created by replacing several sentences of abstracts with sentences that are generated by the Arxiv-NLP model. We evaluate the quality of the datasets by comparing the generated texts to aligned original texts using fluency metrics such as BLEU and ROUGE. The more natural the artificial texts seem, the more difficult they are to detect and the better the benchmark is. We also evaluate the difficulty of the task of distinguishing original from generated text by using state-of-the-art classification models.
[ { "version": "v1", "created": "Fri, 4 Feb 2022 08:16:56 GMT" }, { "version": "v2", "created": "Fri, 29 Apr 2022 12:04:33 GMT" } ]
2022-05-02T00:00:00
[ [ "Liyanage", "Vijini", "" ], [ "Buscaldi", "Davide", "" ], [ "Nazarenko", "Adeline", "" ] ]
new_dataset
0.996373
2202.05735
Kevin Kotzen
Kevin Kotzen, Peter H. Charlton, Sharon Salabi, Lea Amar, Amir Landesberg and Joachim A. Behar
SleepPPG-Net: a deep learning algorithm for robust sleep staging from continuous photoplethysmography
11 pages, 10 figures
null
null
null
cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
Introduction: Sleep staging is an essential component in the diagnosis of sleep disorders and management of sleep health. It is traditionally measured in a clinical setting and requires a labor-intensive labeling process. We hypothesize that it is possible to perform robust 4-class sleep staging using the raw photoplethysmography (PPG) time series and modern advances in deep learning (DL). Methods: We used two publicly available sleep databases that included raw PPG recordings, totalling 2,374 patients and 23,055 hours. We developed SleepPPG-Net, a DL model for 4-class sleep staging from the raw PPG time series. SleepPPG-Net was trained end-to-end and consists of a residual convolutional network for automatic feature extraction and a temporal convolutional network to capture long-range contextual information. We benchmarked the performance of SleepPPG-Net against models based on the best-reported state-of-the-art (SOTA) algorithms. Results: When benchmarked on a held-out test set, SleepPPG-Net obtained a median Cohen's Kappa ($\kappa$) score of 0.75 against 0.69 for the best SOTA approach. SleepPPG-Net showed good generalization performance to an external database, obtaining a $\kappa$ score of 0.74 after transfer learning. Perspective: Overall, SleepPPG-Net provides new SOTA performance. In addition, performance is high enough to open the path to the development of wearables that meet the requirements for usage in clinical applications such as the diagnosis and monitoring of obstructive sleep apnea.
[ { "version": "v1", "created": "Fri, 11 Feb 2022 16:17:42 GMT" }, { "version": "v2", "created": "Fri, 18 Feb 2022 19:13:21 GMT" }, { "version": "v3", "created": "Tue, 15 Mar 2022 16:55:41 GMT" }, { "version": "v4", "created": "Fri, 29 Apr 2022 15:00:18 GMT" } ]
2022-05-02T00:00:00
[ [ "Kotzen", "Kevin", "" ], [ "Charlton", "Peter H.", "" ], [ "Salabi", "Sharon", "" ], [ "Amar", "Lea", "" ], [ "Landesberg", "Amir", "" ], [ "Behar", "Joachim A.", "" ] ]
new_dataset
0.997813
2203.06229
Eric Koskinen
Adam Chen, Parisa Fathololumi, Eric Koskinen, Jared Pincus
Veracity: Declarative Multicore Programming with Commutativity
null
null
null
null
cs.PL
http://creativecommons.org/licenses/by-nc-nd/4.0/
There is an ongoing effort to provide programming abstractions that ease the burden of exploiting multicore hardware. Many programming abstractions (e.g., concurrent objects, transactional memory, etc.) simplify matters, but still involve intricate engineering. We argue that some of the difficulty of multicore programming can be ameliorated through a declarative programming style in which programmers directly express the independence of fragments of sequential programs. In our proposed paradigm, programmers write programs in a familiar, sequential manner, with the added ability to explicitly express the conditions under which code fragments sequentially commute. Putting such commutativity conditions into source code offers a new entry point for a compiler to exploit the known connection between commutativity and parallelism. We give a semantics for the programmer's sequential perspective and, under a correctness condition, find that a compiler-transformed parallel execution is equivalent to the sequential semantics. Serializability/linearizability are not the right fit for this condition, so we introduce scoped serializability and show how it can be enforced with lock synthesis techniques. We next describe a technique for automatically verifying and synthesizing commute conditions via a new reduction from our commute blocks to logical specifications, upon which symbolic commutativity reasoning can be performed. We implemented our work in a new language called Veracity, implemented in Multicore OCaml. We show that commutativity conditions can be automatically generated across a variety of new benchmark programs, confirm the expectation that concurrency speedups can be seen as the computation increases, and apply our work to a small in-memory filesystem and an adaptation of a crowdfund blockchain smart contract.
[ { "version": "v1", "created": "Fri, 11 Mar 2022 20:13:32 GMT" }, { "version": "v2", "created": "Fri, 29 Apr 2022 15:04:14 GMT" } ]
2022-05-02T00:00:00
[ [ "Chen", "Adam", "" ], [ "Fathololumi", "Parisa", "" ], [ "Koskinen", "Eric", "" ], [ "Pincus", "Jared", "" ] ]
new_dataset
0.99551
2204.08746
Maneet Singh
Maneet Singh, S.R.S. Iyengar, Akrati Saxena and Rishemjit Kaur
A Bi-level assessment of Twitter in predicting the results of an election: Delhi Assembly Elections 2020
15 pages, 11 figures and 2 tables
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Elections are the backbone of any democratic country, where voters elect candidates as their representatives. The emergence of social networking sites has provided a platform for political parties and their candidates to connect with voters in order to spread their political ideas. Our study aims to use Twitter to assess the outcome of the Delhi Assembly elections held in 2020, using a bi-level approach, i.e., at the level of political parties and of their candidates. We analyze the correlation of election results with the activities of different candidates and parties on Twitter and the response of voters to them, especially the mentions of, and sentiment of voters towards, a party. The Twitter profiles of the candidates are compared both at the party level and at the candidate level to evaluate their association with the outcome of the election. We observe that the number of followers and the replies to candidates' tweets are good indicators for predicting the actual election outcome. However, we observe that the number of tweets mentioning a party and the sentiment of voters towards the party shown in tweets are not aligned with the election result. We also use machine learning models on various features, such as linguistic features, word embeddings and moral dimensions, for predicting the election result (win or lose). The random forest model using tweet features provides promising results for predicting whether a tweet belongs to a winning or losing candidate.
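The final classification step described here can be pictured as below: a random forest over per-tweet features. The feature file and column names are invented stand-ins for the linguistic, embedding, and moral-dimension features used in the study.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("tweets.csv")                    # hypothetical per-tweet feature file
X = df[["n_followers", "n_replies", "sentiment",  # invented column names
        "moral_care", "moral_fairness"]]
y = df["candidate_won"]                           # 1 = tweet by a winning candidate

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
print("F1:", f1_score(y_te, clf.predict(X_te)))
```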
[ { "version": "v1", "created": "Tue, 19 Apr 2022 08:40:18 GMT" }, { "version": "v2", "created": "Fri, 29 Apr 2022 06:10:55 GMT" } ]
2022-05-02T00:00:00
[ [ "Singh", "Maneet", "" ], [ "Iyengar", "S. R. S.", "" ], [ "Saxena", "Akrati", "" ], [ "Kaur", "Rishemjit", "" ] ]
new_dataset
0.980492
2204.10687
Alfio Di Mauro
Alfio Di Mauro, Arpan Suravi Prasad, Zhikai Huang, Matteo Spallanzani, Francesco Conti, Luca Benini
SNE: an Energy-Proportional Digital Accelerator for Sparse Event-Based Convolutions
Accepted at DATE22
null
null
null
cs.AR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Event-based sensors are drawing increasing attention due to their high temporal resolution, low power consumption, and low bandwidth. To efficiently extract semantically meaningful information from the sparse data streams produced by such sensors, we present a 4.5 TOP/s/W digital accelerator capable of performing 4-bit-quantized event-based convolutional neural networks (eCNNs). Compared to standard convolutional engines, our accelerator performs a number of operations proportional to the number of events contained in the input data stream, ultimately achieving a high energy-to-information processing proportionality. On the IBM DVS-Gesture dataset, we report 80 uJ/inf and 261 uJ/inf when the input activity is 1.2% and 4.9%, respectively. Our accelerator consumes 0.221 pJ/SOP; to the best of our knowledge, this is the lowest energy per operation reported for a digital neuromorphic engine.
[ { "version": "v1", "created": "Fri, 22 Apr 2022 13:05:02 GMT" }, { "version": "v2", "created": "Fri, 29 Apr 2022 16:54:45 GMT" } ]
2022-05-02T00:00:00
[ [ "Di Mauro", "Alfio", "" ], [ "Prasad", "Arpan Suravi", "" ], [ "Huang", "Zhikai", "" ], [ "Spallanzani", "Matteo", "" ], [ "Conti", "Francesco", "" ], [ "Benini", "Luca", "" ] ]
new_dataset
0.971212
2204.11333
Antonio Casares
Antonio Casares, Thomas Colcombet, Karoliina Lehtinen
On the size of good-for-games Rabin automata and its link with the memory in Muller games
null
null
null
null
cs.FL
http://creativecommons.org/licenses/by/4.0/
In this paper, we look at good-for-games Rabin automata that recognise a Muller language (a language that is entirely characterised by the set of letters that appear infinitely often in each word). We establish that minimal such automata are exactly of the same size as the minimal memory required for winning Muller games that have this language as their winning condition. We show how to effectively construct such minimal automata. Finally, we establish that these automata can be exponentially more succinct than equivalent deterministic ones, thus proving as a consequence that chromatic memory for winning a Muller game can be exponentially larger than unconstrained memory.
[ { "version": "v1", "created": "Sun, 24 Apr 2022 18:40:45 GMT" }, { "version": "v2", "created": "Fri, 29 Apr 2022 15:07:13 GMT" } ]
2022-05-02T00:00:00
[ [ "Casares", "Antonio", "" ], [ "Colcombet", "Thomas", "" ], [ "Lehtinen", "Karoliina", "" ] ]
new_dataset
0.999438
2204.11736
Yang An
Yang An, Bo Jin, Xiaopeng Wei
KnowAugNet: Multi-Source Medical Knowledge Augmented Medication Prediction Network with Multi-Level Graph Contrastive Learning
null
null
null
null
cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Predicting medications is a crucial task in many intelligent healthcare systems. It can assist doctors in making informed medication decisions for patients according to electronic medical records (EMRs). However, medication prediction is a challenging data mining task due to the complex relations between medical codes. Most existing studies focus on utilizing the inherent relations between homogeneous codes of a medical ontology graph to enhance their representations using supervised methods, and few studies pay attention to the valuable relations between heterogeneous or homogeneous medical codes from historical EMRs, which further limits prediction performance and application scenarios. To address these limitations, this paper proposes KnowAugNet, a multi-source medical knowledge augmented medication prediction network that can fully capture the diverse relations between medical codes via a multi-level graph contrastive learning framework. Specifically, KnowAugNet first leverages graph contrastive learning with a graph attention network as the encoder to capture the implicit relations between homogeneous medical codes from the medical ontology graph, obtaining knowledge-augmented medical code embedding vectors. It then utilizes graph contrastive learning with a weighted graph convolutional network as the encoder to capture the correlative relations between homogeneous or heterogeneous medical codes from the constructed medical prior relation graph, obtaining relation-augmented medical code embedding vectors. Finally, the augmented medical code embedding vectors and the supervised medical code embedding vectors are retrieved and input to a sequential learning network to capture the temporal relations of medical codes and predict medications for patients.
[ { "version": "v1", "created": "Mon, 25 Apr 2022 15:47:41 GMT" }, { "version": "v2", "created": "Thu, 28 Apr 2022 18:03:43 GMT" } ]
2022-05-02T00:00:00
[ [ "An", "Yang", "" ], [ "Jin", "Bo", "" ], [ "Wei", "Xiaopeng", "" ] ]
new_dataset
0.997176
2204.12061
Diptesh Kanojia
Leonardo Zilio, Hadeel Saadany, Prashant Sharma, Diptesh Kanojia, Constantin Or\u{a}san
PLOD: An Abbreviation Detection Dataset for Scientific Documents
Accepted at LREC 2022, 8 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The detection and extraction of abbreviations from unstructured texts can help to improve the performance of Natural Language Processing tasks, such as machine translation and information retrieval. However, in terms of publicly available datasets, there is not enough data for training deep-neural-network-based models to the point of generalising well. This paper presents PLOD, a large-scale dataset for abbreviation detection and extraction that contains 160k+ segments automatically annotated with abbreviations and their long forms. We performed manual validation over a set of instances and a complete automatic validation for this dataset. We then used it to generate several baseline models for detecting abbreviations and long forms. The best models achieved an F1-score of 0.92 for abbreviations and 0.89 for detecting their corresponding long forms. We release this dataset along with our code and all the models publicly at https://github.com/surrey-nlp/PLOD-AbbreviationDetection
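Abbreviation detection of this kind is typically framed as BIO token classification. A sketch of that setup with Hugging Face transformers follows; the label names, base encoder, and (untrained) predictions are assumptions rather than the exact recipe used for these baselines.

```python
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

# Assumed BIO scheme: AC = abbreviation, LF = long form.
labels = ["O", "B-AC", "I-AC", "B-LF", "I-LF"]
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForTokenClassification.from_pretrained(
    "bert-base-uncased", num_labels=len(labels))   # head is random until fine-tuned

sent = "Natural Language Processing (NLP) benefits from abbreviation detection."
inputs = tok(sent, return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(-1)[0]    # meaningless before fine-tuning

tokens = tok.convert_ids_to_tokens(inputs["input_ids"][0].tolist())
print(list(zip(tokens, [labels[p] for p in pred.tolist()])))
```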
[ { "version": "v1", "created": "Tue, 26 Apr 2022 03:52:21 GMT" }, { "version": "v2", "created": "Thu, 28 Apr 2022 19:08:19 GMT" } ]
2022-05-02T00:00:00
[ [ "Zilio", "Leonardo", "" ], [ "Saadany", "Hadeel", "" ], [ "Sharma", "Prashant", "" ], [ "Kanojia", "Diptesh", "" ], [ "Orăsan", "Constantin", "" ] ]
new_dataset
0.999846
2204.13743
Diptesh Kanojia
Rudra Murthy, Pallab Bhattacharjee, Rahul Sharnagat, Jyotsana Khatri, Diptesh Kanojia, Pushpak Bhattacharyya
HiNER: A Large Hindi Named Entity Recognition Dataset
Accepted at LREC 2022, 8 pages
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Named Entity Recognition (NER) is a foundational NLP task that aims to provide class labels like Person, Location, Organisation, Time, and Number to words in free text. Named Entities can also be multi-word expressions, where the additional I-O-B annotation information helps label them during the NER annotation process. While English and European languages have considerable annotated data for the NER task, Indian languages lag behind on that front -- both in terms of quantity and adherence to annotation standards. This paper releases a significantly sized, standard-abiding Hindi NER dataset containing 109,146 sentences and 2,220,856 tokens, annotated with 11 tags. We discuss the dataset statistics in all essential detail and provide an in-depth analysis of the NER tag-set used with our data. The statistics of the tag-set in our dataset show a healthy per-tag distribution, especially for prominent classes like Person, Location and Organisation. Since the proof of resource-effectiveness is in building models with the resource and testing the model on benchmark data and against the leader-board entries in shared tasks, we do the same with the aforesaid data. We use different language models to perform the sequence labelling task for NER and show the efficacy of our data by performing a comparative evaluation with models trained on another dataset available for the Hindi NER task. Our dataset helps achieve a weighted F1 score of 88.78 with all the tags and 92.22 when we collapse the tag-set, as discussed in the paper. To the best of our knowledge, no available dataset meets the standards of volume (amount) and variability (diversity), as far as Hindi NER is concerned. We fill this gap through this work, which we hope will significantly help NLP for Hindi. We release this dataset with our code and models at https://github.com/cfiltnlp/HiNER
[ { "version": "v1", "created": "Thu, 28 Apr 2022 19:14:21 GMT" } ]
2022-05-02T00:00:00
[ [ "Murthy", "Rudra", "" ], [ "Bhattacharjee", "Pallab", "" ], [ "Sharnagat", "Rahul", "" ], [ "Khatri", "Jyotsana", "" ], [ "Kanojia", "Diptesh", "" ], [ "Bhattacharyya", "Pushpak", "" ] ]
new_dataset
0.999655
2204.13848
Daniel Deutsch
Daniel Deutsch and Dan Roth
Repro: An Open-Source Library for Improving the Reproducibility and Usability of Publicly Available Research Code
null
null
null
null
cs.CL cs.AI cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Repro, an open-source library which aims at improving the reproducibility and usability of research code. The library provides a lightweight Python API for running software released by researchers within Docker containers which contain the exact required runtime configuration and dependencies for the code. Because the environment setup for each package is handled by Docker, users do not have to do any configuration themselves. Once Repro is installed, users can run the code for the 30+ papers currently supported by the library. We hope researchers see the value provided to others by including their research code in Repro and consider adding support for their own research code.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 01:54:54 GMT" } ]
2022-05-02T00:00:00
[ [ "Deutsch", "Daniel", "" ], [ "Roth", "Dan", "" ] ]
new_dataset
0.999677
2204.13915
Pavel P\v{r}ib\'a\v{n}
Pavel P\v{r}ib\'a\v{n}, Josef Steinberger
Czech Dataset for Cross-lingual Subjectivity Classification
Accepted to LREC2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we introduce a new Czech subjectivity dataset of 10k manually annotated subjective and objective sentences from movie reviews and descriptions. Our prime motivation is to provide a reliable dataset that can be used with the existing English dataset as a benchmark to test the ability of pre-trained multilingual models to transfer knowledge between Czech and English and vice versa. Two annotators annotated the dataset, reaching a Cohen's $\kappa$ inter-annotator agreement of 0.83. To the best of our knowledge, this is the first subjectivity dataset for the Czech language. We also created an additional dataset that consists of 200k automatically labeled sentences. Both datasets are freely available for research purposes. Furthermore, we fine-tune five pre-trained BERT-like models to set a monolingual baseline for the new dataset, achieving 93.56% accuracy. We fine-tune models on the existing English dataset, for which we obtain results that are on par with the current state-of-the-art results. Finally, we perform zero-shot cross-lingual subjectivity classification between Czech and English to verify the usability of our dataset as a cross-lingual benchmark. We compare and discuss the cross-lingual and monolingual results and the ability of multilingual models to transfer knowledge between languages.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 07:31:46 GMT" } ]
2022-05-02T00:00:00
[ [ "Přibáň", "Pavel", "" ], [ "Steinberger", "Josef", "" ] ]
new_dataset
0.999761
2204.13973
Zhongyuan Hau
Zhongyuan Hau, Soteris Demetriou, Emil C. Lupu
Using 3D Shadows to Detect Object Hiding Attacks on Autonomous Vehicle Perception
To appear in the Proceedings of the 2022 IEEE Security and Privacy Workshop on the Internet of Safe Things (SafeThings 2022)
null
null
null
cs.CV cs.CR
http://creativecommons.org/licenses/by/4.0/
Autonomous Vehicles (AVs) are mostly reliant on LiDAR sensors, which enable spatial perception of their surroundings and help make driving decisions. Recent works demonstrated attacks that aim to hide objects from AV perception, which can result in severe consequences. 3D shadows are regions void of measurements in 3D point clouds that arise from occlusions of objects in a scene. 3D shadows were proposed as a physical invariant valuable for detecting spoofed or fake objects. In this work, we leverage 3D shadows to locate obstacles that are hidden from object detectors. We achieve this by searching for void regions and locating the obstacles that cause these shadows. Our proposed methodology can be used to detect an object that has been hidden by an adversary, as such objects, while hidden from 3D object detectors, still induce shadow artifacts in 3D point clouds, which we use for obstacle detection. We show that using 3D shadows for obstacle detection achieves high accuracy in matching shadows to their objects and provides precise prediction of an obstacle's distance from the ego-vehicle.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 09:49:29 GMT" } ]
2022-05-02T00:00:00
[ [ "Hau", "Zhongyuan", "" ], [ "Demetriou", "Soteris", "" ], [ "Lupu", "Emil C.", "" ] ]
new_dataset
0.99086
2204.13979
Mohammad Mehdi Jaziriyan
Mohammad Mehdi Jaziriyan, Ahmad Akbari, Hamed Karbasi
ExaASC: A General Target-Based Stance Detection Corpus in Arabic Language
6 pages, 1 figure, 4 tables. Accepted at ICCKE 2021
2021 11th International Conference on Computer Engineering and Knowledge (ICCKE)
10.1109/ICCKE54056.2021.9721486
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Target-based stance detection is the task of finding a stance toward a given target. Twitter is one of the primary sources of political discussion on social media and one of the best resources for analyzing stance toward entities. This work proposes a new method for target-based stance detection that uses the stance of replies toward the most salient, disputed target in the source tweet. This target is detected with respect to the source tweet itself and is not limited to a set of pre-defined targets, which is the usual approach of current state-of-the-art methods. Our proposed approach resulted in a new corpus called ExaASC for Arabic, one of the low-resource languages in this field. Finally, we used BERT to evaluate our corpus and reached a 70.69 macro F-score. This shows that our data and model can work in a general target-based stance detection system. The corpus is publicly available.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 10:03:51 GMT" } ]
2022-05-02T00:00:00
[ [ "Jaziriyan", "Mohammad Mehdi", "" ], [ "Akbari", "Ahmad", "" ], [ "Karbasi", "Hamed", "" ] ]
new_dataset
0.995802
2204.14034
Haotang Li
Haotang Li, Shengtao Guo, Kailin Lyu, Xiao Yang, Tianchen Chen, Jianqing Zhu, Huanqiang Zeng
A Challenging Benchmark of Anime Style Recognition
accepted by CVPRW 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given two images of different anime roles, anime style recognition (ASR) aims to learn abstract painting style to determine whether the two images are from the same work, which is an interesting but challenging problem. Unlike biometric recognition, such as face recognition, iris recognition, and person re-identification, ASR suffers from a much larger semantic gap but receives less attention. In this paper, we propose a challenging ASR benchmark. Firstly, we collect a large-scale ASR dataset (LSASRD), which contains 20,937 images of 190 anime works, where each work has at least ten different roles. In addition to its large scale, LSASRD contains a list of challenging factors, such as complex illumination, various poses, theatrical colors and exaggerated compositions. Secondly, we design a cross-role protocol to evaluate ASR performance, in which query and gallery images must come from different roles, to validate that an ASR model learns abstract painting style rather than discriminative features of roles. Finally, we apply two powerful person re-identification methods, namely AGW and TransReID, to construct baseline performance on LSASRD. Surprisingly, the recent transformer model (i.e., TransReID) only acquires 42.24% mAP on LSASRD. Therefore, we believe that the ASR task, with its huge semantic gap, deserves deep and long-term research. We will open-source our dataset and code at https://github.com/nkjcqvcpi/ASR.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 12:09:42 GMT" } ]
2022-05-02T00:00:00
[ [ "Li", "Haotang", "" ], [ "Guo", "Shengtao", "" ], [ "Lyu", "Kailin", "" ], [ "Yang", "Xiao", "" ], [ "Chen", "Tianchen", "" ], [ "Zhu", "Jianqing", "" ], [ "Zeng", "Huanqiang", "" ] ]
new_dataset
0.999859
2204.14044
Minyi Zhao
Minyi Zhao, Miao Wang, Fan Bai, Bingjia Li, Jie Wang, Shuigeng Zhou
C3-STISR: Scene Text Image Super-resolution with Triple Clues
Accepted by IJCAI 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Scene text image super-resolution (STISR) has been regarded as an important pre-processing task for text recognition from low-resolution scene text images. Most recent approaches use the recognizer's feedback as a clue to guide super-resolution. However, directly using the recognition clue has two problems: 1) Compatibility: it is in the form of a probability distribution, which has an obvious modal gap with STISR, a pixel-level task; 2) Inaccuracy: it usually contains wrong information, and thus can mislead the main task and degrade super-resolution performance. In this paper, we present a novel method, C3-STISR, that jointly exploits the recognizer's feedback, visual, and linguistical information as clues to guide super-resolution. Here, the visual clue comes from images of the texts predicted by the recognizer, which are informative and more compatible with the STISR task, while the linguistical clue is generated by a pre-trained character-level language model, which is able to correct the predicted texts. We design effective extraction and fusion mechanisms for the triple cross-modal clues to generate comprehensive and unified guidance for super-resolution. Extensive experiments on TextZoom show that C3-STISR outperforms the SOTA methods in fidelity and recognition performance. Code is available at https://github.com/zhaominyiz/C3-STISR.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 12:39:51 GMT" } ]
2022-05-02T00:00:00
[ [ "Zhao", "Minyi", "" ], [ "Wang", "Miao", "" ], [ "Bai", "Fan", "" ], [ "Li", "Bingjia", "" ], [ "Wang", "Jie", "" ], [ "Zhou", "Shuigeng", "" ] ]
new_dataset
0.999587
2204.14116
Benjamin Provan-Bessell
Benjamin Provan-Bessell, Marco Dalla, Andrea Visentin, Barry O'Sullivan
SATfeatPy -- A Python-based Feature Extraction System for Satisfiability
8 pages, 2 figures, code available at https://github.com/bprovanbessell/SATfeatPy
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Feature extraction is a fundamental task in the application of machine learning methods to SAT solving. It is used in algorithm selection and configuration for solver portfolios and in satisfiability classification. Many approaches have been proposed to extract meaningful attributes from CNF instances. Most of them lack a working or updated implementation, and their limited descriptions lack clarity, affecting reproducibility. Furthermore, the literature lacks a comparison among the features. This paper introduces SATfeatPy, a library that offers feature extraction techniques for SAT problems in CNF form. The package offers implementations of all the structural and statistical features from the major papers in the field. The library is provided as an up-to-date, easy-to-use Python package alongside a detailed feature description. We show the high accuracy of SAT/UNSAT and problem-category classification using five sets of features generated with our library from a dataset of 3000 SAT and UNSAT instances over ten different classes of problems. Finally, we compare the usefulness of the features and their importance for predicting a SAT instance's original structure in an ablation study.
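To illustrate the mechanics of CNF feature extraction, the sketch below parses a DIMACS file and derives a few simple structural statistics; real feature sets such as SATzilla's (and SATfeatPy's) are far richer.

```python
import statistics

def basic_cnf_features(path):
    clauses = []
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if not line or line[0] in "cp%":   # skip comments, the problem line, end marker
                continue
            lits = [int(x) for x in line.split() if x != "0"]
            if lits:
                clauses.append(lits)
    n_vars = max(abs(l) for cl in clauses for l in cl)
    lens = [len(cl) for cl in clauses]
    pos = sum(l > 0 for cl in clauses for l in cl)
    return {
        "n_vars": n_vars,
        "n_clauses": len(clauses),
        "clause_var_ratio": len(clauses) / n_vars,
        "mean_clause_len": statistics.mean(lens),
        "positive_literal_fraction": pos / sum(lens),
    }
```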
[ { "version": "v1", "created": "Fri, 29 Apr 2022 14:10:01 GMT" } ]
2022-05-02T00:00:00
[ [ "Provan-Bessell", "Benjamin", "" ], [ "Dalla", "Marco", "" ], [ "Visentin", "Andrea", "" ], [ "O'Sullivan", "Barry", "" ] ]
new_dataset
0.956773
2204.14204
Sunanda Thunder
Thomas Egler, Hans Dittmann, Sunanda Thunder and Artur Useinov
3T-1R Analog Write and Digital Read of MRAM for RNG and Low Power Memory Application
null
null
null
null
cs.ET
http://creativecommons.org/licenses/by/4.0/
This work presents the integration of an MTJ with a 30 nm FinFET for low-voltage analog write operations and readout optimization for the p-bit, or true random number generator (TRNG), where the induced p-bit, the probabilistic state of the magnetic tunnel junction (MTJ), is detected within only a single computational period. The period contains two sub-cycles: a write cycle and a joined read & reset cycle. The MTJ operation becomes stochastic and independent after calibration at the desired working point against factors that can induce signal deviations, e.g., temperature, material degradation, or external magnetic fields.
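The behaviour described can be pictured with a toy simulation: a p-bit is a Bernoulli read-out whose bias follows a sigmoid of the calibrated write input, reducing to a TRNG at the unbiased working point. This models the abstract behaviour only, not the 3T-1R circuit; the sigmoid slope is an assumption.

```python
import numpy as np

rng = np.random.default_rng()

def p_bit(inputs, beta=1.0):
    """inputs: normalised write inputs; beta: assumed sigmoid slope.
    Returns one stochastic bit per input, with P(1) = sigmoid(beta * input)."""
    p_one = 1.0 / (1.0 + np.exp(-beta * np.asarray(inputs)))
    return (rng.random(p_one.shape) < p_one).astype(int)

bits = p_bit(np.zeros(10000))   # zero input = unbiased working point -> TRNG
print(bits.mean())              # ~0.5 when calibrated at the working point
```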
[ { "version": "v1", "created": "Wed, 27 Apr 2022 08:19:18 GMT" } ]
2022-05-02T00:00:00
[ [ "Egler", "Thomas", "" ], [ "Dittmann", "Hans", "" ], [ "Thunder", "Sunanda", "" ], [ "Useinov", "Artur", "" ] ]
new_dataset
0.996798
2204.14240
Pablo Azagra Millan
Pablo Azagra, Carlos Sostres, \'Angel Ferrandez, Luis Riazuelo, Clara Tomasini, Oscar Le\'on Barbed, Javier Morlana, David Recasens, Victor M. Batlle, Juan J. G\'omez-Rodr\'iguez, Richard Elvira, Julia L\'opez, Cristina Oriol, Javier Civera, Juan D. Tard\'os, Ana Cristina Murillo, Angel Lanas and Jos\'e M.M. Montiel
EndoMapper dataset of complete calibrated endoscopy procedures
11 pages, 7 figures, 4 tables
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Computer-assisted systems are becoming broadly used in medicine. In endoscopy, most research focuses on the automatic detection of polyps or other pathologies, while localization and navigation of the endoscope are performed entirely manually by physicians. To broaden this research and bring spatial Artificial Intelligence to endoscopies, data from complete procedures are needed. Such data will be used to build 3D mapping and localization systems that can perform special tasks such as detecting blind zones during exploration, providing automatic polyp measurements, guiding doctors to a polyp found in a previous exploration, and retrieving previous images of the same area aligned for easy comparison. These systems will improve the quality and precision of the procedures while lowering the burden on physicians. This paper introduces the EndoMapper dataset, the first collection of complete endoscopy sequences acquired during regular medical practice, including slow and careful screening explorations, making secondary use of medical data. Its original purpose is to facilitate the development and evaluation of Visual Simultaneous Localization and Mapping (VSLAM) methods on real endoscopy data. The first release of the dataset is composed of 59 sequences with more than 15 hours of video. It is also the first endoscopic dataset that includes both the computed geometric and photometric endoscope calibration and the original calibration videos. Meta-data and annotations associated with the dataset range from anatomical landmarks and descriptions of the procedure labeling, tool segmentation masks, and COLMAP 3D reconstructions, to simulated sequences with ground truth and meta-data related to special cases, such as sequences from the same patient. This information will improve research in endoscopic VSLAM, as well as other research lines, and create new ones.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 17:10:01 GMT" } ]
2022-05-02T00:00:00
[ [ "Azagra", "Pablo", "" ], [ "Sostres", "Carlos", "" ], [ "Ferrandez", "Ángel", "" ], [ "Riazuelo", "Luis", "" ], [ "Tomasini", "Clara", "" ], [ "Barbed", "Oscar León", "" ], [ "Morlana", "Javier", "" ], [ "Recasens", "David", "" ], [ "Batlle", "Victor M.", "" ], [ "Gómez-Rodríguez", "Juan J.", "" ], [ "Elvira", "Richard", "" ], [ "López", "Julia", "" ], [ "Oriol", "Cristina", "" ], [ "Civera", "Javier", "" ], [ "Tardós", "Juan D.", "" ], [ "Murillo", "Ana Cristina", "" ], [ "Lanas", "Angel", "" ], [ "Montiel", "José M. M.", "" ] ]
new_dataset
0.999609
2204.14244
Marcos V. Conde
Marcos V. Conde, Kerem Turgutlu
CLIP-Art: Contrastive Pre-training for Fine-Grained Art Classification
CVPR CVFAD Workshop 2021
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops, 2021, pp. 3956-3960
10.1109/CVPRW53098.2021.00444
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Existing computer vision research on artwork struggles with fine-grained attribute recognition and with the lack of curated annotated datasets, which are costly to create. To the best of our knowledge, we are among the first to use CLIP (Contrastive Language-Image Pre-Training) to train a neural network on a variety of artwork image and text description pairs. CLIP is able to learn directly from free-form art descriptions or, if available, curated fine-grained labels. The model's zero-shot capability allows it to predict an accurate natural language description for a given image without directly optimizing for the task. Our approach aims to solve two challenges: instance retrieval and fine-grained artwork attribute recognition. We use the iMet Dataset, which we consider the largest annotated artwork dataset. On this benchmark we achieve competitive results using only self-supervision.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 17:17:24 GMT" } ]
2022-05-02T00:00:00
[ [ "Conde", "Marcos V.", "" ], [ "Turgutlu", "Kerem", "" ] ]
new_dataset
0.991143
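A minimal sketch of the symmetric contrastive objective that CLIP-style pre-training optimizes over a batch of aligned image-text pairs. The embedding dimension, temperature value, and random inputs standing in for encoder outputs are illustrative assumptions, not the authors' exact setup.

import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of aligned (image, text) embedding pairs."""
    image_emb = F.normalize(image_emb, dim=-1)  # L2-normalize so dot products are cosine similarities
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.t() / temperature               # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)  # i-th image pairs with i-th text
    loss_images = F.cross_entropy(logits, targets)                # match each image to its text
    loss_texts = F.cross_entropy(logits.t(), targets)             # match each text to its image
    return (loss_images + loss_texts) / 2

# Toy usage with random tensors standing in for encoder outputs
img = torch.randn(8, 512)
txt = torch.randn(8, 512)
print(clip_contrastive_loss(img, txt))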
2204.14249
Kai Katsumata
Kai Katsumata and Duc Minh Vo and Hideki Nakayama
OSSGAN: Open-Set Semi-Supervised Image Generation
Accepted at CVPR 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a challenging training scheme for conditional GANs, called open-set semi-supervised image generation, where the training dataset consists of two parts: (i) labeled data and (ii) unlabeled data containing samples that belong to one of the labeled classes, namely a closed-set, and samples that belong to none of the labeled classes, namely an open-set. Unlike the existing semi-supervised image generation task, where unlabeled data contain only closed-set samples, our task is more general and lowers the data collection cost in practice by allowing open-set samples to appear. Thanks to entropy regularization, the classifier trained on labeled data is able to quantify sample-wise importance to cGAN training as a confidence score, allowing us to use all samples in the unlabeled data. We design OSSGAN, which provides decision clues to the discriminator based on whether an unlabeled image belongs to one or none of the classes of interest, smoothly integrating labeled and unlabeled data during training. The results of experiments on Tiny ImageNet and ImageNet show notable improvements over supervised BigGAN and semi-supervised methods. Our code is available at https://github.com/raven38/OSSGAN.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 17:26:09 GMT" } ]
2022-05-02T00:00:00
[ [ "Katsumata", "Kai", "" ], [ "Vo", "Duc Minh", "" ], [ "Nakayama", "Hideki", "" ] ]
new_dataset
0.999464
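One plausible reading of the confidence mechanism described above: an entropy-regularized classifier over the K labeled classes produces near-uniform predictions on open-set samples, so normalized entropy can be mapped to a per-sample weight for the discriminator loss. The sketch below is an assumption-laden illustration of that idea, not the released OSSGAN code; see the linked repository for the exact formulation.

import math
import torch
import torch.nn.functional as F

def entropy_confidence(logits: torch.Tensor) -> torch.Tensor:
    """Map classifier entropy over the K known classes to a weight in [0, 1]."""
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1)
    max_entropy = math.log(logits.size(-1))   # entropy of the uniform distribution
    return 1.0 - entropy / max_entropy        # near-uniform (open-set-like) -> weight near 0

def weighted_hinge_d_loss(d_out_real: torch.Tensor, confidence: torch.Tensor) -> torch.Tensor:
    """Hinge discriminator loss on unlabeled real samples, down-weighted by confidence."""
    return (confidence * F.relu(1.0 - d_out_real)).mean()

# Toy usage: a confidently classified sample gets weight near 1, an ambiguous one near 0
logits = torch.tensor([[8.0, 0.0, 0.0], [0.1, 0.0, 0.1]])  # K = 3 known classes
print(entropy_confidence(logits))  # approximately [0.99, 0.00]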
2204.14272
Chenyu You
Chenyu You, Nuo Chen, Fenglin Liu, Shen Ge, Xian Wu, Yuexian Zou
End-to-end Spoken Conversational Question Answering: Task, Dataset and Model
In Findings of NAACL 2022. arXiv admin note: substantial text overlap with arXiv:2010.08923
null
null
null
cs.CL cs.AI cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
In spoken question answering, systems are designed to answer questions from contiguous text spans within the related speech transcripts. However, the most natural way that humans seek or test their knowledge is through conversation. Therefore, we propose a new Spoken Conversational Question Answering task (SCQA), aiming to enable systems to model complex dialogue flows given speech documents. In this task, our main objective is to build a system that handles conversational questions based on audio recordings, and to explore how providing cues from additional modalities helps systems gather information. To this end, instead of directly adopting automatically generated speech transcripts, which are highly noisy, we propose a novel unified data distillation approach, DDNet, which effectively ingests cross-modal information to achieve fine-grained representations of the speech and language modalities. Moreover, we propose a simple and novel mechanism, termed Dual Attention, that encourages better alignment between audio and text to ease the process of knowledge transfer. To evaluate the capacity of SCQA systems in dialogue-style interaction, we assemble a Spoken Conversational Question Answering (Spoken-CoQA) dataset with more than 40k question-answer pairs from 4k conversations. The performance of existing state-of-the-art methods degrades significantly on our dataset, demonstrating the necessity of cross-modal information integration. Our experimental results show that our proposed method achieves superior performance on spoken conversational question answering tasks.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 17:56:59 GMT" } ]
2022-05-02T00:00:00
[ [ "You", "Chenyu", "" ], [ "Chen", "Nuo", "" ], [ "Liu", "Fenglin", "" ], [ "Ge", "Shen", "" ], [ "Wu", "Xian", "" ], [ "Zou", "Yuexian", "" ] ]
new_dataset
0.994814
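The Dual Attention mechanism above encourages alignment between the audio and text modalities; a generic cross-attention sketch in PyTorch, in which each modality queries the other, conveys the underlying idea. The embedding dimension, head count, and use of nn.MultiheadAttention are illustrative assumptions, not the authors' exact architecture.

import torch
import torch.nn as nn

class CrossModalAttention(nn.Module):
    """Each modality queries the other, encouraging aligned audio/text features."""
    def __init__(self, dim: int = 768, heads: int = 8):
        super().__init__()
        self.text_to_audio = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.audio_to_text = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, text_emb, audio_emb):
        text_aligned, _ = self.text_to_audio(text_emb, audio_emb, audio_emb)   # text attends to audio
        audio_aligned, _ = self.audio_to_text(audio_emb, text_emb, text_emb)   # audio attends to text
        return text_aligned, audio_aligned

# Toy usage: sequence lengths differ per modality, shapes are preserved
block = CrossModalAttention()
text = torch.randn(2, 20, 768)     # (batch, text tokens, dim)
audio = torch.randn(2, 50, 768)    # (batch, audio frames, dim)
t_out, a_out = block(text, audio)  # (2, 20, 768) and (2, 50, 768)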