Column schema (name: type, observed range):
  id: string (length 9 to 10)
  submitter: string (length 2 to 52)
  authors: string (length 4 to 6.51k)
  title: string (length 4 to 246)
  comments: string (length 1 to 523)
  journal-ref: string (length 4 to 345)
  doi: string (length 11 to 120)
  report-no: string (length 2 to 243)
  categories: string (length 5 to 98)
  license: string (9 distinct values)
  abstract: string (length 33 to 3.33k)
  versions: list
  update_date: timestamp[s]
  authors_parsed: list
  prediction: string (1 distinct value)
  probability: float64 (0.95 to 1)
2208.14738
Shang Xu
Jianlin Liu, Zhuofei Huang, Dihe Huang, Shang Xu, Ying Chen, and Yong Liu
Scatter Points in Space: 3D Detection from Multi-view Monocular Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D object detection from monocular images is a challenging and long-standing problem in computer vision. To combine information from different perspectives without troublesome 2D instance tracking, recent methods tend to aggregate multi-view features by densely sampling regular 3D grids in space, which is inefficient. In this paper, we improve multi-view feature aggregation by proposing a learnable keypoint sampling method that scatters pseudo surface points in 3D space, preserving data sparsity. The scattered points, augmented by multi-view geometric constraints and visual features, are then employed to infer object locations and shapes in the scene. To compensate for the limitations of a single frame and to model multi-view geometry explicitly, we further propose a surface filter module for noise suppression. Experimental results show that our method achieves significantly better 3D detection performance than previous works (more than 0.1 AP improvement on some categories of ScanNet). The code will be publicly available.
[ { "version": "v1", "created": "Wed, 31 Aug 2022 09:38:05 GMT" } ]
2022-09-01T00:00:00
[ [ "Liu", "Jianlin", "" ], [ "Huang", "Zhuofei", "" ], [ "Huang", "Dihe", "" ], [ "Xu", "Shang", "" ], [ "Chen", "Ying", "" ], [ "Liu", "Yong", "" ] ]
new_dataset
0.974683
2208.14743
Mohamed Sayed
Mohamed Sayed, John Gibson, Jamie Watson, Victor Prisacariu, Michael Firman, Cl\'ement Godard
SimpleRecon: 3D Reconstruction Without 3D Convolutions
ECCV2022 version with improved timings. 14 pages + 5 pages of references
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Traditionally, 3D indoor scene reconstruction from posed images happens in two phases: per-image depth estimation, followed by depth merging and surface reconstruction. Recently, a family of methods have emerged that perform reconstruction directly in final 3D volumetric feature space. While these methods have shown impressive reconstruction results, they rely on expensive 3D convolutional layers, limiting their application in resource-constrained environments. In this work, we instead go back to the traditional route, and show how focusing on high quality multi-view depth prediction leads to highly accurate 3D reconstructions using simple off-the-shelf depth fusion. We propose a simple state-of-the-art multi-view depth estimator with two main contributions: 1) a carefully-designed 2D CNN which utilizes strong image priors alongside a plane-sweep feature volume and geometric losses, combined with 2) the integration of keyframe and geometric metadata into the cost volume which allows informed depth plane scoring. Our method achieves a significant lead over the current state-of-the-art for depth estimation and close or better for 3D reconstruction on ScanNet and 7-Scenes, yet still allows for online real-time low-memory reconstruction. Code, models and results are available at https://nianticlabs.github.io/simplerecon
[ { "version": "v1", "created": "Wed, 31 Aug 2022 09:46:34 GMT" } ]
2022-09-01T00:00:00
[ [ "Sayed", "Mohamed", "" ], [ "Gibson", "John", "" ], [ "Watson", "Jamie", "" ], [ "Prisacariu", "Victor", "" ], [ "Firman", "Michael", "" ], [ "Godard", "Clément", "" ] ]
new_dataset
0.998849
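The plane-sweep feature volume described in the SimpleRecon abstract above pairs a per-pixel matching cost with extra metadata channels that inform depth-plane scoring. Below is a minimal, hedged sketch of that idea (not the authors' code): warped source-view features are assumed precomputed, and the metadata channels are placeholders.

    # Hedged sketch: dot-product plane-sweep matching cost with optional
    # metadata channels appended per depth plane.
    import numpy as np

    def matching_volume(ref_feat, warped_src_feats, metadata=None):
        """ref_feat: (C, H, W); warped_src_feats: (D, C, H, W) source features
        warped onto D fronto-parallel depth planes; metadata: optional (M, H, W)
        channels (e.g., pose priors) broadcast to every plane."""
        D = warped_src_feats.shape[0]
        # Per-plane dot-product similarity between reference and warped source.
        cost = np.einsum('chw,dchw->dhw', ref_feat, warped_src_feats)
        vol = cost[:, None]                            # (D, 1, H, W)
        if metadata is not None:
            meta = np.broadcast_to(metadata, (D,) + metadata.shape)
            vol = np.concatenate([vol, meta], axis=1)  # informed depth scoring
        return vol                                     # fed to a 2D CNN

    ref = np.random.rand(32, 48, 64).astype(np.float32)
    src = np.random.rand(64, 32, 48, 64).astype(np.float32)
    print(matching_volume(ref, src).shape)             # (64, 1, 48, 64)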
2208.14796
Baian Chen
Baian Chen, Liangliang Nan, Haoran Xie, Dening Lu, Fu Lee Wang and Mingqiang Wei
3DLG-Detector: 3D Object Detection via Simultaneous Local-Global Feature Learning
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Capturing both local and global features of irregular point clouds is essential to 3D object detection (3OD). However, mainstream 3D detectors, e.g., VoteNet and its variants, either abandon considerable local features during pooling operations or ignore many global features in the whole scene context. This paper explores new modules that simultaneously learn local and global features of scene point clouds to benefit 3OD. To this end, we propose an effective 3OD network via simultaneous local-global feature learning (dubbed 3DLG-Detector), which makes two key contributions. First, it develops a Dynamic Points Interaction (DPI) module that preserves effective local features during pooling; moreover, DPI is detachable and can be incorporated into existing 3OD networks to boost their performance. Second, it develops a Global Context Aggregation module that aggregates multi-scale features from different encoder layers to achieve scene context-awareness. Our method shows improvements over thirteen competitors in terms of detection accuracy and robustness on both the SUN RGB-D and ScanNet datasets. Source code will be available upon publication.
[ { "version": "v1", "created": "Wed, 31 Aug 2022 12:23:40 GMT" } ]
2022-09-01T00:00:00
[ [ "Chen", "Baian", "" ], [ "Nan", "Liangliang", "" ], [ "Xie", "Haoran", "" ], [ "Lu", "Dening", "" ], [ "Wang", "Fu Lee", "" ], [ "Wei", "Mingqiang", "" ] ]
new_dataset
0.999256
2208.14861
Andrew Kuznetsov
Andrew Kuznetsov, Joseph Chee Chang, Nathan Hahn, Napol Rachatasumrit, Bradley Breneisen, Julina Coupland, Aniket Kittur
Fuse: In-Situ Sensemaking Support in the Browser
null
null
10.1145/3526113.3545693
null
cs.HC cs.IR
http://creativecommons.org/licenses/by/4.0/
People spend a significant amount of time trying to make sense of the internet, collecting content from a variety of sources and organizing it to make decisions and achieve their goals. While humans are able to fluidly iterate on collecting and organizing information in their minds, existing tools and approaches introduce significant friction into the process. We introduce Fuse, a browser extension that externalizes users' working memory by combining low-cost collection with lightweight organization of content in a compact card-based sidebar that is always available. Fuse helps users simultaneously extract key web content and structure it in a lightweight and visual way. We discuss how these affordances help users externalize more of their mental model into the system (e.g., saving, annotating, and structuring items) and support fast reviewing and resumption of task contexts. Our 22-month public deployment and follow-up interviews provide longitudinal insights into the structuring behaviors of real-world users conducting information foraging tasks.
[ { "version": "v1", "created": "Wed, 31 Aug 2022 13:43:27 GMT" } ]
2022-09-01T00:00:00
[ [ "Kuznetsov", "Andrew", "" ], [ "Chang", "Joseph Chee", "" ], [ "Hahn", "Nathan", "" ], [ "Rachatasumrit", "Napol", "" ], [ "Breneisen", "Bradley", "" ], [ "Coupland", "Julina", "" ], [ "Kittur", "Aniket", "" ] ]
new_dataset
0.977679
2208.14877
Leonardo Bonati
Leonardo Bonati, Michele Polese, Salvatore D'Oro, Stefano Basagni, Tommaso Melodia
Intelligent Closed-loop RAN Control with xApps in OpenRAN Gym
6 pages, 4 figures
null
null
null
cs.NI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Softwarization, programmable network control, and the use of all-encompassing controllers acting at different timescales are heralded as the key drivers for the evolution to next-generation cellular networks. These technologies have fostered newly designed intelligent data-driven solutions for managing large sets of diverse cellular functionalities, basically impossible to implement in traditionally closed cellular architectures. Despite the evident interest of industry in Artificial Intelligence (AI) and Machine Learning (ML) solutions for closed-loop control of the Radio Access Network (RAN), and several research works in the field, their design is far from mainstream and remains a sophisticated and often overlooked operation. In this paper, we discuss how to design AI/ML solutions for the intelligent closed-loop control of the Open RAN, providing guidelines and insights based on exemplary solutions with a high-performance record. We then show how to embed these solutions into xApps instantiated on the O-RAN near-real-time RAN Intelligent Controller (RIC) through OpenRAN Gym, the first publicly available toolbox for data-driven O-RAN experimentation at scale. We showcase a use case of an xApp developed with OpenRAN Gym and tested on a cellular network with 7 base stations and 42 users deployed on the Colosseum wireless network emulator. Our demonstration shows the high degree of flexibility of the OpenRAN Gym-based xApp development environment, which is independent of deployment scenarios and traffic demand.
[ { "version": "v1", "created": "Wed, 31 Aug 2022 14:09:12 GMT" } ]
2022-09-01T00:00:00
[ [ "Bonati", "Leonardo", "" ], [ "Polese", "Michele", "" ], [ "D'Oro", "Salvatore", "" ], [ "Basagni", "Stefano", "" ], [ "Melodia", "Tommaso", "" ] ]
new_dataset
0.993613
2208.14884
Federico Rossetto
Carlos Gemmell, Iain Mackie, Paul Owoicho, Federico Rossetto, Sophie Fischer, Jeffrey Dalton
GRILLBot: An Assistant for Real-World Tasks with Neural Semantic Parsing and Graph-Based Representations
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
GRILLBot is the winning system in the 2022 Alexa Prize TaskBot Challenge, moving towards the next generation of multimodal task assistants. It is a voice assistant to guide users through complex real-world tasks in the domains of cooking and home improvement. These are long-running and complex tasks that require flexible adjustment and adaptation. The demo highlights the core aspects, including a novel Neural Decision Parser for contextualized semantic parsing, a new "TaskGraph" state representation that supports conditional execution, knowledge-grounded chit-chat, and automatic enrichment of tasks with images and videos.
[ { "version": "v1", "created": "Wed, 31 Aug 2022 14:24:35 GMT" } ]
2022-09-01T00:00:00
[ [ "Gemmell", "Carlos", "" ], [ "Mackie", "Iain", "" ], [ "Owoicho", "Paul", "" ], [ "Rossetto", "Federico", "" ], [ "Fischer", "Sophie", "" ], [ "Dalton", "Jeffrey", "" ] ]
new_dataset
0.964436
2208.14885
Ray-Guang Cheng
Fransiscus Asisi Bimo, Ferlinda Feliana, Shu-Hua Liao, Chih-Wei Lin, David F. Kinsey, James Li, Rittwik Jana, Richard Wright, Ray-Guang Cheng
OSC Community Lab: The Integration Test Bed for O-RAN Software Community
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The O-RAN Software Community (OSC) is an open-source project run jointly by the O-RAN Alliance and the Linux Foundation, aiming to develop reference software components based on 3GPP and O-RAN Alliance specifications. The OSC has twelve projects. Among them, the Integration and Testing (INT) project is responsible for testing the requirements documented in each release for end-to-end and use-case testing. Three OSC Community Laboratories were built to speed up integration and interoperability testing among different projects. This paper summarizes the software components developed by OSC projects and the status of the three OSC Community Laboratories. We elaborate on the activities of each laboratory, how the community collaborates, and the challenges we encountered along the way.
[ { "version": "v1", "created": "Wed, 31 Aug 2022 14:25:06 GMT" } ]
2022-09-01T00:00:00
[ [ "Bimo", "Fransiscus Asisi", "" ], [ "Feliana", "Ferlinda", "" ], [ "Liao", "Shu-Hua", "" ], [ "Lin", "Chih-Wei", "" ], [ "Kinsey", "David F.", "" ], [ "Li", "James", "" ], [ "Jana", "Rittwik", "" ], [ "Wright", "Richard", "" ], [ "Cheng", "Ray-Guang", "" ] ]
new_dataset
0.964787
2208.14925
Tim Schreiter
Tim Schreiter, Tiago Rodrigues de Almeida, Yufei Zhu, Eduardo Gutierrez Maestro, Lucas Morillo-Mendez, Andrey Rudenko, Tomasz P. Kucner, Oscar Martinez Mozos, Martin Magnusson, Luigi Palmieri, Kai O. Arras, Achim J. Lilienthal
The Magni Human Motion Dataset: Accurate, Complex, Multi-Modal, Natural, Semantically-Rich and Contextualized
In the SIRRW Workshop held in conjunction with the 31st IEEE International Conference on Robot & Human Interactive Communication, 29/08 - 02/09 2022, Naples (Italy)
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Rapid development of social robots stimulates active research in human motion modeling, interpretation and prediction, proactive collision avoidance, human-robot interaction, and co-habitation in shared spaces. Modern approaches to this end require high-quality datasets for training and evaluation. However, the majority of available datasets suffer from either inaccurate tracking data or unnatural, scripted behavior of the tracked people. This paper attempts to fill this gap by providing high-quality tracking information from motion capture, eye-gaze trackers, and on-board robot sensors in a semantically-rich environment. To induce natural behavior in the recorded participants, we utilise loosely scripted task assignments, which lead the participants to navigate through the dynamic laboratory environment in a natural and purposeful way. The motion dataset presented in this paper sets a high quality standard, as the realistic and accurate data is enhanced with semantic information, enabling the development of new algorithms that rely not only on tracking information but also on contextual cues of the moving agents and the static and dynamic environment.
[ { "version": "v1", "created": "Wed, 31 Aug 2022 15:37:45 GMT" } ]
2022-09-01T00:00:00
[ [ "Schreiter", "Tim", "" ], [ "de Almeida", "Tiago Rodrigues", "" ], [ "Zhu", "Yufei", "" ], [ "Maestro", "Eduardo Gutierrez", "" ], [ "Morillo-Mendez", "Lucas", "" ], [ "Rudenko", "Andrey", "" ], [ "Kucner", "Tomasz P.", "" ], [ "Mozos", "Oscar Martinez", "" ], [ "Magnusson", "Martin", "" ], [ "Palmieri", "Luigi", "" ], [ "Arras", "Kai O.", "" ], [ "Lilienthal", "Achim J.", "" ] ]
new_dataset
0.999137
2208.14935
Qiange Wang
Qiange Wang, Xin Ai, Yanfeng Zhang, Jing Chen, Ge Yu
HyTGraph: GPU-Accelerated Graph Processing with Hybrid Transfer Management
14 pages with 10 figures. Accepted by ICDE 2023
null
null
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Processing large graphs on a memory-limited GPU requires resolving host-GPU data transfer, which is a key performance bottleneck. Existing GPU-accelerated graph processing frameworks reduce data transfers by managing the active subgraph transfer at runtime. Some frameworks adopt explicit transfer management approaches based on explicit memory copy with filtering or compaction, while others adopt implicit transfer management approaches based on on-demand access with zero-copy or unified memory. Through intensive analysis, we find that as the active vertices evolve, the performance of the two approaches varies across workloads. Due to heavy redundant data transfers, high CPU compaction overhead, or low bandwidth utilization, adopting a single approach often results in suboptimal performance. In this work, we propose a hybrid transfer management approach that takes the merits of both approaches at runtime, with the objective of achieving the shortest execution time in each iteration. Based on this hybrid approach, we present HyTGraph, a GPU-accelerated graph processing framework empowered by a set of effective task scheduling optimizations to improve performance. Our experimental results on real-world and synthesized graphs demonstrate that HyTGraph achieves up to a 10.27X speedup over existing GPU-accelerated graph processing systems including Grus, Subway, and EMOGI.
[ { "version": "v1", "created": "Wed, 31 Aug 2022 16:05:19 GMT" } ]
2022-09-01T00:00:00
[ [ "Wang", "Qiange", "" ], [ "Ai", "Xin", "" ], [ "Zhang", "Yanfeng", "" ], [ "Chen", "Jing", "" ], [ "Yu", "Ge", "" ] ]
new_dataset
0.987815
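The hybrid transfer management in the HyTGraph abstract above boils down to a per-iteration choice between explicit compaction-and-copy and implicit zero-copy access. A toy sketch of such a decision rule follows; the cost model and bandwidth constants are assumptions for illustration, not measurements from the paper.

    # Hedged sketch: choose the cheaper host-to-GPU transfer strategy per
    # iteration, based on estimated costs of the two approaches.
    def pick_transfer_strategy(n_active, bytes_per_vertex,
                               pcie_bw=12e9, zero_copy_bw=6e9,
                               compact_cpu_cost=2e-9):
        # Explicit: compact active data on the CPU, then one dense PCIe copy.
        explicit_t = (n_active * compact_cpu_cost
                      + n_active * bytes_per_vertex / pcie_bw)
        # Implicit: GPU reads active data from host memory on demand; scattered
        # zero-copy accesses waste effective bandwidth.
        implicit_t = n_active * bytes_per_vertex / zero_copy_bw
        return 'explicit' if explicit_t < implicit_t else 'implicit'

    # A sparse frontier (early BFS-like iterations) tends to favor zero-copy;
    # a dense frontier amortizes the compaction overhead.
    print(pick_transfer_strategy(n_active=1_000, bytes_per_vertex=8))
    print(pick_transfer_strategy(n_active=5_000_000, bytes_per_vertex=8))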
2208.14971
Cameron Boeder
Cameron Boeder and Troy Januchowski
Zero-day DDoS Attack Detection
null
null
null
null
cs.CR cs.LG cs.NI
http://creativecommons.org/licenses/by/4.0/
The ability to detect zero-day (novel) attacks has become essential in the network security industry. Due to ever evolving attack signatures, existing network intrusion detection systems often fail to detect these threats. This project aims to solve the task of detecting zero-day DDoS (distributed denial-of-service) attacks by utilizing network traffic that is captured before entering a private network. Modern feature extraction techniques are used in conjunction with neural networks to determine if a network packet is either benign or malicious.
[ { "version": "v1", "created": "Wed, 31 Aug 2022 17:14:43 GMT" } ]
2022-09-01T00:00:00
[ [ "Boeder", "Cameron", "" ], [ "Januchowski", "Troy", "" ] ]
new_dataset
0.997451
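As a rough illustration of the pipeline sketched in the zero-day DDoS abstract above (feature extraction followed by a neural network classifying packets as benign or malicious), here is a minimal sketch; the feature set and labels are synthetic stand-ins, not the authors' features or data.

    # Hedged sketch: featurized packets -> small neural-network classifier.
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    # Toy stand-in features: packet length, inter-arrival time, header entropy.
    X = rng.random((2000, 3))
    y = (X[:, 1] < 0.2).astype(int)   # synthetic labels for demonstration only

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print('held-out accuracy:', clf.score(X_te, y_te))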
2208.15001
Mingyuan Zhang
Mingyuan Zhang, Zhongang Cai, Liang Pan, Fangzhou Hong, Xinying Guo, Lei Yang, Ziwei Liu
MotionDiffuse: Text-Driven Human Motion Generation with Diffusion Model
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Human motion modeling is important for many modern graphics applications, which typically require professional skills. In order to remove the skill barriers for laymen, recent motion generation methods can directly generate human motions conditioned on natural languages. However, it remains challenging to achieve diverse and fine-grained motion generation with various text inputs. To address this problem, we propose MotionDiffuse, the first diffusion model-based text-driven motion generation framework, which demonstrates several desired properties over existing methods. 1) Probabilistic Mapping. Instead of a deterministic language-motion mapping, MotionDiffuse generates motions through a series of denoising steps in which variations are injected. 2) Realistic Synthesis. MotionDiffuse excels at modeling complicated data distributions and generating vivid motion sequences. 3) Multi-Level Manipulation. MotionDiffuse responds to fine-grained instructions on body parts and supports arbitrary-length motion synthesis with time-varying text prompts. Our experiments show that MotionDiffuse outperforms existing SoTA methods by convincing margins on text-driven motion generation and action-conditioned motion generation. A qualitative analysis further demonstrates MotionDiffuse's controllability for comprehensive motion generation. Homepage: https://mingyuan-zhang.github.io/projects/MotionDiffuse.html
[ { "version": "v1", "created": "Wed, 31 Aug 2022 17:58:54 GMT" } ]
2022-09-01T00:00:00
[ [ "Zhang", "Mingyuan", "" ], [ "Cai", "Zhongang", "" ], [ "Pan", "Liang", "" ], [ "Hong", "Fangzhou", "" ], [ "Guo", "Xinying", "" ], [ "Yang", "Lei", "" ], [ "Liu", "Ziwei", "" ] ]
new_dataset
0.957784
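The "series of denoising steps in which variations are injected" in property 1) above is the standard DDPM reverse process that diffusion-based generators build on. A minimal sketch follows; the text-conditioned noise predictor is a stub standing in for the trained network, and the schedule constants are generic assumptions.

    # Hedged sketch: generic DDPM reverse (sampling) loop for a motion tensor.
    import numpy as np

    T = 50
    betas = np.linspace(1e-4, 0.02, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    def eps_model(x, t, text_emb):
        return np.zeros_like(x)          # placeholder for the learned denoiser

    def sample_motion(shape, text_emb, rng=np.random.default_rng(0)):
        x = rng.standard_normal(shape)   # start from Gaussian noise
        for t in reversed(range(T)):
            eps = eps_model(x, t, text_emb)
            # Posterior mean: remove the predicted noise component.
            x = (x - betas[t] / np.sqrt(1 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
            if t > 0:                    # inject variance except at the last step
                x += np.sqrt(betas[t]) * rng.standard_normal(shape)
        return x

    pose_seq = sample_motion((60, 72), text_emb=None)  # 60 frames x 72 pose dims
    print(pose_seq.shape)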
2009.01498
Kurt Mehlhorn
Vincenzo Bonifaci and Enrico Facca and Frederic Folz and Andreas Karrenbauer and Pavel Kolev and Kurt Mehlhorn and Giovanna Morigi and Golnoosh Shahkarami and Quentin Vermande
Physarum-Inspired Multi-Commodity Flow Dynamics
to appear in Theoretical Computer Science
Theoretical Computer Science 920, pp. 1-20 (2022)
null
null
cs.DS cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In wet-lab experiments, the slime mold Physarum polycephalum has demonstrated its ability to solve shortest path problems and to design efficient networks. For the shortest path problem, a mathematical model for the evolution of the slime is available, and it has been shown in computer experiments and through mathematical analysis that the dynamics solves the shortest path problem. In this paper, we introduce a dynamics for the network design problem. We formulate network design as the problem of constructing a network that efficiently supports a multi-commodity flow problem. We investigate the dynamics in computer simulations and analytically. The simulations show that the dynamics is able to construct efficient and elegant networks. In the theoretical part, we show that the dynamics minimizes an objective combining the cost of the network and the cost of routing the demands through the network. We also give an alternative characterization of the optimum solution.
[ { "version": "v1", "created": "Thu, 3 Sep 2020 07:48:48 GMT" }, { "version": "v2", "created": "Wed, 23 Sep 2020 15:17:07 GMT" }, { "version": "v3", "created": "Fri, 23 Oct 2020 11:36:33 GMT" }, { "version": "v4", "created": "Wed, 10 Mar 2021 21:05:59 GMT" }, { "version": "v5", "created": "Wed, 9 Feb 2022 07:22:56 GMT" } ]
2022-08-31T00:00:00
[ [ "Bonifaci", "Vincenzo", "" ], [ "Facca", "Enrico", "" ], [ "Folz", "Frederic", "" ], [ "Karrenbauer", "Andreas", "" ], [ "Kolev", "Pavel", "" ], [ "Mehlhorn", "Kurt", "" ], [ "Morigi", "Giovanna", "" ], [ "Shahkarami", "Golnoosh", "" ], [ "Vermande", "Quentin", "" ] ]
new_dataset
0.994291
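For intuition, the classic single-commodity Physarum dynamics that this paper generalizes updates each edge conductivity toward the magnitude of the flow it carries, x' = |q| - x, where q is the flow induced by the current conductivities. The toy simulation below shows the dynamics concentrating flow on the shorter of two parallel s-t edges; it illustrates the single-commodity case only, not the paper's multi-commodity dynamics.

    # Hedged sketch: discretized Physarum dynamics on two parallel edges.
    import numpy as np

    lengths = np.array([1.0, 2.0])       # edge lengths
    x = np.array([0.5, 0.5])             # initial conductivities
    demand, dt = 1.0, 0.1

    for _ in range(500):
        cond = x / lengths               # edge conductances
        q = demand * cond / cond.sum()   # flow splits proportionally to conductance
        x = x + dt * (np.abs(q) - x)     # Physarum update x' = |q| - x
    print(x.round(3))                    # ~[1, 0]: all flow on the shortest edge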
2102.01480
Muneeb Ul Hassan
Muneeb Ul Hassan, Mubashir Husain Rehmani, Jinjun Chen
VPT: Privacy Preserving Energy Trading and Block Mining Mechanism for Blockchain based Virtual Power Plants
Article Submitted for Review
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The desire to overcome reliability issues of distributed energy resources (DERs) has led researchers to develop a novel concept named the virtual power plant (VPP). VPPs are supposed to carry out intelligent, secure, and smart energy trading among prosumers, buyers, and generating stations, along with providing efficient energy management. Therefore, integrating blockchain into a decentralized VPP network has emerged as a new paradigm, and recent experiments on this integration have shown fruitful results. However, this decentralization also suffers from energy management, trust, reliability, and efficiency issues due to the dynamic nature of DERs. To overcome this, in this paper, we first provide an efficient energy management strategy for VPPs to enhance demand response, and then propose an energy-oriented trading and block mining protocol named proof of energy market (PoEM). To enhance it further, we integrate differential privacy into PoEM and propose a Private PoEM (PPoEM) model. Collectively, we propose a private decentralized VPP trading model named the Virtual Private Trading (VPT) model. We further carry out extensive theoretical analysis and derive step-by-step valuations for market race probability, market stability probability, energy trading expectation, winning state probability, and prospective leading-time profit values. Afterwards, we carry out simulation-based experiments on our proposed model. The performance evaluation and theoretical analysis of our VPT model make it one of the most viable models for blockchain-based VPP networks compared to other state-of-the-art works.
[ { "version": "v1", "created": "Tue, 2 Feb 2021 13:11:24 GMT" }, { "version": "v2", "created": "Tue, 30 Aug 2022 00:49:26 GMT" } ]
2022-08-31T00:00:00
[ [ "Hassan", "Muneeb Ul", "" ], [ "Rehmani", "Mubashir Husain", "" ], [ "Chen", "Jinjun", "" ] ]
new_dataset
0.992685
2107.06149
Haocheng Ren
Haocheng Ren and Hao Zhang and Jia Zheng and Jiaxiang Zheng and Rui Tang and Yuchi Huo and Hujun Bao and Rui Wang
MINERVAS: Massive INterior EnviRonments VirtuAl Synthesis
Accepted by Computer Graphics Forum, Pacific Graphics 2022. The first two authors contribute equally. Project page: https://coohom.github.io/MINERVAS
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the rapid development of data-driven techniques, data has played an essential role in various computer vision tasks. Many realistic and synthetic datasets have been proposed to address different problems. However, many challenges remain unresolved: (1) creating a dataset is usually a tedious process involving manual annotation, (2) most datasets are designed for a single specific task, (3) modifying or randomizing a 3D scene is difficult, and (4) releasing commercial 3D data may raise copyright issues. This paper presents MINERVAS, a Massive INterior EnviRonments VirtuAl Synthesis system that facilitates 3D scene modification and 2D image synthesis for various vision tasks. In particular, we design a programmable pipeline with a Domain-Specific Language, allowing users to (1) select scenes from a commercial indoor scene database, (2) synthesize scenes for different tasks with customized rules, and (3) render various imagery data, such as visual color, geometric structures, and semantic labels. Our system eases the difficulty of customizing massive numbers of scenes for different tasks and relieves users from manipulating fine-grained scene configurations by providing user-controllable randomness through multi-level samplers. Most importantly, it empowers users to access commercial scene databases with millions of indoor scenes while protecting the copyright of core data assets, e.g., 3D CAD models. We demonstrate the validity and flexibility of our system by using our synthesized data to improve performance on different kinds of computer vision tasks.
[ { "version": "v1", "created": "Tue, 13 Jul 2021 14:53:01 GMT" }, { "version": "v2", "created": "Wed, 14 Jul 2021 14:21:45 GMT" }, { "version": "v3", "created": "Sun, 12 Jun 2022 02:45:04 GMT" }, { "version": "v4", "created": "Tue, 30 Aug 2022 09:21:25 GMT" } ]
2022-08-31T00:00:00
[ [ "Ren", "Haocheng", "" ], [ "Zhang", "Hao", "" ], [ "Zheng", "Jia", "" ], [ "Zheng", "Jiaxiang", "" ], [ "Tang", "Rui", "" ], [ "Huo", "Yuchi", "" ], [ "Bao", "Hujun", "" ], [ "Wang", "Rui", "" ] ]
new_dataset
0.998516
2111.06102
Yoshinori Aono
Yoshinori Aono, Sitong Liu, Tomoki Tanaka, Shumpei Uno, Rodney Van Meter, Naoyuki Shinohara, Ryo Nojima
The Present and Future of Discrete Logarithm Problems on Noisy Quantum Computers
null
IEEE Transactions on Quantum Engineering, vol. 3, pp. 1-21, 2022
10.1109/TQE.2022.3183385
null
cs.CR quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The discrete logarithm problem (DLP) is the basis for several cryptographic primitives. Since Shor's work, it has been known that the DLP can be solved by combining a polynomial-size quantum circuit and a polynomial-time classical post-processing algorithm. Evaluating and predicting the instance size that quantum devices can solve is an emerging research topic. In this paper, we propose a quantitative measure based on the success probability of the post-processing algorithm to determine whether an experiment on a quantum device (or a classical simulator) succeeded. We also propose a procedure to modify bit strings observed from a Shor circuit to increase the success probability of a lattice-based post-processing algorithm. We report preliminary experiments conducted on IBM-Quantum quantum computers and near-future predictions based on noisy-device simulations. We conducted our experiments with the ibm_kawasaki device and discovered that the simplest circuit (7 qubits) from a 2-bit DLP instance achieves a sufficiently high success probability to proclaim the experiment successful. Experiments on another circuit from a slightly harder 2-bit DLP instance, on the other hand, did not succeed, and we determined that reducing the noise level by half is required to achieve a successful experiment. Finally, we give a near-term prediction based on required noise levels to solve some selected small DLP and integer factoring instances.
[ { "version": "v1", "created": "Thu, 11 Nov 2021 08:49:16 GMT" } ]
2022-08-31T00:00:00
[ [ "Aono", "Yoshinori", "" ], [ "Liu", "Sitong", "" ], [ "Tanaka", "Tomoki", "" ], [ "Uno", "Shumpei", "" ], [ "Van Meter", "Rodney", "" ], [ "Shinohara", "Naoyuki", "" ], [ "Nojima", "Ryo", "" ] ]
new_dataset
0.995624
2112.14663
E Zhixuan Zeng
Yuhao Chen, E. Zhixuan Zeng, Maximilian Gilles, Alexander Wong
MetaGraspNet_v0: A Large-Scale Benchmark Dataset for Vision-driven Robotic Grasping via Physics-based Metaverse Synthesis
null
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
There has been increasing interest in smart factories powered by robotics systems to tackle repetitive, laborious tasks. One impactful yet challenging task in robotics-powered smart factory applications is robotic grasping: using robotic arms to grasp objects autonomously in different settings. Robotic grasping requires a variety of computer vision tasks such as object detection, segmentation, grasp prediction, pick planning, etc. While significant progress has been made in leveraging machine learning for robotic grasping, particularly with deep learning, a big challenge remains in the need for large-scale, high-quality RGBD datasets that cover a wide diversity of scenarios and permutations. To tackle this big, diverse data problem, we are inspired by the recent rise of the metaverse concept, which has greatly closed the gap between virtual worlds and the physical world. Metaverses allow us to create digital twins of real-world manufacturing scenarios and to virtually create different scenarios from which large volumes of data can be generated for training models. In this paper, we present MetaGraspNet: a large-scale benchmark dataset for vision-driven robotic grasping via physics-based metaverse synthesis. The proposed dataset contains 100,000 images and 25 different object types and is split into 5 difficulties to evaluate object detection and segmentation model performance in different grasping scenarios. We also propose a new layout-weighted performance metric alongside the dataset for evaluating object detection and segmentation performance in a manner that is more appropriate for robotic grasp applications compared to existing general-purpose performance metrics. Our benchmark dataset is available open-source on Kaggle, with the first phase consisting of detailed object detection, segmentation, layout annotations, and a layout-weighted performance metric script.
[ { "version": "v1", "created": "Wed, 29 Dec 2021 17:23:24 GMT" }, { "version": "v2", "created": "Thu, 30 Dec 2021 18:05:26 GMT" }, { "version": "v3", "created": "Tue, 30 Aug 2022 17:53:40 GMT" } ]
2022-08-31T00:00:00
[ [ "Chen", "Yuhao", "" ], [ "Zeng", "E. Zhixuan", "" ], [ "Gilles", "Maximilian", "" ], [ "Wong", "Alexander", "" ] ]
new_dataset
0.999863
2208.08900
Mohit Vaishnav
Mohit Vaishnav, Thomas Fel, Iv\'an Felipe Rodr\'iguez and Thomas Serre
Conviformers: Convolutionally guided Vision Transformer
12 pages; 4 Figures; 8 Tables
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Vision transformers are nowadays the de facto choice for image classification tasks. There are two broad categories of classification tasks, fine-grained and coarse-grained. In fine-grained classification, the necessity is to discover subtle differences due to the high level of similarity between sub-classes. Such distinctions are often lost as we downscale the image to save the memory and computational cost associated with vision transformers (ViT). In this work, we present an in-depth analysis and describe the critical components for developing a system for the fine-grained categorization of plants from herbarium sheets. Our extensive experimental analysis indicated the need for a better augmentation technique and the ability of modern-day neural networks to handle higher-dimensional images. We also introduce a convolutional transformer architecture called Conviformer which, unlike the popular Vision Transformer (ConViT), can handle higher resolution images without exploding memory and computational cost. We also introduce a novel, improved pre-processing technique called PreSizer to resize images better while preserving their original aspect ratios, which proved essential for classifying natural plants. With our simple yet effective approach, we achieved SoTA on Herbarium 202x and the iNaturalist 2019 dataset.
[ { "version": "v1", "created": "Wed, 17 Aug 2022 13:09:24 GMT" }, { "version": "v2", "created": "Sun, 28 Aug 2022 11:46:25 GMT" } ]
2022-08-31T00:00:00
[ [ "Vaishnav", "Mohit", "" ], [ "Fel", "Thomas", "" ], [ "Rodríguez", "Ivań Felipe", "" ], [ "Serre", "Thomas", "" ] ]
new_dataset
0.979163
2208.12037
Weixian Lei
Stan Weixian Lei, Difei Gao, Jay Zhangjie Wu, Yuxuan Wang, Wei Liu, Mengmi Zhang, Mike Zheng Shou
Symbolic Replay: Scene Graph as Prompt for Continual Learning on VQA Task
18 pages, 13 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
VQA is an ambitious task aiming to answer any image-related question. However, in reality, it is hard to build such a system once and for all, since user needs are continuously updated and the system has to implement new functions. Thus, Continual Learning (CL) ability is a must in developing advanced VQA systems. Recently, a pioneering work split a VQA dataset into disjoint answer sets to study this topic. However, CL on VQA involves more than the expansion of label sets (new Answer sets). It is crucial to study how to answer questions when deploying VQA systems to new environments (new Visual scenes) and how to answer questions requiring new functions (new Question types). Thus, we propose CLOVE, a benchmark for Continual Learning On Visual quEstion answering, which contains scene- and function-incremental settings for the two aforementioned CL scenarios. In terms of methodology, the main difference between CL on VQA and classification is that the former additionally involves expanding and preventing forgetting of reasoning mechanisms, while the latter focuses on class representation. Thus, we propose a real-data-free, replay-based method tailored for CL on VQA, named Scene Graph as Prompt for Symbolic Replay. Using a piece of scene graph as a prompt, it replays pseudo scene graphs to represent past images, along with correlated QA pairs. A unified VQA model is also proposed to utilize the current and replayed data to enhance its QA ability. Finally, experimental results reveal challenges in CLOVE and demonstrate the effectiveness of our method. The dataset and code will be available at https://github.com/showlab/CLVQA.
[ { "version": "v1", "created": "Wed, 24 Aug 2022 12:00:02 GMT" }, { "version": "v2", "created": "Mon, 29 Aug 2022 10:22:20 GMT" } ]
2022-08-31T00:00:00
[ [ "Lei", "Stan Weixian", "" ], [ "Gao", "Difei", "" ], [ "Wu", "Jay Zhangjie", "" ], [ "Wang", "Yuxuan", "" ], [ "Liu", "Wei", "" ], [ "Zhang", "Mengmi", "" ], [ "Shou", "Mike Zheng", "" ] ]
new_dataset
0.996628
2208.12886
Jean-Philippe Corbeil
Jean-Philippe Corbeil, Mia Taige Li, Hadi Abdi Ghavidel
Building the Intent Landscape of Real-World Conversational Corpora with Extractive Question-Answering Transformers
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
For companies with customer service, mapping intents inside their conversational data is crucial in building applications based on natural language understanding (NLU). Nevertheless, there is no established automated technique to gather the intents from noisy online chats or voice transcripts. Simple clustering approaches are not suited to intent-sparse dialogues. To solve this intent-landscape task, we propose an unsupervised pipeline that extracts the intents and the taxonomy of intents from real-world dialogues. Our pipeline mines intent-span candidates with an extractive Question-Answering Electra model and leverages sentence embeddings to apply a low-level density clustering followed by a top-level hierarchical clustering. Our results demonstrate the generalization ability of an ELECTRA large model fine-tuned on the SQuAD2 dataset to understand dialogues. With the right prompting question, this model achieves a rate of linguistic validation on intent spans beyond 85%. We furthermore reconstructed the intent schemes of five domains from the MultiDoGo dataset with an average recall of 94.3%.
[ { "version": "v1", "created": "Fri, 26 Aug 2022 22:53:19 GMT" }, { "version": "v2", "created": "Tue, 30 Aug 2022 16:03:38 GMT" } ]
2022-08-31T00:00:00
[ [ "Corbeil", "Jean-Philippe", "" ], [ "Li", "Mia Taige", "" ], [ "Ghavidel", "Hadi Abdi", "" ] ]
new_dataset
0.994956
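The pipeline in the abstract above has three stages: QA-based intent-span mining, sentence embedding, and two-level clustering. A hedged sketch follows; the checkpoints, the prompt question, and the clustering parameters are illustrative choices, not the paper's exact configuration.

    # Hedged sketch: extract intent spans with a SQuAD-style QA model, embed
    # them, then cluster at a low (density) and top (hierarchical) level.
    from transformers import pipeline
    from sentence_transformers import SentenceTransformer
    import hdbscan
    from scipy.cluster.hierarchy import linkage, fcluster

    qa = pipeline('question-answering', model='deepset/electra-base-squad2')
    dialogues = ['Hi, I would like to cancel my subscription please.',
                 'Please cancel my plan, I no longer need it.',
                 'Can you help me reset my account password?',
                 'I forgot my password, how do I reset it?']
    spans = [qa(question='What does the customer want to do?', context=d)['answer']
             for d in dialogues]

    emb = SentenceTransformer('all-MiniLM-L6-v2').encode(spans)
    low = hdbscan.HDBSCAN(min_cluster_size=2).fit_predict(emb)   # density level
    top = fcluster(linkage(emb, method='ward'), t=2, criterion='maxclust')
    print(list(zip(spans, low, top)))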
2208.13900
Erfan Pakdamanian
Erfan Pakdamanian, Erzhen Hu, Shili Sheng, Sarit Kraus, Seongkook Heo, Lu Feng
Enjoy the Ride Consciously with CAWA: Context-Aware Advisory Warnings for Automated Driving
Proceedings of the 14th International Conference on Automotive User Interfaces and Interactive Vehicular Applications (AutomotiveUI '22)
null
10.1145/3543174.3546835
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In conditionally automated driving, drivers decoupled from driving while immersed in non-driving-related tasks (NDRTs) could potentially either miss the system-initiated takeover request (TOR) or a sudden TOR may startle them. To better prepare drivers for a safer takeover in an emergency, we propose novel context-aware advisory warnings (CAWA) for automated driving to gently inform drivers. This will help them stay vigilant while engaging in NDRTs. The key innovation is that CAWA adapts warning modalities according to the context of NDRTs. We conducted a user study to investigate the effectiveness of CAWA. The study results show that CAWA has statistically significant effects on safer takeover behavior, improved driver situational awareness, less attention demand, and more positive user feedback, compared with uniformly distributed speech-based warnings across all NDRTs.
[ { "version": "v1", "created": "Mon, 29 Aug 2022 21:44:49 GMT" } ]
2022-08-31T00:00:00
[ [ "Pakdamanian", "Erfan", "" ], [ "Hu", "Erzhen", "" ], [ "Sheng", "Shili", "" ], [ "Kraus", "Sarit", "" ], [ "Heo", "Seongkook", "" ], [ "Feng", "Lu", "" ] ]
new_dataset
0.998667
2208.13935
Yingfu Xu
Yingfu Xu and Guido C. H. E. de Croon
CUAHN-VIO: Content-and-Uncertainty-Aware Homography Network for Visual-Inertial Odometry
19 pages, 14 figures, 6 tables
null
null
null
cs.RO cs.CV cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Learning-based visual ego-motion estimation is promising yet not ready for navigating agile mobile robots in the real world. In this article, we propose CUAHN-VIO, a robust and efficient monocular visual-inertial odometry (VIO) designed for micro aerial vehicles (MAVs) equipped with a downward-facing camera. The vision frontend is a content-and-uncertainty-aware homography network (CUAHN) that is robust to non-homography image content and failure cases of network prediction. It not only predicts the homography transformation but also estimates its uncertainty. The training is self-supervised, so that it does not require ground truth that is often difficult to obtain. The network has good generalization that enables "plug-and-play" deployment in new environments without fine-tuning. A lightweight extended Kalman filter (EKF) serves as the VIO backend and utilizes the mean prediction and variance estimation from the network for visual measurement updates. CUAHN-VIO is evaluated on a high-speed public dataset and shows rivaling accuracy to state-of-the-art (SOTA) VIO approaches. Thanks to the robustness to motion blur, low network inference time (~23ms), and stable processing latency (~26ms), CUAHN-VIO successfully runs onboard an Nvidia Jetson TX2 embedded processor to navigate a fast autonomous MAV.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 00:11:55 GMT" } ]
2022-08-31T00:00:00
[ [ "Xu", "Yingfu", "" ], [ "de Croon", "Guido C. H. E.", "" ] ]
new_dataset
0.996235
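The EKF backend described in the CUAHN-VIO abstract above consumes the network's mean prediction as a measurement and its variance estimate as the measurement noise. Below is a generic EKF measurement update illustrating that coupling; the state layout and Jacobian are placeholder assumptions, not CUAHN-VIO's actual state design.

    # Hedged sketch: standard EKF measurement update with network-supplied
    # measurement mean `z` and per-dimension variance `z_var`.
    import numpy as np

    def ekf_update(mu, P, z, z_var, H):
        """mu: (n,) state mean; P: (n,n) covariance; z: (m,) network prediction;
        z_var: (m,) variance estimated by the network; H: (m,n) Jacobian."""
        R = np.diag(z_var)                          # uncertainty-aware noise
        S = H @ P @ H.T + R                         # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
        mu_new = mu + K @ (z - H @ mu)              # corrected state
        P_new = (np.eye(len(mu)) - K @ H) @ P
        return mu_new, P_new

    mu, P, H = np.zeros(4), np.eye(4), np.eye(4)
    mu, P = ekf_update(mu, P, z=np.ones(4), z_var=np.full(4, 0.1), H=H)
    print(mu)   # confident measurements (small z_var) pull the state strongly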
2208.13947
Juan Manuel Perez
Tom\'as Alves Salgueiro, Emilio Recart Zapata, Dami\'an Furman, Juan Manuel P\'erez, Pablo Nicol\'as Fern\'andez Larrosa
A Spanish dataset for Targeted Sentiment Analysis of political headlines
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Subjective texts have been studied by several works as they can induce certain behaviours in their users. Most work focuses on user-generated texts in social networks, but some other texts also comprise opinions on certain topics and could influence judgement criteria during political decisions. In this work, we address the task of Targeted Sentiment Analysis for the domain of news headlines, published by the main outlets during the 2019 Argentinean Presidential Elections. For this purpose, we present a polarity dataset of 1,976 headlines mentioning candidates in the 2019 elections at the target level. Preliminary experiments with state-of-the-art classification algorithms based on pre-trained linguistic models suggest that target information is helpful for this task. We make our data and pre-trained models publicly available.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 01:30:30 GMT" } ]
2022-08-31T00:00:00
[ [ "Salgueiro", "Tomás Alves", "" ], [ "Zapata", "Emilio Recart", "" ], [ "Furman", "Damián", "" ], [ "Pérez", "Juan Manuel", "" ], [ "Larrosa", "Pablo Nicolás Fernández", "" ] ]
new_dataset
0.999858
2208.14023
Edward Vendrow
Edward Vendrow, Satyajit Kumar, Ehsan Adeli, Hamid Rezatofighi
SoMoFormer: Multi-Person Pose Forecasting with Transformers
10 pages, 6 figures. Submitted to WACV 2023. Our method was submitted to the SoMoF benchmark leaderboard dated March 2022. See https://somof.stanford.edu/result/217/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Human pose forecasting is a challenging problem involving complex human body motion and posture dynamics. In cases where there are multiple people in the environment, one's motion may also be influenced by the motion and dynamic movements of others. Although several previous works target the problem of multi-person dynamic pose forecasting, they often model the entire pose sequence as a time series (ignoring the underlying relationships between joints) or only output the future pose sequence of one person at a time. In this paper, we present a new method, called Social Motion Transformer (SoMoFormer), for multi-person 3D pose forecasting. Our transformer architecture uniquely models human motion input as a joint sequence rather than a time sequence, allowing us to perform attention over joints while predicting an entire future motion sequence for each joint in parallel. We show that with this problem reformulation, SoMoFormer naturally extends to multi-person scenes by using the joints of all people in a scene as input queries. Using learned embeddings to denote the type of joint, person identity, and global position, our model learns the relationships between joints and between people, attending more strongly to joints from the same or nearby people. SoMoFormer outperforms state-of-the-art methods for long-term motion prediction on the SoMoF benchmark as well as the CMU-Mocap and MuPoTS-3D datasets. Code will be made available after publication.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 06:59:28 GMT" } ]
2022-08-31T00:00:00
[ [ "Vendrow", "Edward", "" ], [ "Kumar", "Satyajit", "" ], [ "Adeli", "Ehsan", "" ], [ "Rezatofighi", "Hamid", "" ] ]
new_dataset
0.991266
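The key reformulation above treats joints, not timesteps, as the token sequence. A small sketch of that input layout follows; the tensor shapes and id scheme are illustrative assumptions, not the paper's exact embedding design.

    # Hedged sketch: flatten a multi-person motion tensor into one sequence of
    # joint tokens, each tagged with joint-type and person-identity ids that
    # would index learned embeddings.
    import numpy as np

    P, T, J, C = 3, 30, 15, 3                   # people, frames, joints, xyz
    motion = np.random.rand(P, T, J, C)

    # One token per (person, joint): its full trajectory, flattened over time.
    tokens = motion.transpose(0, 2, 1, 3).reshape(P * J, T * C)
    joint_type_id = np.tile(np.arange(J), P)    # indexes a joint-type embedding
    person_id = np.repeat(np.arange(P), J)      # indexes a person embedding
    # Attention then runs over all P*J joint tokens across people at once.
    print(tokens.shape, joint_type_id.shape, person_id.shape)  # (45, 90) (45,) (45,)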
2208.14039
Woon-Ha Yeo
Woon-Ha Yeo, Wang-Taek Oh, Kyung-Su Kang, Young-Il Kim, Han-Cheol Ryu
CAIR: Fast and Lightweight Multi-Scale Color Attention Network for Instagram Filter Removal
Accepted to ECCV Workshop 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image restoration is an important and challenging task in computer vision. Reverting a filtered image to its original image is helpful in various computer vision tasks. We employ a nonlinear activation function free network (NAFNet) for a fast and lightweight model and add a color attention module that extracts useful color information for better accuracy. We propose an accurate, fast, lightweight network with multi-scale and color attention for Instagram filter removal (CAIR). Experiment results show that the proposed CAIR outperforms existing Instagram filter removal networks in fast and lightweight ways, about 11$\times$ faster and 2.4$\times$ lighter while exceeding 3.69 dB PSNR on IFFI dataset. CAIR can successfully remove the Instagram filter with high quality and restore color information in qualitative results. The source code and pretrained weights are available at \url{https://github.com/HnV-Lab/CAIR}.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 07:42:45 GMT" } ]
2022-08-31T00:00:00
[ [ "Yeo", "Woon-Ha", "" ], [ "Oh", "Wang-Taek", "" ], [ "Kang", "Kyung-Su", "" ], [ "Kim", "Young-Il", "" ], [ "Ryu", "Han-Cheol", "" ] ]
new_dataset
0.977017
2208.14045
Luca Frittoli
Andrea Bionda, Luca Frittoli, Giacomo Boracchi
Deep Autoencoders for Anomaly Detection in Textured Images using CW-SSIM
International Conference on Image Analysis and Processing (ICIAP 2021). NVIDIA Prize winner
null
10.1007/978-3-031-06430-2_56
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Detecting anomalous regions in images is a frequently encountered problem in industrial monitoring. A relevant example is the analysis of tissues and other products that in normal conditions conform to a specific texture, while defects introduce changes in the normal pattern. We address the anomaly detection problem by training a deep autoencoder, and we show that adopting a loss function based on Complex Wavelet Structural Similarity (CW-SSIM) yields superior detection performance on this type of images compared to traditional autoencoder loss functions. Our experiments on well-known anomaly detection benchmarks show that a simple model trained with this loss function can achieve comparable or superior performance to state-of-the-art methods leveraging deeper, larger and more computationally demanding neural networks.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 08:01:25 GMT" } ]
2022-08-31T00:00:00
[ [ "Bionda", "Andrea", "" ], [ "Frittoli", "Luca", "" ], [ "Boracchi", "Giacomo", "" ] ]
new_dataset
0.980932
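The core proposal in the abstract above is swapping the autoencoder's usual reconstruction loss for 1 - CW-SSIM. The sketch below shows where that loss slots into a training step; `ssim_proxy` is a simple global SSIM used only as a runnable stand-in, since a faithful CW-SSIM needs a complex wavelet transform, and the architecture is a toy, not the paper's model.

    # Hedged sketch: autoencoder trained with a structural-similarity loss.
    import torch
    import torch.nn as nn

    def ssim_proxy(x, y, c1=1e-4, c2=9e-4):
        # Global SSIM over each image pair; a stand-in for CW-SSIM.
        mx, my = x.mean(dim=(1, 2, 3)), y.mean(dim=(1, 2, 3))
        vx, vy = x.var(dim=(1, 2, 3)), y.var(dim=(1, 2, 3))
        cov = ((x - mx[:, None, None, None]) *
               (y - my[:, None, None, None])).mean(dim=(1, 2, 3))
        return ((2 * mx * my + c1) * (2 * cov + c2)) / \
               ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

    class TextureAE(nn.Module):
        def __init__(self):
            super().__init__()
            self.enc = nn.Sequential(nn.Conv2d(1, 16, 3, 2, 1), nn.ReLU(),
                                     nn.Conv2d(16, 32, 3, 2, 1), nn.ReLU())
            self.dec = nn.Sequential(nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.ReLU(),
                                     nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Sigmoid())
        def forward(self, x):
            return self.dec(self.enc(x))

    def train_step(model, opt, batch):
        recon = model(batch)
        loss = (1.0 - ssim_proxy(recon, batch)).mean()  # also an anomaly score
        opt.zero_grad(); loss.backward(); opt.step()
        return loss.item()

    model = TextureAE()
    opt = torch.optim.Adam(model.parameters(), 1e-3)
    print(train_step(model, opt, torch.rand(8, 1, 64, 64)))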
2208.14052
JingYang Chen
Songbin Chen
Intelligent Perception System for Vehicle-Road Cooperation
7 pages, 7 figures
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the development of autonomous driving, the improvement of autonomous driving technology for individual vehicles has reached a bottleneck. The advancement of vehicle-road cooperative autonomous driving technology can expand the vehicle's perception range, cover perception blind spots, and improve perception accuracy, thereby promoting the development of autonomous driving technology and achieving vehicle-road integration. This project mainly uses lidar to develop data fusion schemes that realize the sharing and combination of vehicle and roadside equipment data and achieve the detection and tracking of dynamic targets. At the same time, we designed test scenarios for the vehicle-road cooperative system and used them to test our cooperative perception system, demonstrating the advantages of vehicle-road cooperative autonomous driving over single-vehicle autonomous driving.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 08:10:34 GMT" } ]
2022-08-31T00:00:00
[ [ "Chen", "Songbin", "" ] ]
new_dataset
0.99342
2208.14071
Luca Frittoli
Luca Frittoli, Diego Carrera, Beatrice Rossi, Pasqualina Fragneto, Giacomo Boracchi
Deep Open-Set Recognition for Silicon Wafer Production Monitoring
null
Pattern Recognition Volume 124, April 2022, 108488
10.1016/j.patcog.2021.108488
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
The chips contained in any electronic device are manufactured over circular silicon wafers, which are monitored by inspection machines at different production stages. Inspection machines detect and locate any defect within the wafer and return a Wafer Defect Map (WDM), i.e., a list of the coordinates where defects lie, which can be considered a huge, sparse, and binary image. In normal conditions, wafers exhibit a small number of randomly distributed defects, while defects grouped in specific patterns might indicate known or novel categories of failures in the production line. Needless to say, a primary concern of semiconductor industries is to identify these patterns and intervene as soon as possible to restore normal production conditions. Here we address WDM monitoring as an open-set recognition problem to accurately classify WDM in known categories and promptly detect novel patterns. In particular, we propose a comprehensive pipeline for wafer monitoring based on a Submanifold Sparse Convolutional Network, a deep architecture designed to process sparse data at an arbitrary resolution, which is trained on the known classes. To detect novelties, we define an outlier detector based on a Gaussian Mixture Model fitted on the latent representation of the classifier. Our experiments on a real dataset of WDMs show that directly processing full-resolution WDMs by Submanifold Sparse Convolutions yields superior classification performance on known classes than traditional Convolutional Neural Networks, which require a preliminary binning to reduce the size of the binary images representing WDMs. Moreover, our solution outperforms state-of-the-art open-set recognition solutions in detecting novelties.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 08:39:52 GMT" } ]
2022-08-31T00:00:00
[ [ "Frittoli", "Luca", "" ], [ "Carrera", "Diego", "" ], [ "Rossi", "Beatrice", "" ], [ "Fragneto", "Pasqualina", "" ], [ "Boracchi", "Giacomo", "" ] ]
new_dataset
0.997742
2208.14093
Li Yi
Yi Li, Wenjie Pei, Zhenyu He
SSORN: Self-Supervised Outlier Removal Network for Robust Homography Estimation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The traditional homography estimation pipeline consists of four main steps: feature detection, feature matching, outlier removal and transformation estimation. Recent deep learning models intend to address the homography estimation problem using a single convolutional network. While these models are trained in an end-to-end fashion to simplify the homography estimation problem, they lack the feature matching step and/or the outlier removal step, which are important steps in the traditional homography estimation pipeline. In this paper, we attempt to build a deep learning model that mimics all four steps in the traditional homography estimation pipeline. In particular, the feature matching step is implemented using the cost volume technique. To remove outliers in the cost volume, we treat this outlier removal problem as a denoising problem and propose a novel self-supervised loss to solve the problem. Extensive experiments on synthetic and real datasets demonstrate that the proposed model outperforms existing deep learning models.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 09:12:18 GMT" } ]
2022-08-31T00:00:00
[ [ "Li", "Yi", "" ], [ "Pei", "Wenjie", "" ], [ "He", "Zhenyu", "" ] ]
new_dataset
0.993693
2208.14139
Siyu Yuan
Siyu Yuan, Deqing Yang, Jiaqing Liang, Jilun Sun, Jingyue Huang, Kaiyan Cao, Yanghua Xiao, Rui Xie
Large-scale Multi-granular Concept Extraction Based on Machine Reading Comprehension
null
ISWC2021
10.1007/978-3-030-88361-4_6
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
The concepts in knowledge graphs (KGs) enable machines to understand natural language, and thus play an indispensable role in many applications. However, existing KGs have poor coverage of concepts, especially fine-grained concepts. In order to supply existing KGs with more fine-grained and new concepts, we propose a novel concept extraction framework, namely MRC-CE, to extract large-scale multi-granular concepts from the descriptive texts of entities. Specifically, MRC-CE is built with a machine reading comprehension model based on BERT, which can extract more fine-grained concepts with a pointer network. Furthermore, a random forest and rule-based pruning are also adopted to enhance MRC-CE's precision and recall simultaneously. Our experiments on multilingual KGs, i.e., English Probase and Chinese CN-DBpedia, justify MRC-CE's superiority over the state-of-the-art extraction models in KG completion. In particular, after running MRC-CE for each entity in CN-DBpedia, more than 7,053,900 new concepts (instanceOf relations) are supplied into the KG. The code and datasets have been released at https://github.com/fcihraeipnusnacwh/MRC-CE
[ { "version": "v1", "created": "Tue, 30 Aug 2022 10:46:32 GMT" } ]
2022-08-31T00:00:00
[ [ "Yuan", "Siyu", "" ], [ "Yang", "Deqing", "" ], [ "Liang", "Jiaqing", "" ], [ "Sun", "Jilun", "" ], [ "Huang", "Jingyue", "" ], [ "Cao", "Kaiyan", "" ], [ "Xiao", "Yanghua", "" ], [ "Xie", "Rui", "" ] ]
new_dataset
0.97282
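The extraction step in the MRC-CE abstract above rests on a reading-comprehension model applied to entity descriptions. A minimal sketch follows using a public SQuAD-style QA checkpoint; the model, prompt, and example are illustrative, and the paper's pointer network plus random-forest and rule-based pruning are omitted.

    # Hedged sketch: pull candidate concept spans from an entity description
    # with an extractive question-answering model.
    from transformers import pipeline

    qa = pipeline('question-answering', model='deepset/bert-base-cased-squad2')
    description = ('Leonhard Euler was a Swiss mathematician, physicist, '
                   'astronomer, geographer, logician and engineer.')
    candidates = qa(question='What is Leonhard Euler?', context=description,
                    top_k=3)              # keep several multi-granular spans
    for cand in candidates:
        print(cand['answer'], round(cand['score'], 3))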
2208.14149
Miguel Altamirano Cabrera
Miguel Altamirano Cabrera, Jonathan Tirado, Juan Heredia, and Dzmitry Tsetserukou
LinkGlide-S: A Wearable Multi-Contact Tactile Display Aimed at Rendering Object Softness at the Palm with Impedance Control in VR and Telemanipulation
Accepted paper in IEEE CASE (International Conference on Automation Science and Engineering) 2022, IEEE copyright
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
LinkGlide-S is a novel wearable hand-worn tactile display that delivers multi-contact and multi-modal stimuli at the user's palm. The array of inverted five-bar linkages generates three independent contact points to cover the whole palm area. The independent contact points generate various tactile patterns at the user's hand, providing multi-contact tactile feedback. Impedance control renders the stiffness of objects according to different parameters. Three experiments were performed to evaluate the perception of patterns, investigate the realistic perception of object interaction in Virtual Reality, and assess the users' softness perception under impedance control. The experimental results revealed a high recognition rate for the generated patterns and confirm that LinkGlide-S is adequate for detecting and manipulating virtual objects with different stiffness. This novel haptic device can potentially enable highly immersive VR experiences and more interactive telemanipulation applications.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 11:09:00 GMT" } ]
2022-08-31T00:00:00
[ [ "Cabrera", "Miguel Altamirano", "" ], [ "Tirado", "Jonathan", "" ], [ "Heredia", "Juan", "" ], [ "Tsetserukou", "Dzmitry", "" ] ]
new_dataset
0.999022
2208.14167
Fabian Herzog
Fabian Herzog, Junpeng Chen, Torben Teepe, Johannes Gilg, Stefan H\"ormann, Gerhard Rigoll
Synthehicle: Multi-Vehicle Multi-Camera Tracking in Virtual Cities
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Smart City applications such as intelligent traffic routing or accident prevention rely on computer vision methods for exact vehicle localization and tracking. Due to the scarcity of accurately labeled data, detecting and tracking vehicles in 3D from multiple cameras proves challenging to explore. We present a massive synthetic dataset for multiple vehicle tracking and segmentation in multiple overlapping and non-overlapping camera views. Unlike existing datasets, which only provide tracking ground truth for 2D bounding boxes, our dataset additionally contains perfect labels for 3D bounding boxes in camera- and world coordinates, depth estimation, and instance, semantic and panoptic segmentation. The dataset consists of 17 hours of labeled video material, recorded from 340 cameras in 64 diverse day, rain, dawn, and night scenes, making it the most extensive dataset for multi-target multi-camera tracking so far. We provide baselines for detection, vehicle re-identification, and single- and multi-camera tracking. Code and data are publicly available.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 11:36:07 GMT" } ]
2022-08-31T00:00:00
[ [ "Herzog", "Fabian", "" ], [ "Chen", "Junpeng", "" ], [ "Teepe", "Torben", "" ], [ "Gilg", "Johannes", "" ], [ "Hörmann", "Stefan", "" ], [ "Rigoll", "Gerhard", "" ] ]
new_dataset
0.999838
2208.14191
Lichen Jia
Lichen Jia, Bowen Tang, Chenggang Wu, Zhe Wang, Zihan Jiang, Yuanming Lai, Yan Kang, Ning Liu, Jingfeng Zhang
FuncFooler: A Practical Black-box Attack Against Learning-based Binary Code Similarity Detection Methods
9 pages, 4 figures
null
null
null
cs.CR cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The binary code similarity detection (BCSD) method measures the similarity of two binary executables. Recently, learning-based BCSD methods have achieved great success, outperforming traditional BCSD in detection accuracy and efficiency. However, existing studies are rather sparse on the adversarial vulnerability of learning-based BCSD methods, which causes hazards in security-related applications. To evaluate adversarial robustness, this paper designs an efficient and black-box adversarial code generation algorithm, namely FuncFooler. FuncFooler constrains the adversarial code 1) to keep the program's control flow graph (CFG) unchanged, and 2) to preserve the same semantic meaning. Specifically, FuncFooler consecutively 1) determines vulnerable candidates in the malicious code, 2) chooses and inserts adversarial instructions from benign code, and 3) corrects the semantic side effects of the adversarial code to meet the constraints. Empirically, FuncFooler can successfully attack three learning-based BCSD models, including SAFE, Asm2Vec, and jTrans, calling into question whether learning-based BCSD is desirable.
[ { "version": "v1", "created": "Fri, 26 Aug 2022 01:58:26 GMT" } ]
2022-08-31T00:00:00
[ [ "Jia", "Lichen", "" ], [ "Tang", "Bowen", "" ], [ "Wu", "Chenggang", "" ], [ "Wang", "Zhe", "" ], [ "Jiang", "Zihan", "" ], [ "Lai", "Yuanming", "" ], [ "Kang", "Yan", "" ], [ "Liu", "Ning", "" ], [ "Zhang", "Jingfeng", "" ] ]
new_dataset
0.989189
2208.14209
Weixin Luo
Shuqiang Cao, Weixin Luo, Bairui Wang, Wei Zhang, Lin Ma
A Circular Window-based Cascade Transformer for Online Action Detection
Submitted to TPAMI
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online action detection aims at accurate action prediction for the current frame based on long historical observations. Meanwhile, it demands real-time inference on online streaming videos. In this paper, we advocate a novel and efficient principle for online action detection. It merely updates the latest and oldest historical representations in one window and reuses the intermediate ones, which have already been computed. Based on this principle, we introduce a window-based cascade Transformer with a circular historical queue, which conducts multi-stage attention and cascade refinement on each window. We also explore the association between online action detection and its offline counterpart, action segmentation, as an auxiliary task. We find that such extra supervision helps discriminative history clustering and acts as feature augmentation for better training of the classifier and cascade refinement. Our proposed method achieves state-of-the-art performance on three challenging datasets: THUMOS'14, TVSeries, and HDD. Code will be available after acceptance.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 12:37:23 GMT" } ]
2022-08-31T00:00:00
[ [ "Cao", "Shuqiang", "" ], [ "Luo", "Weixin", "" ], [ "Wang", "Bairui", "" ], [ "Zhang", "Wei", "" ], [ "Ma", "Lin", "" ] ]
new_dataset
0.960143
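The window-update principle in the abstract above (recompute only the newest slot, evict the oldest, reuse everything in between) lends itself to a ring-buffer illustration. The sketch below is not the authors' code; the names CircularHistory, push, and window_in_order are hypothetical, and real per-frame features would come from a video backbone rather than arbitrary vectors.

```python
import numpy as np

class CircularHistory:
    """Ring buffer of per-frame features: each new frame overwrites only the
    oldest slot, so intermediate representations are reused as-is."""

    def __init__(self, window: int, dim: int):
        self.buf = np.zeros((window, dim), dtype=np.float32)
        self.head = 0      # index of the next slot to write (the oldest one)
        self.filled = 0

    def push(self, feat: np.ndarray) -> None:
        self.buf[self.head] = feat                  # overwrite oldest with newest
        self.head = (self.head + 1) % len(self.buf)
        self.filled = min(self.filled + 1, len(self.buf))

    def window_in_order(self) -> np.ndarray:
        # Oldest -> newest view, e.g. as input to attention stages.
        idx = (np.arange(len(self.buf)) + self.head) % len(self.buf)
        return self.buf[idx][-self.filled:] if self.filled else self.buf[:0]

# Usage: push one feature per incoming frame; only one slot changes per step.
hist = CircularHistory(window=64, dim=256)
hist.push(np.random.rand(256).astype(np.float32))
print(hist.window_in_order().shape)  # (1, 256), growing to (64, 256)
```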
2208.14225
Tawfiq Aljohani
Tawfiq M. Aljohani
Cyberattacks on Energy Infrastructures: Modern War Weapons
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Recent high-profile cyberattacks on energy infrastructures, such as the security breach of the Colonial Pipeline in 2021 and the attacks that have disrupted Ukraine's power grid from the mid-2010s to date, have made cybersecurity a top priority. As political tensions have escalated in Europe this year, concerns about critical infrastructure security have increased. Operators in the industrial sector face new cybersecurity threats that increase the risk of disruptions in services, property damage, and environmental harm. Amid rising geopolitical tensions, industrial companies, with their network-connected systems, are now considered major targets for adversaries to advance political, social, or military agendas. Moreover, the recent Russian-Ukrainian conflict has set off alarms worldwide about the danger of targeting energy grids via cyberattacks. Attack methodologies, techniques, and procedures used successfully to hack energy grids in Ukraine can be used elsewhere. This work aims to present a thorough analysis of the cybersecurity of the energy infrastructure amid the rise of cyberwars. The article navigates through the recent history of energy-related cyberattacks and their reasoning, discusses the grid's vulnerability, and makes a precautionary argument for securing the grids against them.
[ { "version": "v1", "created": "Sun, 28 Aug 2022 05:19:48 GMT" } ]
2022-08-31T00:00:00
[ [ "Aljohani", "Tawfiq M.", "" ] ]
new_dataset
0.992989
2208.14303
Sima Mashafi
Farzad Vatandoust, Hoseyn A. Amiri, Sima Mas-hafi
DLDNN: Deterministic Lateral Displacement Design Automation by Neural Networks
13 pages, 7 figures
null
null
null
cs.NE cs.AI math.OC physics.flu-dyn
http://creativecommons.org/licenses/by/4.0/
Size-based separation of bioparticles/cells is crucial to a variety of biomedical processing steps for applications such as exosome and DNA isolation. Designing and improving such microfluidic devices remains a challenge in meeting the demand for homogeneous end results for study and use. Deterministic lateral displacement (DLD) exploits a similar principle and has drawn extensive attention over the years. However, the lack of a predictive understanding of the particle trajectory and its induced mode makes designing a DLD device an iterative procedure. Therefore, this paper investigates a fast, versatile design automation platform to address this issue. To do so, convolutional and artificial neural networks were employed to learn the velocity fields and critical diameters of a wide range of DLD configurations. These networks were then combined with a multi-objective evolutionary algorithm to construct the automation tool. After ensuring the accuracy of the neural networks, the developed tool was tested for 12 critical conditions. The automation components reached the imposed conditions reliably, with errors of less than 4%. Moreover, this tool is generalizable to other field-based problems, and since the neural network is an integral part of this method, it enables transfer learning for similar physics. All the codes generated and used in this study alongside the pre-trained neural network models are available on https://github.com/HoseynAAmiri/DLDNN.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 14:38:17 GMT" } ]
2022-08-31T00:00:00
[ [ "Vatandoust", "Farzad", "" ], [ "Amiri", "Hoseyn A.", "" ], [ "Mas-hafi", "Sima", "" ] ]
new_dataset
0.990098
2208.14345
Peiling Lu
Peiling Lu, Xu Tan, Botao Yu, Tao Qin, Sheng Zhao, Tie-Yan Liu
MeloForm: Generating Melody with Musical Form based on Expert Systems and Neural Networks
null
null
null
null
cs.SD cs.CL cs.LG cs.MM eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans usually compose music by organizing elements according to musical form to express musical ideas. However, for neural network-based music generation, it is difficult to do so due to the lack of labelled data on musical form. In this paper, we develop MeloForm, a system that generates melody with musical form using expert systems and neural networks. Specifically, 1) we design an expert system to generate a melody by developing musical elements from motifs to phrases and then to sections, with repetitions and variations according to a pre-given musical form; 2) since the generated melody lacks musical richness, we design a Transformer-based refinement model to improve the melody without changing its musical form. MeloForm enjoys the advantages of precise musical form control via expert systems and musical richness learning via neural models. Both subjective and objective experimental evaluations demonstrate that MeloForm generates melodies with precise musical form control with 97.79% accuracy, and outperforms baseline systems in terms of subjective evaluation score by 0.75, 0.50, 0.86 and 0.89 in structure, theme, richness and overall quality, without any labelled musical form data. Besides, MeloForm can support various kinds of forms, such as verse and chorus form, rondo form, variational form, sonata form, etc.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 15:44:15 GMT" } ]
2022-08-31T00:00:00
[ [ "Lu", "Peiling", "" ], [ "Tan", "Xu", "" ], [ "Yu", "Botao", "" ], [ "Qin", "Tao", "" ], [ "Zhao", "Sheng", "" ], [ "Liu", "Tie-Yan", "" ] ]
new_dataset
0.999755
2208.14362
Nicholas Roberts
Nicholas Roberts, Xintong Li, Tzu-Heng Huang, Dyah Adila, Spencer Schoenberg, Cheng-Yu Liu, Lauren Pick, Haotian Ma, Aws Albarghouthi, Frederic Sala
AutoWS-Bench-101: Benchmarking Automated Weak Supervision with 100 Labels
null
null
null
null
cs.LG cs.AI cs.CV stat.ML
http://creativecommons.org/licenses/by/4.0/
Weak supervision (WS) is a powerful method to build labeled datasets for training supervised models in the face of little-to-no labeled data. It replaces hand-labeling data with aggregating multiple noisy-but-cheap label estimates expressed by labeling functions (LFs). While it has been used successfully in many domains, weak supervision's application scope is limited by the difficulty of constructing labeling functions for domains with complex or high-dimensional features. To address this, a handful of methods have proposed automating the LF design process using a small set of ground truth labels. In this work, we introduce AutoWS-Bench-101: a framework for evaluating automated WS (AutoWS) techniques in challenging WS settings -- a set of diverse application domains on which it has been previously difficult or impossible to apply traditional WS techniques. While AutoWS is a promising direction toward expanding the application-scope of WS, the emergence of powerful methods such as zero-shot foundation models reveals the need to understand how AutoWS techniques compare or cooperate with modern zero-shot or few-shot learners. This informs the central question of AutoWS-Bench-101: given an initial set of 100 labels for each task, we ask whether a practitioner should use an AutoWS method to generate additional labels or use some simpler baseline, such as zero-shot predictions from a foundation model or supervised learning. We observe that in many settings, it is necessary for AutoWS methods to incorporate signal from foundation models if they are to outperform simple few-shot baselines, and AutoWS-Bench-101 promotes future research in this direction. We conclude with a thorough ablation study of AutoWS methods.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 16:09:42 GMT" } ]
2022-08-31T00:00:00
[ [ "Roberts", "Nicholas", "" ], [ "Li", "Xintong", "" ], [ "Huang", "Tzu-Heng", "" ], [ "Adila", "Dyah", "" ], [ "Schoenberg", "Spencer", "" ], [ "Liu", "Cheng-Yu", "" ], [ "Pick", "Lauren", "" ], [ "Ma", "Haotian", "" ], [ "Albarghouthi", "Aws", "" ], [ "Sala", "Frederic", "" ] ]
new_dataset
0.997285
2208.14403
Ayoosh Bansal
Ayoosh Bansal, Hunmin Kim, Simon Yu, Bo Li, Naira Hovakimyan, Marco Caccamo and Lui Sha
Verifiable Obstacle Detection
Accepted at ISSRE 2022
null
null
null
cs.RO cs.CV cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Perception of obstacles remains a critical safety concern for autonomous vehicles. Real-world collisions have shown that the autonomy faults leading to fatal collisions originate in obstacle existence detection. Open-source autonomous driving implementations show a perception pipeline with complex interdependent deep neural networks. These networks are not fully verifiable, making them unsuitable for safety-critical tasks. In this work, we present a safety verification of an existing LiDAR-based classical obstacle detection algorithm. We establish strict bounds on the capabilities of this obstacle detection algorithm. Given safety standards, such bounds allow for determining LiDAR sensor properties that would reliably satisfy the standards. Such analysis has so far been unattainable for neural-network-based perception systems. We provide a rigorous analysis of the obstacle detection system with empirical results based on real-world sensor data.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 17:15:35 GMT" } ]
2022-08-31T00:00:00
[ [ "Bansal", "Ayoosh", "" ], [ "Kim", "Hunmin", "" ], [ "Yu", "Simon", "" ], [ "Li", "Bo", "" ], [ "Hovakimyan", "Naira", "" ], [ "Caccamo", "Marco", "" ], [ "Sha", "Lui", "" ] ]
new_dataset
0.961766
2208.14433
Tianjia Zhang
Tianjia Zhang, Yuen-Fui Lau, and Qifeng Chen
A Portable Multiscopic Camera for Novel View and Time Synthesis in Dynamic Scenes
To be presented at IROS2022
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a portable multiscopic camera system with a dedicated model for novel view and time synthesis in dynamic scenes. Our goal is to render high-quality images for a dynamic scene from any viewpoint at any time using our portable multiscopic camera. To achieve such novel view and time synthesis, we develop a physical multiscopic camera equipped with five cameras to train a neural radiance field (NeRF) in both time and spatial domains for dynamic scenes. Our model maps a 6D coordinate (3D spatial position, 1D temporal coordinate, and 2D viewing direction) to view-dependent and time-varying emitted radiance and volume density. Volume rendering is applied to render a photo-realistic image at a specified camera pose and time. To improve the robustness of our physical camera, we propose a camera parameter optimization module and a temporal frame interpolation module to promote information propagation across time. We conduct experiments on both real-world and synthetic datasets to evaluate our system, and the results show that our approach outperforms alternative solutions qualitatively and quantitatively. Our code and dataset are available at https://yuenfuilau.github.io.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 17:53:17 GMT" } ]
2022-08-31T00:00:00
[ [ "Zhang", "Tianjia", "" ], [ "Lau", "Yuen-Fui", "" ], [ "Chen", "Qifeng", "" ] ]
new_dataset
0.999428
2208.14441
Krzysztof Sornat
Matthias K\"oppe, Martin Kouteck\'y, Krzysztof Sornat, Nimrod Talmon
Fine-Grained Liquid Democracy for Cumulative Ballots
15 pages, 1 table
null
null
null
cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate efficient ways for the incorporation of liquid democracy into election settings in which voters submit cumulative ballots, i.e., when each voter is assigned a virtual coin that she can then distribute as she wishes among the available election options. In particular, we are interested in fine-grained liquid democracy, meaning that voters are able to designate a partial coin to a set of election options and delegate the decision on how to further split this partial coin among those election options to another voter of her choice. The fact that we wish such delegations to be transitive -- combined with our aim of fully respecting such delegations -- means that inconsistencies and cycles can occur, thus we set out to find computationally efficient ways of resolving voter delegations. To this aim we develop a theory based on fixed-point theorems and mathematical programming techniques, and we show that for various variants of definitions regarding how to resolve such transitive delegations, there is always a feasible resolution; we also identify under which conditions such solutions are efficiently computable.
[ { "version": "v1", "created": "Tue, 30 Aug 2022 17:58:08 GMT" } ]
2022-08-31T00:00:00
[ [ "Köppe", "Matthias", "" ], [ "Koutecký", "Martin", "" ], [ "Sornat", "Krzysztof", "" ], [ "Talmon", "Nimrod", "" ] ]
new_dataset
0.994148
1601.05218
Yonatan Yehezkeally
Yonatan Yehezkeally and Moshe Schwartz
Limited-Magnitude Error-Correcting Gray Codes for Rank Modulation
Revised version for journal submission. Additional results include tighter auxiliary constructions, a decoding schema, ranking/unranking procedures, and an application to snake-in-the-box codes under the Kendall tau-metric
null
10.1109/TIT.2017.2719710
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We construct Gray codes over permutations for the rank-modulation scheme, which are also capable of correcting errors under the infinity-metric. These errors model limited-magnitude or spike errors, for which only single-error-detecting Gray codes are currently known. Surprisingly, the error-correcting codes we construct achieve a better asymptotic rate than that of presently known constructions not having the Gray property, and exceed the Gilbert-Varshamov bound. Additionally, we present efficient ranking and unranking procedures, as well as a decoding procedure that runs in linear time. Finally, we also apply our methods to solve an outstanding issue with error-detecting rank-modulation Gray codes (snake-in-the-box codes) under a different metric, the Kendall $\tau$-metric, in the group of permutations over an even number of elements $S_{2n}$, where we provide asymptotically optimal codes.
[ { "version": "v1", "created": "Wed, 20 Jan 2016 09:46:02 GMT" }, { "version": "v2", "created": "Mon, 25 Jan 2016 07:57:55 GMT" }, { "version": "v3", "created": "Sun, 19 Jun 2016 17:56:06 GMT" } ]
2022-08-30T00:00:00
[ [ "Yehezkeally", "Yonatan", "" ], [ "Schwartz", "Moshe", "" ] ]
new_dataset
0.992344
1911.04788
Carlo Tiseo
Keyhan Kouhkiloui Babarahmati, Carlo Tiseo, Joshua Smith, Hsiu Chin Lin, Mustafa Suphi Erden and Michael Mistry
Fractal Impedance for Passive Controllers: A Framework for Interaction Robotics
Nonlinear Dyn (2022). Video Available at https://youtu.be/Ny8zNyPS8AM
null
10.1007/s11071-022-07754-3
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is increasing interest in control frameworks capable of moving robots from industrial cages to unstructured environments and coexisting with humans. Despite significant improvement in some specific applications (e.g., medical robotics), there is still the need for a general control framework that improves interaction robustness and motion dynamics. Passive controllers show promising results in this direction; however, they often rely on virtual energy tanks that can guarantee passivity as long as they do not run out of energy. In this paper, a Fractal Attractor is proposed to implement a variable impedance controller that can retain passivity without relying on energy tanks. The controller generates a Fractal Attractor around the desired state using an asymptotically stable potential field, making the controller robust to discretization and numerical integration errors. The results prove that it can accurately track both trajectories and end-effector forces during interaction. Therefore, these properties make the controller ideal for applications requiring robust dynamic interaction at the end-effector.
[ { "version": "v1", "created": "Tue, 12 Nov 2019 10:54:20 GMT" }, { "version": "v2", "created": "Tue, 1 Dec 2020 11:42:59 GMT" }, { "version": "v3", "created": "Fri, 28 May 2021 18:36:18 GMT" }, { "version": "v4", "created": "Wed, 27 Jul 2022 17:09:41 GMT" } ]
2022-08-30T00:00:00
[ [ "Babarahmati", "Keyhan Kouhkiloui", "" ], [ "Tiseo", "Carlo", "" ], [ "Smith", "Joshua", "" ], [ "Lin", "Hsiu Chin", "" ], [ "Erden", "Mustafa Suphi", "" ], [ "Mistry", "Michael", "" ] ]
new_dataset
0.989303
2107.01717
Canze Zhu
Canze Zhu and Qunying Liao
The $b$-weight distribution for MDS codes
null
null
null
null
cs.IT math.CO math.IT
http://creativecommons.org/licenses/by-sa/4.0/
For a positive integer $b\ge2$, the $b$-symbol code is a new coding framework proposed to combat $b$-errors in $b$-symbol read channels. In particular, the $2$-symbol code is called a symbol-pair code. Remarkably, a classical maximum distance separable (MDS) code is also an MDS $b$-symbol code. Recently, for any MDS code $\mathcal{C}$, Ma and Luo determined the symbol-pair weight distribution of $\mathcal{C}$. In this paper, by calculating the number of solutions of some equations and utilizing shortened codes of $\mathcal{C}$, we give the connection between the $b$-weight distribution and the number of codewords of special shape in shortened codes of $\mathcal{C}$. Furthermore, noting that shortened codes of $\mathcal{C}$ are also MDS codes, the number of such codewords of special shape is also determined by the shortening method. From the above calculation, the $b$-weight distribution of $\mathcal{C}$ is determined. Our result generalizes the corresponding result of Ma and Luo.
[ { "version": "v1", "created": "Sun, 4 Jul 2021 19:47:32 GMT" }, { "version": "v2", "created": "Fri, 10 Dec 2021 22:42:33 GMT" }, { "version": "v3", "created": "Fri, 13 May 2022 23:41:50 GMT" }, { "version": "v4", "created": "Sat, 27 Aug 2022 03:07:48 GMT" } ]
2022-08-30T00:00:00
[ [ "Zhu", "Canze", "" ], [ "Liao", "Qunying", "" ] ]
new_dataset
0.999353
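For context, the $b$-symbol weight referred to in the abstract above has a standard textbook definition (background only, not one of the paper's results):

```latex
% Standard b-symbol weight/distance (background, not the paper's contribution).
% For x = (x_0, ..., x_{n-1}) in F_q^n, with indices taken modulo n:
\[
  w_b(x) \;=\; \bigl|\{\, 0 \le i \le n-1 \;:\;
      (x_i, x_{i+1}, \dots, x_{i+b-1}) \ne (0,\dots,0) \,\}\bigr|,
  \qquad
  d_b(x, y) \;=\; w_b(x - y).
\]
% For b = 2 this recovers the symbol-pair weight; the b-weight distribution
% of a code counts how many codewords attain each possible b-weight.
```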
2108.07920
Qian Zhang
Qian Zhang, Qing Guo, Ruijun Gao, Felix Juefei-Xu, Hongkai Yu, Wei Feng
Adversarial Relighting Against Face Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep face recognition (FR) has achieved significantly high accuracy on several challenging datasets and fosters successful real-world applications, even showing high robustness to the illumination variation that is usually regarded as a main threat to the FR system. However, in the real world, illumination variation caused by diverse lighting conditions cannot be fully covered by the limited face dataset. In this paper, we study the threat of lighting against FR from a new angle, i.e., adversarial attack, and identify a new task, i.e., adversarial relighting. Given a face image, adversarial relighting aims to produce a naturally relighted counterpart while fooling the state-of-the-art deep FR methods. To this end, we first propose the physical model-based adversarial relighting attack (ARA), denoted as the albedo-quotient-based adversarial relighting attack (AQ-ARA). It generates natural adversarial light under the physical lighting model and the guidance of FR systems, and synthesizes adversarially relighted face images. Moreover, we propose the auto-predictive adversarial relighting attack (AP-ARA) by training an adversarial relighting network (ARNet) to automatically predict the adversarial light in a one-step manner according to different input faces, allowing efficiency-sensitive applications. More importantly, we propose to transfer the above digital attacks to physical ARA (PhyARA) through a precise relighting device, making the estimated adversarial lighting condition reproducible in the real world. We validate our methods on three state-of-the-art deep FR methods, i.e., FaceNet, ArcFace, and CosFace, on two public datasets. The extensive and insightful results demonstrate that our work can generate realistic adversarially relighted face images that easily fool face recognition, revealing the threat of specific light directions and strengths.
[ { "version": "v1", "created": "Wed, 18 Aug 2021 01:05:53 GMT" }, { "version": "v2", "created": "Wed, 1 Sep 2021 04:09:51 GMT" }, { "version": "v3", "created": "Tue, 16 Aug 2022 15:46:31 GMT" }, { "version": "v4", "created": "Sat, 27 Aug 2022 02:39:18 GMT" } ]
2022-08-30T00:00:00
[ [ "Zhang", "Qian", "" ], [ "Guo", "Qing", "" ], [ "Gao", "Ruijun", "" ], [ "Juefei-Xu", "Felix", "" ], [ "Yu", "Hongkai", "" ], [ "Feng", "Wei", "" ] ]
new_dataset
0.998071
2108.12790
Zhaoxin Fan
Zhaoxin Fan, Zhenbo Song, Wenping Zhang, Hongyan Liu, Jun He, and Xiaoyong Du
RPR-Net: A Point Cloud-based Rotation-aware Large Scale Place Recognition Network
Accept to ECCV 2022 AVVision Workshop
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Point cloud-based large scale place recognition is an important but challenging task for many applications such as Simultaneous Localization and Mapping (SLAM). Taking the task as a point cloud retrieval problem, previous methods have made remarkable achievements. However, how to deal with the catastrophic collapse caused by rotation problems is still under-explored. In this paper, to tackle the issue, we propose a novel Point Cloud-based Rotation-aware Large Scale Place Recognition Network (RPR-Net). In particular, to solve the problem, we propose to learn rotation-invariant features in three steps. First, we design three kinds of novel Rotation-Invariant Features (RIFs), which are low-level features that hold the rotation-invariance property. Second, using these RIFs, we design an attentive module to learn rotation-invariant kernels. Third, we apply these kernels to previous point cloud features to generate new features, which is the well-known SO(3) mapping process. By doing so, high-level scene-specific rotation-invariant features can be learned. We call the above process an Attentive Rotation-Invariant Convolution (ARIConv). To achieve the place recognition goal, we build RPR-Net, which takes ARIConv as a basic unit to construct a dense network architecture. Then, powerful global descriptors used for retrieval-based place recognition can be sufficiently extracted from RPR-Net. Experimental results on prevalent datasets show that our method achieves comparable results to existing state-of-the-art place recognition models and significantly outperforms other rotation-invariant baseline models when solving rotation problems.
[ { "version": "v1", "created": "Sun, 29 Aug 2021 09:10:56 GMT" }, { "version": "v2", "created": "Tue, 8 Mar 2022 14:23:55 GMT" }, { "version": "v3", "created": "Sun, 28 Aug 2022 04:07:03 GMT" } ]
2022-08-30T00:00:00
[ [ "Fan", "Zhaoxin", "" ], [ "Song", "Zhenbo", "" ], [ "Zhang", "Wenping", "" ], [ "Liu", "Hongyan", "" ], [ "He", "Jun", "" ], [ "Du", "Xiaoyong", "" ] ]
new_dataset
0.997775
2202.08471
Hongjie Fang
Hongjie Fang, Hao-Shu Fang, Sheng Xu and Cewu Lu
TransCG: A Large-Scale Real-World Dataset for Transparent Object Depth Completion and a Grasping Baseline
project page: www.graspnet.net/transcg
IEEE Robotics and Automation Letters 7.3 (2022)
10.1109/LRA.2022.3183256
null
cs.RO cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Transparent objects are common in our daily life and frequently handled in the automated production line. Robust vision-based robotic grasping and manipulation of these objects would be beneficial for automation. However, the majority of current grasping algorithms would fail in this case since they heavily rely on the depth image, while ordinary depth sensors usually fail to produce accurate depth information for transparent objects owing to the reflection and refraction of light. In this work, we address this issue by contributing a large-scale real-world dataset for transparent object depth completion, which contains 57,715 RGB-D images from 130 different scenes. Our dataset is the first large-scale, real-world dataset that provides ground truth depth, surface normals, and transparency masks in diverse and cluttered scenes. Cross-domain experiments show that our dataset is more general and can enable better generalization ability for models. Moreover, we propose an end-to-end depth completion network, which takes the RGB image and the inaccurate depth map as inputs and outputs a refined depth map. Experiments demonstrate the superior efficacy, efficiency and robustness of our method over previous works, and it is able to process high-resolution images under limited hardware resources. Real robot experiments show that our method can also be applied to novel transparent object grasping robustly. The full dataset and our method are publicly available at www.graspnet.net/transcg
[ { "version": "v1", "created": "Thu, 17 Feb 2022 06:50:20 GMT" }, { "version": "v2", "created": "Sun, 28 Aug 2022 03:38:12 GMT" } ]
2022-08-30T00:00:00
[ [ "Fang", "Hongjie", "" ], [ "Fang", "Hao-Shu", "" ], [ "Xu", "Sheng", "" ], [ "Lu", "Cewu", "" ] ]
new_dataset
0.99861
2203.06243
Xinyi Zhang
Yalin Li, Xinyi Zhang, Victoria L. Morgan, Hannah A.C. Lohman, Lewis S. Rowles, Smiti Mittal, Anna Kogler, Roland D. Cusick, William A. Tarpeh, Jeremy S. Guest
QSDsan: An Integrated Platform for Quantitative Sustainable Design of Sanitation and Resource Recovery Systems
null
null
10.1039/D2EW00455K
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Sustainable sanitation and resource recovery technologies are needed to address rapid environmental and socioeconomic changes. Research prioritization is critical to expedite the development and deployment of such technologies across their vast system space (e.g., technology choices, design and operating decisions). In this study, we introduce QSDsan - an open-source tool written in Python (under the object-oriented programming paradigm) and developed for the quantitative sustainable design (QSD) of sanitation and resource recovery systems. As an integrated platform for system design, process modeling and simulation, techno-economic analysis (TEA), and life cycle assessment (LCA), QSDsan can be used to enumerate and investigate the opportunity space for emerging technologies under uncertainty, while considering contextual parameters that are critical to technology deployment. We illustrate the core capabilities of QSDsan through two distinct examples: (i) evaluation of a complete sanitation value chain that compares three alternative systems; and (ii) dynamic simulation of the wastewater treatment plant described in the benchmark simulation model no. 1 (BSM1). Through these examples, we show the utility of QSDsan to automate design, enable flexible process modeling, achieve rapid and reproducible simulations, and to perform advanced statistical analyses with integrated visualization. We strive to make QSDsan a community-led platform with online documentation, tutorials (explanatory notes, executable scripts, and video demonstrations), and a growing ecosystem of supporting packages (e.g., DMsan for decision-making). This platform can be freely accessed, used, and expanded by researchers, practitioners, and the public alike, ultimately contributing to the advancement of safe and affordable sanitation technologies around the globe.
[ { "version": "v1", "created": "Mon, 7 Mar 2022 18:42:15 GMT" } ]
2022-08-30T00:00:00
[ [ "Li", "Yalin", "" ], [ "Zhang", "Xinyi", "" ], [ "Morgan", "Victoria L.", "" ], [ "Lohman", "Hannah A. C.", "" ], [ "Rowles", "Lewis S.", "" ], [ "Mittal", "Smiti", "" ], [ "Kogler", "Anna", "" ], [ "Cusick", "Roland D.", "" ], [ "Tarpeh", "William A.", "" ], [ "Guest", "Jeremy S.", "" ] ]
new_dataset
0.97903
2203.06357
Ling Ren
Dongning Guo and Ling Ren
Bitcoin's Latency--Security Analysis Made Simple
null
null
null
null
cs.CR cs.DC
http://creativecommons.org/licenses/by/4.0/
Simple closed-form upper and lower bounds are developed for the security of the Nakamoto consensus as a function of the confirmation depth, the honest and adversarial block mining rates, and an upper bound on the block propagation delay. The bounds are exponential in the confirmation depth and apply regardless of the adversary's attack strategy. The gap between the upper and lower bounds is small for Bitcoin's parameters. For example, assuming an average block interval of 10 minutes, a network delay bound of ten seconds, and 10% adversarial mining power, the widely used 6-block confirmation rule yields a safety violation between 0.11% and 0.35% probability.
[ { "version": "v1", "created": "Sat, 12 Mar 2022 06:36:56 GMT" }, { "version": "v2", "created": "Fri, 13 May 2022 04:57:37 GMT" }, { "version": "v3", "created": "Sat, 27 Aug 2022 03:31:44 GMT" } ]
2022-08-30T00:00:00
[ [ "Guo", "Dongning", "" ], [ "Ren", "Ling", "" ] ]
new_dataset
0.999298
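The exponential decay in confirmation depth described in the abstract above can be illustrated with the classical delay-free race calculation from the original Bitcoin whitepaper. Note that this is a simpler quantity than the paper's delay-aware closed-form bounds (it assumes zero propagation delay, so it comes out smaller than the 0.11%-0.35% range quoted); the sketch is only an illustration of how such probabilities shrink with depth.

```python
from math import exp, factorial

def catch_up_probability(q: float, z: float) -> float:
    """Classical (delay-free) Nakamoto race: probability that an attacker
    with mining-power fraction q ever overtakes a chain that is z blocks
    ahead. This is the whitepaper calculation, not the delay-aware bounds
    of the paper above."""
    p = 1.0 - q
    lam = z * q / p                 # expected attacker blocks mined meanwhile
    total = 0.0
    for k in range(int(z) + 1):
        poisson = lam ** k * exp(-lam) / factorial(k)
        total += poisson * (1.0 - (q / p) ** (z - k))
    return 1.0 - total

# With 10% adversarial power and 6 confirmations the delay-free risk is
# ~2.4e-4, below the paper's delay-aware range, as expected.
print(catch_up_probability(0.10, 6))
```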
2203.07825
Shidi Li
Shidi Li, Christian Walder, Miaomiao Liu
SPA-VAE: Similar-Parts-Assignment for Unsupervised 3D Point Cloud Generation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper addresses the problem of unsupervised parts-aware point cloud generation with learned parts-based self-similarity. Our SPA-VAE infers a set of latent canonical candidate shapes for any given object, along with a set of rigid body transformations for each such candidate shape to one or more locations within the assembled object. In this way, noisy samples on the surface of, say, each leg of a table, are effectively combined to estimate a single leg prototype. When parts-based self-similarity exists in the raw data, sharing data among parts in this way confers numerous advantages: modeling accuracy, appropriately self-similar generative outputs, precise in-filling of occlusions, and model parsimony. SPA-VAE is trained end-to-end using a variational Bayesian approach which uses the Gumbel-softmax trick for the shared part assignments, along with various novel losses to provide appropriate inductive biases. Quantitative and qualitative analyses on ShapeNet demonstrate the advantage of SPA-VAE.
[ { "version": "v1", "created": "Tue, 15 Mar 2022 12:26:32 GMT" }, { "version": "v2", "created": "Mon, 29 Aug 2022 01:04:23 GMT" } ]
2022-08-30T00:00:00
[ [ "Li", "Shidi", "" ], [ "Walder", "Christian", "" ], [ "Liu", "Miaomiao", "" ] ]
new_dataset
0.994521
2204.06988
Sahraoui Dhelim Dr
Sahraoui Dhelim, Nyothiri Aung, Tahar Kechadi, Huansheng Ning, Liming Chen and Abderrahmane Lakas
Trust2Vec: Large-Scale IoT Trust Management System based on Signed Network Embeddings
\c{opyright} 20XX IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works
IEEE Internet of Things Journal (2022). https://ieeexplore.ieee.org/document/9866814
10.1109/JIOT.2022.3201772
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A trust management system (TMS) is an integral component of any IoT network. A reliable trust management system must guarantee network security and data integrity, and act as a referee that promotes legitimate devices and punishes any malicious activities. Trust scores assigned by TMSs reflect devices' reputations, which can help predict the future behaviours of network entities and subsequently judge the reliability of different network entities in IoT networks. Many TMSs have been proposed in the literature; these systems are designed for small-scale trust attacks and can deal with attacks where a malicious device tries to undermine the TMS by spreading fake trust reports. However, these systems are prone to large-scale trust attacks. To address this problem, in this paper, we propose a TMS for large-scale IoT systems called Trust2Vec, which can manage trust relationships in large-scale IoT systems and can mitigate large-scale trust attacks performed by hundreds of malicious devices. Trust2Vec leverages a random-walk network exploration algorithm that navigates the trust relationships among devices and computes trust network embeddings, which enables it to analyze the latent network structure of trust relationships, even if there is no direct trust rating between two malicious devices. To detect large-scale attacks, such as self-promotion and bad-mouthing, we propose a network-embeddings community detection algorithm that detects and blocks communities of malicious nodes. The effectiveness of Trust2Vec is validated through large-scale IoT network simulation. The results show that Trust2Vec can achieve up to a 94\% mitigation rate in various network scenarios.
[ { "version": "v1", "created": "Thu, 14 Apr 2022 14:25:46 GMT" }, { "version": "v2", "created": "Sat, 27 Aug 2022 08:37:41 GMT" } ]
2022-08-30T00:00:00
[ [ "Dhelim", "Sahraoui", "" ], [ "Aung", "Nyothiri", "" ], [ "Kechadi", "Tahar", "" ], [ "Ning", "Huansheng", "" ], [ "Chen", "Liming", "" ], [ "Lakas", "Abderrahmane", "" ] ]
new_dataset
0.994921
2205.02895
Philipp Wiesner
Philipp Wiesner, Dominik Scheinert, Thorsten Wittkopp, Lauritz Thamsen, Odej Kao
Cucumber: Renewable-Aware Admission Control for Delay-Tolerant Cloud and Edge Workloads
Accepted at Euro-Par 2022. GitHub repository: https://github.com/dos-group/cucumber
null
10.1007/978-3-031-12597-3_14
null
cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The growing electricity demand of cloud and edge computing increases operational costs and will soon have a considerable impact on the environment. A possible countermeasure is equipping IT infrastructure directly with on-site renewable energy sources. Yet, particularly smaller data centers may not be able to use all generated power directly at all times, while feeding it into the public grid or energy storage is often not an option. To maximize the usage of renewable excess energy, we propose Cucumber, an admission control policy that accepts delay-tolerant workloads only if they can be computed within their deadlines without the use of grid energy. Using probabilistic forecasting of computational load, energy consumption, and energy production, Cucumber can be configured towards more optimistic or conservative admission. We evaluate our approach on two scenarios using real solar production forecasts for Berlin, Mexico City, and Cape Town in a simulation environment. For scenarios where excess energy was actually available, our results show that Cucumber's default configuration achieves acceptance rates close to the optimal case and causes 97.0% of accepted workloads to be powered using excess energy, while more conservative admission results in 18.5% reduced acceptance at almost zero grid power usage.
[ { "version": "v1", "created": "Thu, 5 May 2022 19:21:16 GMT" }, { "version": "v2", "created": "Tue, 23 Aug 2022 09:14:48 GMT" }, { "version": "v3", "created": "Sat, 27 Aug 2022 17:53:21 GMT" } ]
2022-08-30T00:00:00
[ [ "Wiesner", "Philipp", "" ], [ "Scheinert", "Dominik", "" ], [ "Wittkopp", "Thorsten", "" ], [ "Thamsen", "Lauritz", "" ], [ "Kao", "Odej", "" ] ]
new_dataset
0.999621
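The admission rule sketched in the abstract above (accept a delay-tolerant job only if forecasted excess renewable energy can power it before its deadline) can be approximated in a few lines. This is a hypothetical sketch, not Cucumber's implementation; the function name admit, the quantile-based conservatism knob, and the forecast-sample layout are all assumptions.

```python
import numpy as np

def admit(job_energy_wh: float,
          deadline_slots: int,
          excess_forecast_samples: np.ndarray,
          quantile: float = 0.1) -> bool:
    """Admit a delay-tolerant job iff a conservative quantile of the
    forecasted renewable excess energy up to the deadline covers the job.

    excess_forecast_samples: array of shape (n_samples, n_slots) with Wh of
    excess per time slot, e.g. drawn from a probabilistic solar forecast.
    A lower quantile means more conservative (pessimistic) admission.
    """
    cumulative = excess_forecast_samples[:, :deadline_slots].sum(axis=1)
    conservative_supply = np.quantile(cumulative, quantile)
    return conservative_supply >= job_energy_wh

# Usage with fabricated forecast samples: 500 scenarios over 24 slots.
rng = np.random.default_rng(0)
samples = rng.gamma(shape=2.0, scale=50.0, size=(500, 24))
print(admit(job_energy_wh=1500.0, deadline_slots=12,
            excess_forecast_samples=samples))
```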
2206.05053
Debarpan Bhattacharya
Debarpan Bhattacharya, Debottam Dutta, Neeraj Kumar Sharma, Srikanth Raj Chetupalli, Pravin Mote, Sriram Ganapathy, Chandrakiran C, Sahiti Nori, Suhail K K, Sadhana Gonuguntla and Murali Alagesan
Coswara: A website application enabling COVID-19 screening by analysing respiratory sound samples and health symptoms
null
Interspeech, 2022
null
null
cs.HC cs.LG cs.SD eess.AS eess.SP
http://creativecommons.org/licenses/by/4.0/
The COVID-19 pandemic has accelerated research on the design of alternative, quick, and effective COVID-19 diagnosis approaches. In this paper, we describe the Coswara tool, a website application designed to enable COVID-19 detection by analysing respiratory sound samples and health symptoms. A user of this service can log into the website using any device connected to the internet, provide their current health symptom information, and record a few sound samples corresponding to breathing, cough, and speech. Within a minute of analysis of this information on a cloud server, the website tool outputs a COVID-19 probability score to the user. As the COVID-19 pandemic continues to demand massive and scalable population-level testing, we hypothesize that the proposed tool provides a potential solution towards this.
[ { "version": "v1", "created": "Thu, 9 Jun 2022 05:50:18 GMT" } ]
2022-08-30T00:00:00
[ [ "Bhattacharya", "Debarpan", "" ], [ "Dutta", "Debottam", "" ], [ "Sharma", "Neeraj Kumar", "" ], [ "Chetupalli", "Srikanth Raj", "" ], [ "Mote", "Pravin", "" ], [ "Ganapathy", "Sriram", "" ], [ "C", "Chandrakiran", "" ], [ "Nori", "Sahiti", "" ], [ "K", "Suhail K", "" ], [ "Gonuguntla", "Sadhana", "" ], [ "Alagesan", "Murali", "" ] ]
new_dataset
0.996118
2206.10779
Howard Zhang
Yunhao Ba, Howard Zhang, Ethan Yang, Akira Suzuki, Arnold Pfahnl, Chethan Chinder Chandrappa, Celso de Melo, Suya You, Stefano Soatto, Alex Wong, Achuta Kadambi
Not Just Streaks: Towards Ground Truth for Single Image Deraining
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a large-scale dataset of real-world rainy and clean image pairs and a method to remove degradations, induced by rain streaks and rain accumulation, from the image. As there exists no real-world dataset for deraining, current state-of-the-art methods rely on synthetic data and thus are limited by the sim2real domain gap; moreover, rigorous evaluation remains a challenge due to the absence of a real paired dataset. We fill this gap by collecting a real paired deraining dataset through meticulous control of non-rain variations. Our dataset enables paired training and quantitative evaluation for diverse real-world rain phenomena (e.g. rain streaks and rain accumulation). To learn a representation robust to rain phenomena, we propose a deep neural network that reconstructs the underlying scene by minimizing a rain-robust loss between rainy and clean images. Extensive experiments demonstrate that our model outperforms the state-of-the-art deraining methods on real rainy images under various conditions. Project website: https://visual.ee.ucla.edu/gt_rain.htm/.
[ { "version": "v1", "created": "Wed, 22 Jun 2022 00:10:06 GMT" }, { "version": "v2", "created": "Sun, 28 Aug 2022 18:27:27 GMT" } ]
2022-08-30T00:00:00
[ [ "Ba", "Yunhao", "" ], [ "Zhang", "Howard", "" ], [ "Yang", "Ethan", "" ], [ "Suzuki", "Akira", "" ], [ "Pfahnl", "Arnold", "" ], [ "Chandrappa", "Chethan Chinder", "" ], [ "de Melo", "Celso", "" ], [ "You", "Suya", "" ], [ "Soatto", "Stefano", "" ], [ "Wong", "Alex", "" ], [ "Kadambi", "Achuta", "" ] ]
new_dataset
0.987065
2207.04232
Ruhao Wan
Ruhao Wan, Shixin Zhu, Jin Li
Construction of MDS self-dual codes from generalized Reed-Solomon codes
24 pages, 2 tables
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
MDS codes and self-dual codes are important families of classical codes in coding theory. It is of interest to investigate MDS self-dual codes. The existence of MDS self-dual codes over the finite field $F_q$ is completely solved when $q$ is even. In this paper, for finite fields with odd characteristic, we construct some new classes of MDS self-dual codes from (extended) generalized Reed-Solomon codes.
[ { "version": "v1", "created": "Sat, 9 Jul 2022 09:26:42 GMT" }, { "version": "v2", "created": "Sat, 23 Jul 2022 14:15:24 GMT" }, { "version": "v3", "created": "Sat, 27 Aug 2022 07:32:46 GMT" } ]
2022-08-30T00:00:00
[ [ "Wan", "Ruhao", "" ], [ "Zhu", "Shixin", "" ], [ "Li", "Jin", "" ] ]
new_dataset
0.994482
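As background for the construction named in the abstract above, the standard definition of a generalized Reed-Solomon code and the classical duality fact it builds on are sketched below (textbook facts in this line of work, not the paper's new results):

```latex
% Standard GRS background (not the paper's contribution).
% Distinct a_1, ..., a_n in F_q and nonzero v_1, ..., v_n in F_q:
\[
  \mathrm{GRS}_k(\mathbf{a},\mathbf{v}) =
  \bigl\{ \bigl(v_1 f(a_1), \dots, v_n f(a_n)\bigr) :
          f \in F_q[x],\ \deg f < k \bigr\}.
\]
% The dual is again a GRS code: with u_i = \prod_{j \ne i} (a_i - a_j)^{-1},
\[
  \mathrm{GRS}_k(\mathbf{a},\mathbf{v})^{\perp}
  = \mathrm{GRS}_{n-k}(\mathbf{a},\mathbf{v}'),
  \qquad v'_i = u_i / v_i .
\]
% Hence for n = 2k the code is self-dual iff v_i^2 = \lambda u_i for all i
% and some fixed \lambda in F_q^*; constructions of this kind choose the
% evaluation points a so that suitable square roots v_i exist.
```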
2208.08425
Zhuqing Liu
Zhuqing Liu, Xin Zhang, Jia Liu
SYNTHESIS: A Semi-Asynchronous Path-Integrated Stochastic Gradient Method for Distributed Learning in Computing Clusters
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To increase the training speed of distributed learning, recent years have witnessed a significant amount of interest in developing both synchronous and asynchronous distributed stochastic variance-reduced optimization methods. However, all existing synchronous and asynchronous distributed training algorithms suffer from various limitations in either convergence speed or implementation complexity. This motivates us to propose an algorithm called SYNTHESIS (semi-asynchronous path-integrated stochastic gradient search), which leverages the special structure of the variance-reduction framework to overcome the limitations of both synchronous and asynchronous distributed learning algorithms while retaining their salient features. We consider two implementations of SYNTHESIS under distributed and shared memory architectures. We show that our SYNTHESIS algorithms have $O(\sqrt{N}\epsilon^{-2}(\Delta+1)+N)$ and $O(\sqrt{N}\epsilon^{-2}(\Delta+1)d+N)$ computational complexities for achieving an $\epsilon$-stationary point in non-convex learning under distributed and shared memory architectures, respectively, where $N$ denotes the total number of training samples and $\Delta$ represents the maximum delay of the workers. Moreover, we investigate the generalization performance of SYNTHESIS by establishing algorithmic stability bounds for quadratic strongly convex and non-convex optimization. We further conduct extensive numerical experiments to verify our theoretical findings.
[ { "version": "v1", "created": "Wed, 17 Aug 2022 17:42:33 GMT" }, { "version": "v2", "created": "Sat, 27 Aug 2022 15:46:48 GMT" } ]
2022-08-30T00:00:00
[ [ "Liu", "Zhuqing", "" ], [ "Zhang", "Xin", "" ], [ "Liu", "Jia", "" ] ]
new_dataset
0.986508
2208.08482
Huaishu Peng
Jiasheng Li, Zeyu Yan, Ebrima Jarjue, Ashrith Shetty, Huaishu Peng
TangibleGrid: Tangible Web Layout Design for Blind Users
null
UIST '22, October 29-November 2, 2022, Bend, OR, USA
10.1145/3526113.3545627
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present TangibleGrid, a novel device that allows blind users to understand and design the layout of a web page with real-time tangible feedback. We conducted semi-structured interviews and a series of co-design sessions with blind users to elicit insights that guided the design of TangibleGrid. Our final prototype contains shape-changing brackets representing the web elements and a baseboard representing the web page canvas. Blind users can design a web page layout through creating and editing web elements by snapping or adjusting tangible brackets on top of the baseboard. The baseboard senses the brackets' type, size, and location, verbalizes the information, and renders the web page on the client browser. Through a formative user study, we found that blind users could understand a web page layout through TangibleGrid. They were also able to design a new web layout from scratch without the help of sighted people.
[ { "version": "v1", "created": "Wed, 17 Aug 2022 18:51:18 GMT" }, { "version": "v2", "created": "Sat, 27 Aug 2022 21:14:18 GMT" } ]
2022-08-30T00:00:00
[ [ "Li", "Jiasheng", "" ], [ "Yan", "Zeyu", "" ], [ "Jarjue", "Ebrima", "" ], [ "Shetty", "Ashrith", "" ], [ "Peng", "Huaishu", "" ] ]
new_dataset
0.999699
2208.08502
Huaishu Peng
Zeyu Yan, Anup Sathya, Sahra Yusuf, Jyh-Ming Lien, Huaishu Peng
Fibercuit: Prototyping High-Resolution Flexible and Kirigami Circuits with a Fiber Laser Engraver
null
UIST '22, October 29-November 2, 2022, Bend, OR, USA
10.1145/3526113.3545652
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Prototyping compact devices with unique form factors often requires the PCB manufacturing process to be outsourced, which can be expensive and time-consuming. In this paper, we present Fibercuit, a set of rapid prototyping techniques to fabricate high-resolution, flexible circuits on-demand using a fiber laser engraver. We showcase techniques that can laser cut copper-based composites to form fine-pitch conductive traces, laser fold copper substrates that can form kirigami structures, and laser solder surface-mount electrical components using off-the-shelf soldering pastes. Combined with our software pipeline, an end user can design and fabricate flexible circuits which are dual-layer and three-dimensional, thereby exhibiting a wide range of form factors. We demonstrate Fibercuit by showcasing a set of examples, including a custom dice, flex cables, custom end-stop switches, electromagnetic coils, LED earrings and a circuit in the form of kirigami crane.
[ { "version": "v1", "created": "Wed, 17 Aug 2022 19:42:04 GMT" }, { "version": "v2", "created": "Sat, 27 Aug 2022 21:20:40 GMT" } ]
2022-08-30T00:00:00
[ [ "Yan", "Zeyu", "" ], [ "Sathya", "Anup", "" ], [ "Yusuf", "Sahra", "" ], [ "Lien", "Jyh-Ming", "" ], [ "Peng", "Huaishu", "" ] ]
new_dataset
0.999507
2208.09815
Pengqian Yu
Xinhan Di, Pengqian Yu
LWA-HAND: Lightweight Attention Hand for Interacting Hand Reconstruction
Accepted by ECCV 2022 Computer Vision for Metaverse Workshop (16 pages, 6 figures, 1 table)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent years have witnessed great success for hand reconstruction in real-time applications such as virtual reality and augmented reality, while two-hand reconstruction for interacting hands through efficient transformers is left unexplored. In this paper, we propose a method called lightweight attention hand (LWA-HAND) to reconstruct hands in low flops from a single RGB image. To solve the occlusion and interaction problems in efficient attention architectures, we propose three mobile attention modules. The first module is a lightweight feature attention module that extracts both a local occlusion representation and a global image patch representation in a coarse-to-fine manner. The second module is a cross-image and graph bridge module which fuses image context and hand vertices. The third module is a lightweight cross-attention mechanism that uses element-wise operations for the cross-attention of two hands in linear complexity. The resulting model achieves comparable performance on the InterHand2.6M benchmark in comparison with the state-of-the-art models. Simultaneously, it reduces the flops to $0.47GFlops$, while the state-of-the-art models have heavy computations between $10GFlops$ and $20GFlops$.
[ { "version": "v1", "created": "Sun, 21 Aug 2022 06:25:56 GMT" }, { "version": "v2", "created": "Tue, 23 Aug 2022 03:54:47 GMT" }, { "version": "v3", "created": "Sat, 27 Aug 2022 13:06:34 GMT" } ]
2022-08-30T00:00:00
[ [ "Di", "Xinhan", "" ], [ "Yu", "Pengqian", "" ] ]
new_dataset
0.989354
2208.11090
Alessandra Rossi Dr
Alessandra Rossi, Patrick Holthaus, S\`ilvia Moros and Gabriella Lakatos
IEEE Trust, Acceptance and Social Cues in Human-Robot Interaction -- SCRITA 2022 Workshop
SCRITA 2022 workshop proceedings including 8 articles
31st IEEE International Conference on Robot & Human Interactive Communication, 29 August - 3 September 2022
null
SCRITA/2022
cs.RO
http://creativecommons.org/licenses/by/4.0/
The Trust, Acceptance and Social Cues in Human-Robot Interaction - SCRITA is the 5th edition of a series of workshops held in conjunction with the IEEE RO-MAN conference. This workshop focuses on addressing the challenges and development of the dynamics between people and robots in order to foster short interactions and long-lasting relationships in different fields, from educational, service, collaborative, companion, care-home and medical robotics. In particular, we aimed at investigating how robots can manipulate (i.e. create, improve, and recover) people's ability to accept and trust them, for a fruitful and successful coexistence between humans and robots. While advanced progress has been made in studying and evaluating the factors affecting people's acceptance of and trust in robots in controlled or short-term (repeated interaction) settings, developing service and personal robots that are accepted and trusted by people where the supervision of operators is not possible still presents an open challenge for scientists in the robotics, AI and HRI fields. In such unstructured, static and dynamic human-centred environments, robots should be able to learn and adapt their behaviours to the situational context, but also to people's prior experiences and learned associations, their expectations, and their and the robot's ability to predict and understand each other's behaviours. Although the previous editions valued the participation of leading researchers in the field and several exceptional invited speakers who tackled some fundamental points in these research domains, we wish to continue to further explore the role of trust in robotics and to present groundbreaking research to effectively design and develop socially acceptable and trustable robots to be deployed "in the wild". Website: https://scrita.herts.ac.uk
[ { "version": "v1", "created": "Mon, 22 Aug 2022 14:17:01 GMT" }, { "version": "v2", "created": "Sun, 28 Aug 2022 23:03:34 GMT" } ]
2022-08-30T00:00:00
[ [ "Rossi", "Alessandra", "" ], [ "Holthaus", "Patrick", "" ], [ "Moros", "Sìlvia", "" ], [ "Lakatos", "Gabriella", "" ] ]
new_dataset
0.994903
2208.11235
Colin Gordon
Sergey Matskevich, Colin S. Gordon
Preprocessing Source Code Comments for Linguistic Models
Correcting author name
null
null
null
cs.SE cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Comments are an important part of the source code and are a primary source of documentation. This has driven interest in using large bodies of comments to train or evaluate tools that consume or produce them -- such as generating oracles or even code from comments, or automatically generating code summaries. Most of this work makes strong assumptions about the structure and quality of comments, such as assuming they consist mostly of proper English sentences. However, we know little about the actual quality of existing comments for these use cases. Comments often contain unique structures and elements that are not seen in other types of text, and filtering or extracting information from them requires some extra care. This paper explores the contents and quality of Python comments drawn from the 840 most popular open source projects from GitHub and 8422 projects from the SriLab dataset, and the impact that na\"ive vs. in-depth filtering can have on the use of existing comments for training and evaluation of systems that generate comments.
[ { "version": "v1", "created": "Tue, 23 Aug 2022 23:44:09 GMT" }, { "version": "v2", "created": "Fri, 26 Aug 2022 23:46:49 GMT" } ]
2022-08-30T00:00:00
[ [ "Matskevich", "Sergey", "" ], [ "Gordon", "Colin S.", "" ] ]
new_dataset
0.967201
2208.11484
Aly Mostafa
Aly Mostafa, Omar Mohamed, Ali Ashraf, Ahmed Elbehery, Salma Jamal, Anas Salah, Amr S. Ghoneim
An End-to-End OCR Framework for Robust Arabic-Handwriting Recognition using a Novel Transformers-based Model and an Innovative 270 Million-Words Multi-Font Corpus of Classical Arabic with Diacritics
null
null
null
null
cs.CV cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
This research is the second phase in a series of investigations on developing an Optical Character Recognition (OCR) system for Arabic historical documents and examining how different modeling procedures interact with the problem. The first study examined the effect of Transformers on our custom-built Arabic dataset. One of the downsides of the first study was the size of the training data, a mere 15000 images out of our 30 million images, due to a lack of resources. Also, we add an image enhancement layer, time and space optimizations, and a post-correction layer to aid the model in predicting the correct word for the correct context. Notably, we propose an end-to-end text recognition approach using Vision Transformers as an encoder, namely BEIT, and a vanilla Transformer as a decoder, eliminating CNNs for feature extraction and reducing the model's complexity. The experiments show that our end-to-end model outperforms convolutional backbones. The model attained a CER of 4.46%.
[ { "version": "v1", "created": "Sat, 20 Aug 2022 22:21:19 GMT" }, { "version": "v2", "created": "Fri, 26 Aug 2022 21:02:07 GMT" } ]
2022-08-30T00:00:00
[ [ "Mostafa", "Aly", "" ], [ "Mohamed", "Omar", "" ], [ "Ashraf", "Ali", "" ], [ "Elbehery", "Ahmed", "" ], [ "Jamal", "Salma", "" ], [ "Salah", "Anas", "" ], [ "Ghoneim", "Amr S.", "" ] ]
new_dataset
0.999511
2208.12349
Tiago Guerreiro
Tiago Guerreiro, Ana Pires, Lu\'is Carri\c{c}o
Snooping on Snoopers: Logging as a Security Response to Physical Attacks on Mobile Devices
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
When users leave their mobile devices unattended, or let others use them momentarily, they are susceptible to privacy breaches. Existing technological defenses, such as unlock authentication or account switching, have proven to be unpopular. We conducted interviews to uncover practices users currently engage in to cope with the threat, and found that it is common for users to try to keep their devices under close supervision at all times. One obstacle to this strategy is that displaying such protective behavior can be detrimental to social relationships. To address these concerns, we built a software tool that gathers activity logs in the background. Logs can later be reviewed as a timeline of opened apps and the actions performed within each, with events decorated with pictures captured inconspicuously with the front-facing camera. We evaluated this approach in a user study, and found participants to be generally eager to adopt the technology, although in different ways. Most users foresaw using it as a deterrent, or to check if they were snooped on, if that suspicion were ever to arise. Yet, some voiced the intention of creating "honey traps". The results highlight both the opportunities and the potential dangers of the logging approach.
[ { "version": "v1", "created": "Thu, 25 Aug 2022 21:26:04 GMT" }, { "version": "v2", "created": "Mon, 29 Aug 2022 11:04:27 GMT" } ]
2022-08-30T00:00:00
[ [ "Guerreiro", "Tiago", "" ], [ "Pires", "Ana", "" ], [ "Carriço", "Luís", "" ] ]
new_dataset
0.972413
2208.12804
Maximilian Weininger
Florian J\"ungermann, Jan K\v{r}et\'insk\'y, and Maximilian Weininger
Algebraically Explainable Controllers: Decision Trees and Support Vector Machines Join Forces
null
null
null
null
cs.LG cs.AI cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recently, decision trees (DT) have been used as an explainable representation of controllers (a.k.a. strategies, policies, schedulers). Although they are often very efficient and produce small and understandable controllers for discrete systems, complex continuous dynamics still pose a challenge. In particular, when the relationships between variables take more complex forms, such as polynomials, they cannot be obtained using the available DT learning procedures. In contrast, support vector machines provide a more powerful representation, capable of discovering many such relationships, but not in an explainable form. Therefore, we suggest to combine the two frameworks in order to obtain an understandable representation over richer, domain-relevant algebraic predicates. We demonstrate and evaluate the proposed method experimentally on established benchmarks.
[ { "version": "v1", "created": "Fri, 26 Aug 2022 17:57:37 GMT" }, { "version": "v2", "created": "Mon, 29 Aug 2022 11:28:10 GMT" } ]
2022-08-30T00:00:00
[ [ "Jüngermann", "Florian", "" ], [ "Křetínský", "Jan", "" ], [ "Weininger", "Maximilian", "" ] ]
new_dataset
0.996082
2208.12833
Francesca Favaro
Francesca Favaro, Keith Hutchings, Philip Nemec, Leticia Cavalcante, Trent Victor
Waymo's Fatigue Risk Management Framework: Prevention, Monitoring, and Mitigation of Fatigue-Induced Risks while Testing Automated Driving Systems
null
null
null
null
cs.RO cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
This report presents Waymo's proposal for a systematic fatigue risk management framework that addresses prevention, monitoring, and mitigation of fatigue-induced risks during on-road testing of ADS technology. The proposed framework remains flexible to incorporate continuous improvements, and was informed by state-of-the-art practices, research, learnings, and experience (both internal and external to Waymo). Fatigue is a recognized contributory factor in a substantial fraction of on-road crashes involving human drivers, and mitigation of fatigue-induced risks is still an open concern researched worldwide. While the proposed framework was specifically designed in relation to on-road testing of SAE Level 4 ADS technology, it has implications for and applicability to lower levels of automation as well.
[ { "version": "v1", "created": "Fri, 26 Aug 2022 18:22:50 GMT" } ]
2022-08-30T00:00:00
[ [ "Favaro", "Francesca", "" ], [ "Hutchings", "Keith", "" ], [ "Nemec", "Philip", "" ], [ "Cavalcante", "Leticia", "" ], [ "Victor", "Trent", "" ] ]
new_dataset
0.991209
2208.12850
Michael Baddeley Dr
Michael Baddeley, Yevgen Gyl, Markus Schuss, Xiaoyuan Ma, and Carlo Alberto Boano
OSF: An Open-Source Framework for Synchronous Flooding over Multiple Physical Layers
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Flooding protocols based on concurrent transmissions are regarded as the most reliable way to collect or disseminate data across a multi-hop low-power wireless mesh network. Recent works have shown that such protocols are effective for narrowband communication not only over IEEE 802.15.4, but also over the BLE 5 physical layers (PHYs). However, to date, existing literature has only built synchronous flooding solutions on top of a single PHY, and there has been no attempt to leverage different PHYs at runtime to increase performance. This paper fills this gap and presents OSF, an open-source framework that enables the design of multi-PHY synchronous flooding solutions thanks to a novel radio driver and middleware architecture capable of dynamically switching the underlying physical layer. This allows the specific benefits of each PHY (e.g., higher data rate, increased robustness) to be exploited on demand during each flood, increasing performance. We tailor OSF to the off-the-shelf nRF52840 platform, and showcase its benefits by comparing single-PHY and multi-PHY synchronous flooding solutions on a real-world testbed.
[ { "version": "v1", "created": "Fri, 26 Aug 2022 19:40:29 GMT" } ]
2022-08-30T00:00:00
[ [ "Baddeley", "Michael", "" ], [ "Gyl", "Yevgen", "" ], [ "Schuss", "Markus", "" ], [ "Ma", "Xiaoyuan", "" ], [ "Boano", "Carlo Alberto", "" ] ]
new_dataset
0.984803
2208.12864
Nestaly Mar\'in
J.M. D\'iaz-B\'a\~nez (1), P. Horn (2), M.A. Lopez (3), N. Mar\'in (4), A. Ram\'irez-Vigueras (5), O. Sol\'e-Pi (6), A. Stevens (3), J. Urrutia (5) ((1) Departamento de Matem\'atica Aplicada II, Universidad de Sevilla, Spain. (2) Department of Mathematics, University of Denver, USA. (3) Department of Computer Science, University of Denver, USA. (4) Posgrado en Ciencia e Ingenier\'ia de la Computaci\'on, Universidad Nacional Aut\'onoma de M\'exico, Mexico., (5) Instituto de Matem\'aticas, Universidad Nacional Aut\'onoma de M\'exico, Mexico. (6) Facultad de Ciencias, Universidad Nacional Aut\'onoma de M\'exico, Mexico)
Ortho-unit polygons can be guarded with at most $\lfloor \frac{n-4}{8} \rfloor$ guards
9 pages, 8 figures
null
null
null
cs.CG math.CO
http://creativecommons.org/licenses/by/4.0/
An orthogonal polygon is called an ortho-unit polygon if its vertices have integer coordinates, and all of its edges have length one. In this paper we prove that any ortho-unit polygon with $n \geq 12$ vertices can be guarded with at most $\lfloor \frac{n-4}{8} \rfloor$ guards.
[ { "version": "v1", "created": "Fri, 26 Aug 2022 20:43:36 GMT" } ]
2022-08-30T00:00:00
[ [ "Díaz-Báñez", "J. M.", "" ], [ "Horn", "P.", "" ], [ "Lopez", "M. A.", "" ], [ "Marín", "N.", "" ], [ "Ramírez-Vigueras", "A.", "" ], [ "Solé-Pi", "O.", "" ], [ "Stevens", "A.", "" ], [ "Urrutia", "J.", "" ] ]
new_dataset
0.991548
2208.12898
Myroslav Kryven
Reyan Ahmed, Stephen Kobourov, Myroslav Kryven
An FPT Algorithm for Bipartite Vertex Splitting
Appears in the Proceedings of the 30th International Symposium on Graph Drawing and Network Visualization (GD 2022)
null
null
null
cs.CG cs.DM
http://creativecommons.org/licenses/by/4.0/
Bipartite graphs model the relationship between two disjoint sets of objects. They have a wide range of applications and are often visualized as a 2-layered drawing, where each set of objects is visualized as a set of vertices (points) on one of two parallel horizontal lines, and the relationships are represented by edges (simple curves) between the two lines connecting the corresponding vertices. One of the common objectives in such drawings is to minimize the number of crossings; this, however, is computationally expensive and may still result in drawings with so many crossings that they affect the readability of the drawing. We consider a recent approach to remove crossings in such visualizations by splitting vertices, where the goal is to find the minimum number of vertices to be split to obtain a planar drawing. We show that determining whether a planar drawing exists after splitting at most $k$ vertices is fixed-parameter tractable in $k$.
[ { "version": "v1", "created": "Sat, 27 Aug 2022 00:19:31 GMT" } ]
2022-08-30T00:00:00
[ [ "Ahmed", "Reyan", "" ], [ "Kobourov", "Stephen", "" ], [ "Kryven", "Myroslav", "" ] ]
new_dataset
0.995793
2208.12934
Astitva Srivastava
Astitva Srivastava, Chandradeep Pokhariya, Sai Sagar Jinka and Avinash Sharma
xCloth: Extracting Template-free Textured 3D Clothes from a Monocular Image
Accepted at ACM Multimedia-2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Existing approaches for 3D garment reconstruction either assume a predefined template for the garment geometry (restricting them to fixed clothing styles) or yield vertex-colored meshes (lacking high-frequency textural details). Our novel framework co-learns geometric and semantic information of the garment surface from the input monocular image for template-free textured 3D garment digitization. More specifically, we propose to extend the PeeledHuman representation to predict pixel-aligned, layered depth and semantic maps from which to extract 3D garments. The layered representation is further exploited to UV-parametrize the arbitrary surface of the extracted garment without any human intervention, forming a UV atlas. The texture is then imparted on the UV atlas in a hybrid fashion by first projecting pixels from the input image to UV space for the visible region, followed by inpainting the occluded regions. Thus, we are able to digitize arbitrarily loose clothing styles while retaining high-frequency textural details from a monocular image. We achieve high-fidelity 3D garment reconstruction results on three publicly available datasets and demonstrate generalization on internet images.
[ { "version": "v1", "created": "Sat, 27 Aug 2022 05:57:00 GMT" } ]
2022-08-30T00:00:00
[ [ "Srivastava", "Astitva", "" ], [ "Pokhariya", "Chandradeep", "" ], [ "Jinka", "Sai Sagar", "" ], [ "Sharma", "Avinash", "" ] ]
new_dataset
0.999525
2208.12961
Takahito Murakami
Takahito Murakami, Maya Grace Torii, Xanat Vargas Meza, Yoichi Ochiai
Kuchibashi: 3D-Printed Tweezers Bioinspired by the New Caledonian Crow's Beak
2 pages, 2 figures, ACM SIGGRAPH 2022
ACM SIGGRAPH 2022. Posters Article 18. 1-2
10.1145/3532719.3543254
null
cs.HC cs.GR
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this study we implemented Kuchibashi, tweezers inspired by the beak of the New Caledonian crow, and conducted a user study to evaluate the prototype's usability. We showed that Kuchibashi is superior to bare hands and conventional tweezers for interacting with large spherical objects. Participants also reported positive impressions of security and safety.
[ { "version": "v1", "created": "Sat, 27 Aug 2022 08:51:22 GMT" } ]
2022-08-30T00:00:00
[ [ "Murakami", "Takahito", "" ], [ "Torii", "Maya Grace", "" ], [ "Meza", "Xanat Vargas", "" ], [ "Ochiai", "Yoichi", "" ] ]
new_dataset
0.99609
2208.12970
Wang Chen
Yi Fang, Wang Chen, Pingping Chen, Yiwei Tao, Mohsen Guizani
SR-DCSK Cooperative Communication System with Code Index Modulation: A New Design for 6G New Radios
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper proposes a high-throughput short reference differential chaos shift keying cooperative communication system with the aid of code index modulation, referred to as the CIM-SR-DCSK-CC system. In the proposed CIM-SR-DCSK-CC system, the source transmits information bits to both the relay and destination in the first time slot, while the relay not only forwards the source information bits but also sends new information bits to the destination in the second time slot. To be specific, the relay employs an $N$-order Walsh code to carry additional $\log_2 N$ information bits, which are superimposed onto the SR-DCSK signal carrying the decoded source information bits. Subsequently, the superimposed signal carrying both the source and relay information bits is transmitted to the destination. Moreover, the theoretical bit error rate (BER) expressions of the proposed CIM-SR-DCSK-CC system are derived over additive white Gaussian noise (AWGN) and multipath Rayleigh fading channels. Compared with the conventional DCSK-CC system and SR-DCSK-CC system, the proposed CIM-SR-DCSK-CC system can significantly improve the throughput without degrading BER performance. Consequently, the proposed system is very promising for 6G-enabled low-power, high-rate communication applications.
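A minimal sketch of the code-index-modulation ingredient described above, assuming only what the abstract states: an $N$-order Walsh code whose row index carries $\log_2 N$ extra relay bits. The Sylvester construction and the bits-to-index mapping below are standard; the chaotic SR-DCSK waveform, the superposition, and the channel are deliberately not modeled:

```python
import numpy as np

def walsh_matrix(n):
    """Sylvester construction of an n x n Hadamard (Walsh) matrix; n a power of 2."""
    H = np.array([[1]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

def select_walsh_row(bits, H):
    """Code index modulation: log2(N) bits choose one of N Walsh code rows."""
    index = int("".join(map(str, bits)), 2)
    return H[index]

N = 8                          # code order: carries log2(8) = 3 extra bits
H = walsh_matrix(N)
extra_bits = [1, 0, 1]         # the relay's additional information bits
print(select_walsh_row(extra_bits, H))   # the length-8 spreading row at index 5
```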
[ { "version": "v1", "created": "Sat, 27 Aug 2022 09:39:38 GMT" } ]
2022-08-30T00:00:00
[ [ "Fang", "Yi", "" ], [ "Chen", "Wang", "" ], [ "Chen", "Pingping", "" ], [ "Tao", "Yiwei", "" ], [ "Guizani", "Mohsen", "" ] ]
new_dataset
0.999542
2208.12981
Sangho Suh
Sangho Suh, Jian Zhao, and Edith Law
CodeToon: Story Ideation, Auto Comic Generation, and Structure Mapping for Code-Driven Storytelling
null
null
10.1145/3526113.3545617
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work demonstrated how we can design and use coding strips, a form of comic strips with corresponding code, to enhance teaching and learning in programming. However, creating coding strips is a creative, time-consuming process. Creators have to generate stories from code (code->story) and design comics from stories (story->comic). We contribute CodeToon, a comic authoring tool that facilitates this code-driven storytelling process with two mechanisms: (1) story ideation from code using metaphor and (2) automatic comic generation from the story. We conducted a two-part user study that evaluates the tool and the comics generated by participants to test whether CodeToon facilitates the authoring process and helps generate quality comics. Our results show that CodeToon helps users create accurate, informative, and useful coding strips in a significantly shorter time. Overall, this work contributes methods and design guidelines for code-driven storytelling and opens up opportunities for using art to support computer science education.
[ { "version": "v1", "created": "Sat, 27 Aug 2022 10:34:54 GMT" } ]
2022-08-30T00:00:00
[ [ "Suh", "Sangho", "" ], [ "Zhao", "Jian", "" ], [ "Law", "Edith", "" ] ]
new_dataset
0.9997
2208.12983
Michael Baddeley Dr
Chloe Bae, Shiwen Yang, Michael Baddeley, Atis Elsts, and Israat Haque
BlueTiSCH: A Multi-PHY Simulation of Low-Power 6TiSCH IoT Networks
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Low-power wireless IoT networks have traditionally operated over a single physical layer (PHY) -- many based on the IEEE 802.15.4 standard. However, recent low-power wireless chipsets offer both the IEEE 802.15.4 PHY and all four PHYs of the Bluetooth 5 (BT 5) standard. This introduces the intriguing possibility that IoT solutions need not be bound by the limits of a single PHY, and could actively or proactively adapt their PHY depending on RF or networking conditions (e.g., to offer a higher throughput or a longer radio range). Several recent studies have explored such use-cases. However, these studies lack comprehensive evaluation over various metrics (such as reliability, latency, and energy) with regard to scalability and the Radio Frequency (RF) environment. In this work we evaluate the performance of the IEEE 802.15.4 PHY and the four BT 5 2.4GHz PHY options for the recently completed IETF 6TiSCH low-power wireless standard. To the best of our knowledge, this is the first work to directly compare these PHYs in identical settings. Specifically, we use a recently released 6TiSCH simulator, TSCH-Sim, to compare these PHY options in networks of up to 250 nodes over different RF environments (home, industrial, and outdoor), and highlight from these results how different PHY options might be better suited to particular application use-cases.
[ { "version": "v1", "created": "Sat, 27 Aug 2022 10:52:20 GMT" } ]
2022-08-30T00:00:00
[ [ "Bae", "Chloe", "" ], [ "Yang", "Shiwen", "" ], [ "Baddeley", "Michael", "" ], [ "Elsts", "Atis", "" ], [ "Haque", "Israat", "" ] ]
new_dataset
0.998961
2208.12986
Bowen Fu
Bowen Fu, Sek Kun Leong, Xiaocong Lian and Xiangyang Ji
6D Robotic Assembly Based on RGB-only Object Pose Estimation
Accepted by IROS 2022
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-based robotic assembly is a crucial yet challenging task as the interaction with multiple objects requires high levels of precision. In this paper, we propose an integrated 6D robotic system to perceive, grasp, manipulate and assemble blocks with tight tolerances. Aiming to provide an off-the-shelf RGB-only solution, our system is built upon a monocular 6D object pose estimation network trained solely with synthetic images leveraging physically-based rendering. Subsequently, pose-guided 6D transformation along with collision-free assembly is proposed to construct any designed structure with arbitrary initial poses. Our novel 3-axis calibration operation further enhances the precision and robustness by disentangling 6D pose estimation and robotic assembly. Both quantitative and qualitative results demonstrate the effectiveness of our proposed 6D robotic assembly system.
[ { "version": "v1", "created": "Sat, 27 Aug 2022 11:26:24 GMT" } ]
2022-08-30T00:00:00
[ [ "Fu", "Bowen", "" ], [ "Leong", "Sek Kun", "" ], [ "Lian", "Xiaocong", "" ], [ "Ji", "Xiangyang", "" ] ]
new_dataset
0.999403
2208.13054
Shreyas Kulkarni
Shreyas Kulkarni, Shreyas Singh, Dhananjay Balakrishnan, Siddharth Sharma, Saipraneeth Devunuri, Sai Chowdeswara Rao Korlapati
CrackSeg9k: A Collection and Benchmark for Crack Segmentation Datasets and Frameworks
null
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
The detection of cracks is a crucial task in monitoring structural health and ensuring structural safety. The manual process of crack detection is time-consuming, and its outcome is subjective to the inspector. Several researchers have tried tackling this problem using traditional Image Processing or learning-based techniques. However, their scope of work is limited to detecting cracks on a single type of surface (walls, pavements, glass, etc.). The metrics used to evaluate these methods also vary across the literature, making it challenging to compare techniques. This paper addresses these problems by combining previously available datasets and unifying the annotations by tackling the inherent problems within each dataset, such as noise and distortions. We also present a pipeline that combines Image Processing and Deep Learning models. Finally, we benchmark the results of the proposed models on these metrics on our new dataset and compare them with state-of-the-art models in the literature.
[ { "version": "v1", "created": "Sat, 27 Aug 2022 16:47:04 GMT" } ]
2022-08-30T00:00:00
[ [ "Kulkarni", "Shreyas", "" ], [ "Singh", "Shreyas", "" ], [ "Balakrishnan", "Dhananjay", "" ], [ "Sharma", "Siddharth", "" ], [ "Devunuri", "Saipraneeth", "" ], [ "Korlapati", "Sai Chowdeswara Rao", "" ] ]
new_dataset
0.99871
2208.13078
Xiaoyu Shen
Qingyu Zhang, Xiaoyu Shen, Ernie Chang, Jidong Ge and Pengke Chen
MDIA: A Benchmark for Multilingual Dialogue Generation in 46 Languages
The dataset and processing scripts are available in https://github.com/DoctorDream/mDIA
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Owing to the lack of corpora for low-resource languages, current works on dialogue generation have mainly focused on English. In this paper, we present mDIA, the first large-scale multilingual benchmark for dialogue generation across low- to high-resource languages. It covers real-life conversations in 46 languages across 19 language families. We present baseline results obtained by fine-tuning the multilingual, non-dialogue-focused pre-trained model mT5 as well as English-centric, dialogue-focused pre-trained chatbot DialoGPT. The results show that mT5-based models perform better on sacreBLEU and BertScore but worse on diversity. Even though promising results are found in few-shot and zero-shot scenarios, there is a large gap between the generation quality in English and other languages. We hope that the release of mDIA could encourage more works on multilingual dialogue generation to promote language diversity.
[ { "version": "v1", "created": "Sat, 27 Aug 2022 19:35:20 GMT" } ]
2022-08-30T00:00:00
[ [ "Zhang", "Qingyu", "" ], [ "Shen", "Xiaoyu", "" ], [ "Chang", "Ernie", "" ], [ "Ge", "Jidong", "" ], [ "Chen", "Pengke", "" ] ]
new_dataset
0.998404
2208.13169
Martin Molan
Martin Molan, Andrea Borghesi, Daniele Cesarini, Luca Benini, Andrea Bartolini
RUAD: unsupervised anomaly detection in HPC systems
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
The increasing complexity of modern high-performance computing (HPC) systems necessitates the introduction of automated and data-driven methodologies to support system administrators' effort toward increasing the system's availability. Anomaly detection is an integral part of improving the availability, as it eases the system administrator's burden and reduces the time between an anomaly and its resolution. However, current state-of-the-art (SoA) approaches to anomaly detection are supervised and semi-supervised, so they require a human-labelled dataset with anomalies - this is often impractical to collect in production HPC systems. Unsupervised anomaly detection approaches based on clustering, aimed at alleviating the need for accurate anomaly data, have so far shown poor performance. In this work, we overcome these limitations by proposing RUAD, a novel Recurrent Unsupervised Anomaly Detection model. RUAD achieves better results than the current semi-supervised and unsupervised SoA approaches. This is achieved by considering temporal dependencies in the data and including long short-term memory (LSTM) cells in the model architecture. The proposed approach is assessed on a complete ten-month history of a Tier-0 system (Marconi100 from CINECA with 980 nodes). RUAD achieves an area under the curve (AUC) of 0.763 in semi-supervised training and an AUC of 0.767 in unsupervised training, which improves upon the SoA approach that achieves an AUC of 0.747 in semi-supervised training and an AUC of 0.734 in unsupervised training. It also vastly outperforms the current SoA unsupervised anomaly detection approach based on clustering, which achieves an AUC of only 0.548.
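The abstract's key ingredients - temporal dependencies captured with LSTM cells and anomalies flagged without labels - can be illustrated with a generic reconstruction-based detector. This is a hedged sketch of that general pattern, not the actual RUAD architecture; the layer sizes, windowing, and threshold rule are all our assumptions:

```python
import torch
import torch.nn as nn

class LSTMAutoencoder(nn.Module):
    """Reconstruct windows of node telemetry; high reconstruction error
    flags a window as anomalous. Illustrative only."""
    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.encoder = nn.LSTM(n_features, hidden, batch_first=True)
        self.decoder = nn.LSTM(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_features)

    def forward(self, x):                       # x: (batch, time, features)
        _, (h, _) = self.encoder(x)             # summarize the window
        z = h[-1].unsqueeze(1).repeat(1, x.size(1), 1)
        dec, _ = self.decoder(z)
        return self.out(dec)

model = LSTMAutoencoder(n_features=16)
window = torch.randn(8, 20, 16)                 # 8 windows, 20 steps, 16 metrics
error = ((model(window) - window) ** 2).mean(dim=(1, 2))   # per-window score
anomalous = error > error.mean() + 3 * error.std()         # toy threshold
```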
[ { "version": "v1", "created": "Sun, 28 Aug 2022 08:30:52 GMT" } ]
2022-08-30T00:00:00
[ [ "Molan", "Martin", "" ], [ "Borghesi", "Andrea", "" ], [ "Cesarini", "Daniele", "" ], [ "Benini", "Luca", "" ], [ "Bartolini", "Andrea", "" ] ]
new_dataset
0.980246
2208.13170
Raoul Blin
Raoul Blin and Fabien Cromi\`eres
CJaFr-v3 : A Freely Available Filtered Japanese-French Aligned Corpus
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present a free Japanese-French parallel corpus. It includes 15M aligned segments and is obtained by compiling and filtering several existing resources. In this paper, we describe the existing resources, their quantity and quality, the filtering we applied to improve the quality of the corpus, and the content of the ready-to-use corpus. We also evaluate the usefulness of this corpus and the quality of our filtering by training and evaluating some standard MT systems with it.
[ { "version": "v1", "created": "Sun, 28 Aug 2022 08:33:18 GMT" } ]
2022-08-30T00:00:00
[ [ "Blin", "Raoul", "" ], [ "Cromières", "Fabien", "" ] ]
new_dataset
0.998601
2208.13249
Jian Du
Jian Du and Tianxi Ji and Jamie Cui and Lei Zhang and Yufei Lu and Pu Duan
DP-PSI: Private and Secure Set Intersection
null
null
null
null
cs.CR cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
One way to classify private set intersection (PSI) for secure 2-party computation is whether the intersection is (a) revealed to both parties or (b) hidden from both parties while only the output of a function computed over the matched payload is exposed. Both aim to provide cryptographic security while avoiding exposing the unmatched elements of the other party. They may, however, be insufficient to achieve security and privacy in one practical scenario: when the intersection is required and the information leaked through the function's output must be considered for legal, ethical, and competitive reasons. Two parties, such as the advertiser and the ads supplier, hold sets of users for PSI computation, for example, to reveal common users to the ads supplier in joint marketing applications. In addition to the security guarantees required by standard PSIs to secure unmatched elements, neither party is allowed to "single out" whether an element/user belongs to the other party or not, even though common users are required for joint advertising. This is a fascinating problem for which none of the PSI techniques have provided a solution. In light of this shortcoming, we compose differential privacy (DP) and S2PC to provide the best of both worlds and propose differentially-private PSI (DP-PSI), a new privacy model that shares PSI's strong security protection while adhering to the GDPR's recent formalization of the notion of excluding "singling out" attacks by each party, except with very low probability.
[ { "version": "v1", "created": "Sun, 28 Aug 2022 16:50:22 GMT" } ]
2022-08-30T00:00:00
[ [ "Du", "Jian", "" ], [ "Ji", "Tianxi", "" ], [ "Cui", "Jamie", "" ], [ "Zhang", "Lei", "" ], [ "Lu", "Yufei", "" ], [ "Duan", "Pu", "" ] ]
new_dataset
0.99735
2208.13319
Krishna Vardhan
Daniel Minati, Ludwik Sams, Karen Li, Bo Ji and Krishna Vardhan
Minute ventilation measurement using Plethysmographic Imaging and lighting parameters
6 pages, 4 figures
null
null
null
cs.LG eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Breathing disorders such as sleep apnea are critical conditions that affect a large number of individuals, arising when the lungs cannot adequately contain and exchange oxygen and carbon dioxide to keep the body in the stable state of homeostasis. Respiratory measurements such as minute ventilation can be used, in correlation with other physiological measurements such as heart rate and heart rate variability, for remote health monitoring and for detecting symptoms of such breathing-related disorders. In this work, we formulate a deep learning based approach to estimate minute ventilation remotely on a private dataset; the dataset will be made public upon acceptance of this work. We use two versions of a deep neural network to estimate minute ventilation from data streams obtained through wearable heart rate and respiratory devices. We demonstrate that the simple design of our pipeline - which includes lightweight deep neural networks - can be easily incorporated into real-time health monitoring systems.
[ { "version": "v1", "created": "Mon, 29 Aug 2022 00:42:48 GMT" } ]
2022-08-30T00:00:00
[ [ "Minati", "Daniel", "" ], [ "Sams", "Ludwik", "" ], [ "Li", "Karen", "" ], [ "Ji", "Bo", "" ], [ "Vardhan", "Krishna", "" ] ]
new_dataset
0.999736
2208.13333
Chen Cheng
Chen Cheng
Real-Time Mask Detection Based on SSD-MobileNetV2
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
After the outbreak of COVID-19, mask detection, as the most convenient and effective means of prevention, plays a crucial role in epidemic prevention and control. An excellent automatic real-time mask detection system can substantially reduce the workload of the relevant staff. However, by analyzing the existing mask detection approaches, we find that they are mostly resource-intensive and do not achieve a good balance between speed and accuracy. Moreover, no comprehensive face mask dataset currently exists. In this paper, we propose a new architecture for mask detection. Our system uses SSD as the mask locator and classifier, and replaces its VGG-16 backbone with MobileNetV2 to extract image features while greatly reducing the number of parameters; therefore, our system can be deployed on embedded devices. Transfer learning is used to transfer pre-trained models from other domains to our model. Data augmentation methods such as MixUp effectively prevent overfitting and reduce the dependence on large-scale datasets. Experiments in practical scenarios demonstrate that our system performs well in real-time mask detection.
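The abstract names MixUp among its augmentation methods. MixUp itself is a standard technique, so it can be sketched faithfully: two samples and their one-hot labels are convexly blended with a Beta-distributed weight. The toy image shape and the alpha value below are our assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x1, y1, x2, y2, alpha=0.2):
    """Blend two samples and their one-hot labels (Zhang et al., 2018)."""
    lam = rng.beta(alpha, alpha)
    return lam * x1 + (1 - lam) * x2, lam * y1 + (1 - lam) * y2

# toy 224x224 RGB "images" with one-hot labels for {mask, no-mask}
img_a, img_b = rng.random((224, 224, 3)), rng.random((224, 224, 3))
lab_a, lab_b = np.array([1.0, 0.0]), np.array([0.0, 1.0])
mixed_img, mixed_lab = mixup(img_a, lab_a, img_b, lab_b)
print(mixed_lab)               # soft label, e.g. [0.93, 0.07]
```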
[ { "version": "v1", "created": "Mon, 29 Aug 2022 01:59:22 GMT" } ]
2022-08-30T00:00:00
[ [ "Cheng", "Chen", "" ] ]
new_dataset
0.999517
2208.13361
Yingjie Lao
Faysal Hossain Shezan, Yingjie Lao, Minlong Peng, Xin Wang, Mingming Sun, Ping Li
NL2GDPR: Automatically Develop GDPR Compliant Android Application Features from Natural Language
37 pages
null
null
null
cs.CR cs.CL
http://creativecommons.org/licenses/by/4.0/
The recent privacy leakage incidences and stricter policy regulations demand a much higher standard of compliance for companies and mobile apps. However, such obligations also impose significant challenges on app developers for complying with these regulations, which contain various perspectives, activities, and roles, especially for small companies and developers who are less experienced in this matter or have limited resources. To address these hurdles, we develop an automatic tool, NL2GDPR, which can generate policies from natural language descriptions from the developer while also ensuring the app's functionalities are compliant with the General Data Protection Regulation (GDPR). NL2GDPR is developed by leveraging an information extraction tool, OIA (Open Information Annotation), developed by Baidu Cognitive Computing Lab. At the core, NL2GDPR is a privacy-centric information extraction model, appended with a GDPR policy finder and a policy generator. We perform a comprehensive study to grasp the challenges in extracting privacy-centric information and generating privacy policies, while exploiting optimizations for this specific task. With NL2GDPR, we can achieve 92.9%, 95.2%, and 98.4% accuracy in correctly identifying GDPR policies related to personal data storage, process, and share types, respectively. To the best of our knowledge, NL2GDPR is the first tool that allows a developer to automatically generate GDPR-compliant policies, simply by entering a natural-language description of the app's features. Note that other non-GDPR-related features might be integrated with the generated features to build a complex app.
[ { "version": "v1", "created": "Mon, 29 Aug 2022 04:16:50 GMT" } ]
2022-08-30T00:00:00
[ [ "Shezan", "Faysal Hossain", "" ], [ "Lao", "Yingjie", "" ], [ "Peng", "Minlong", "" ], [ "Wang", "Xin", "" ], [ "Sun", "Mingming", "" ], [ "Li", "Ping", "" ] ]
new_dataset
0.999463
2208.13388
Fabrizio Montecchiani
Michael A. Bekos, Martin Gronemann, Fabrizio Montecchiani, Antonios Symvonis
Strictly-Convex Drawings of $3$-Connected Planar Graphs
Appears in the Proceedings of the 30th International Symposium on Graph Drawing and Network Visualization (GD 2022)
null
null
null
cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Strictly-convex straight-line drawings of $3$-connected planar graphs in small area form a classical research topic in Graph Drawing. Currently, the best-known area bound for such drawings is $O(n^2) \times O(n^2)$, as shown by B\'{a}r\'{a}ny and Rote by means of a sophisticated technique based on perturbing (non-strictly) convex drawings. Unfortunately, the hidden constants in such area bound are in the $10^4$ order. We present a new and easy-to-implement technique that yields strictly-convex straight-line planar drawings of $3$-connected planar graphs on an integer grid of size $2(n-1) \times (5n^3-4n^2)$.
[ { "version": "v1", "created": "Mon, 29 Aug 2022 06:41:38 GMT" } ]
2022-08-30T00:00:00
[ [ "Bekos", "Michael A.", "" ], [ "Gronemann", "Martin", "" ], [ "Montecchiani", "Fabrizio", "" ], [ "Symvonis", "Antonios", "" ] ]
new_dataset
0.995266
2208.13424
Stefano Maria Nicoletti
Stefano M. Nicoletti and E. Moritz Hahn and Marielle Stoelinga
BFL: a Logic to Reason about Fault Trees
null
null
10.1109/DSN53405.2022.00051
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Safety-critical infrastructures must operate safely and reliably. Fault tree analysis is a widespread method used to assess risks in these systems: fault trees (FTs) are required - among others - by the Federal Aviation Authority, the Nuclear Regulatory Commission, in the ISO26262 standard for autonomous driving and for software development in aerospace systems. Although popular both in industry and academia, FTs lack a systematic way to formulate powerful and understandable analysis queries. In this paper, we aim to fill this gap and introduce Boolean Fault tree Logic (BFL), a logic to reason about FTs. BFL is a simple, yet expressive logic that supports easier formulation of complex scenarios and specification of FT properties. Alongside BFL, we present model checking algorithms based on binary decision diagrams (BDDs) to analyse specified properties in BFL, patterns and an algorithm to construct counterexamples. Finally, we propose a case-study application of BFL by analysing a COVID19-related FT.
[ { "version": "v1", "created": "Mon, 29 Aug 2022 08:48:23 GMT" } ]
2022-08-30T00:00:00
[ [ "Nicoletti", "Stefano M.", "" ], [ "Hahn", "E. Moritz", "" ], [ "Stoelinga", "Marielle", "" ] ]
new_dataset
0.999352
2208.13427
Sun Woo Park
Sun Woo Park, Yun Young Choi, Dosang Joe, U Jin Choi, Youngho Woo
The PWLR Graph Representation: A Persistent Weisfeiler-Lehman scheme with Random Walks for Graph Classification
Accepted to the ICML 2022 Workshop on Topology, Algebra, and Geometry in Machine Learning
null
null
null
cs.LG math.AT
http://creativecommons.org/licenses/by/4.0/
This paper presents the Persistent Weisfeiler-Lehman Random walk scheme (abbreviated as PWLR) for graph representations, a novel mathematical framework which produces a collection of explainable low-dimensional representations of graphs with discrete and continuous node features. The proposed scheme effectively incorporates normalized Weisfeiler-Lehman procedure, random walks on graphs, and persistent homology. We thereby integrate three distinct properties of graphs, which are local topological features, node degrees, and global topological invariants, while preserving stability from graph perturbations. This generalizes many variants of Weisfeiler-Lehman procedures, which are primarily used to embed graphs with discrete node labels. Empirical results suggest that these representations can be efficiently utilized to produce comparable results to state-of-the-art techniques in classifying graphs with discrete node labels, and enhanced performances in classifying those with continuous node features.
[ { "version": "v1", "created": "Mon, 29 Aug 2022 08:50:37 GMT" } ]
2022-08-30T00:00:00
[ [ "Park", "Sun Woo", "" ], [ "Choi", "Yun Young", "" ], [ "Joe", "Dosang", "" ], [ "Choi", "U Jin", "" ], [ "Woo", "Youngho", "" ] ]
new_dataset
0.991885
2208.13486
Sadra Sabouri
Sadra Sabouri, Elnaz Rahmati, Soroush Gooran, Hossein Sameti
naab: A ready-to-use plug-and-play corpus for Farsi
6 pages, 2 figures
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Huge corpora of textual data are a crucial need for training deep models such as transformer-based ones, and this issue is even more pressing for lower-resource languages like Farsi. We propose naab, the biggest cleaned and ready-to-use open-source textual corpus in Farsi. It contains about 130GB of data, 250 million paragraphs, and 15 billion words. The project name is derived from the Farsi word naab, which means pure and high-grade. We also provide the raw version of the corpus, called naab-raw, and an easy-to-use preprocessor that can be employed by those who want to build a customized corpus.
[ { "version": "v1", "created": "Mon, 29 Aug 2022 10:40:58 GMT" } ]
2022-08-30T00:00:00
[ [ "Sabouri", "Sadra", "" ], [ "Rahmati", "Elnaz", "" ], [ "Gooran", "Soroush", "" ], [ "Sameti", "Hossein", "" ] ]
new_dataset
0.994451
2208.13523
Zainab Zaidi
Zainab Zaidi, Mengbin Ye, Fergus John Samon, Abdisalam Jama, Binduja Gopalakrishnan, Chenhao Gu, Shanika Karunasekera, Jamie Evans, and Yoshihisa Kashima
Demystifying the COVID-19 vaccine discourse on Twitter
null
null
null
null
cs.SI cs.CL
http://creativecommons.org/licenses/by/4.0/
Developing an understanding of the public discourse on COVID-19 vaccination on social media is important not only for addressing the current COVID-19 pandemic, but also for future pathogen outbreaks. We examine a Twitter dataset containing 75 million English tweets discussing COVID-19 vaccination from March 2020 to March 2021. We train a stance detection algorithm using natural language processing (NLP) techniques to classify tweets as `anti-vax' or `pro-vax', and examine the main topics of discourse using topic modelling techniques. While pro-vax tweets (37 million) far outnumbered anti-vax tweets (10 million), a majority of tweets from both stances (63% of anti-vax and 53% of pro-vax tweets) came from dual-stance users who posted both pro- and anti-vax tweets during the observation period. Pro-vax tweets focused mostly on vaccine development, while anti-vax tweets covered a wide range of topics, some of which included genuine concerns, though there was a large dose of falsehoods. A number of topics were common to both stances, though pro- and anti-vax tweets discussed them from opposite viewpoints. Memes and jokes were amongst the most retweeted messages. While our results suggest that concerns about polarisation and the online prevalence of anti-vax discourse are unfounded, targeted countering of falsehoods remains important.
[ { "version": "v1", "created": "Mon, 29 Aug 2022 11:56:21 GMT" } ]
2022-08-30T00:00:00
[ [ "Zaidi", "Zainab", "" ], [ "Ye", "Mengbin", "" ], [ "Samon", "Fergus John", "" ], [ "Jama", "Abdisalam", "" ], [ "Gopalakrishnan", "Binduja", "" ], [ "Gu", "Chenhao", "" ], [ "Karunasekera", "Shanika", "" ], [ "Evans", "Jamie", "" ], [ "Kashima", "Yoshihisa", "" ] ]
new_dataset
0.998297
2208.13550
Snehasis Banerjee
Vivek Chandel, Snehasis Banerjee, Avik Ghose
ProxiTrak: Intelligent Enablement of Social Distancing & Contact Tracing for a Safer Workplace in the New Normal
CSI YITPA Region II Winning Paper
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper describes an innovative solution that enables enterprises to bring their associates (or employees) back to physical workspaces for critical operations in a safe manner in the wake of the current COVID-19 pandemic.
[ { "version": "v1", "created": "Thu, 25 Aug 2022 12:50:12 GMT" } ]
2022-08-30T00:00:00
[ [ "Chandel", "Vivek", "" ], [ "Banerjee", "Snehasis", "" ], [ "Ghose", "Avik", "" ] ]
new_dataset
0.984275
2208.13560
Marco Vassena
Marco Vassena, Alejandro Russo, Deepak Garg, Vineet Rajani, Deian Stefan
From Fine- to Coarse-Grained Dynamic Information Flow Control and Back, a Tutorial on Dynamic Information Flow
null
null
null
null
cs.PL cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
This tutorial provides a complete and homogeneous account of the latest advances in fine- and coarse-grained dynamic information-flow control (IFC) security. Since the 70s, the programming language and the operating system communities have proposed different IFC approaches. IFC operating systems track information flows in a coarse-grained fashion, at the granularity of a process. In contrast, traditional language-based approaches to IFC are fine-grained: they track information flows at the granularity of program variables. For decades, researchers believed coarse-grained IFC to be strictly less permissive than fine-grained IFC -- coarse-grained IFC systems seem inherently less precise because they track less information -- and so granularity appeared to be a fundamental feature of IFC systems. We show that the granularity of the tracking system does not fundamentally restrict how precise or permissive dynamic IFC systems can be. To this end, we mechanize two mostly standard languages, one with a fine-grained dynamic IFC system and the other with a coarse-grained dynamic IFC system, and prove a semantics-preserving translation from each language to the other. In addition, we derive the standard security property of non-interference of each language from that of the other via our verified translation. These translations stand to have important implications on the usability of IFC approaches. The coarse- to fine-grained direction can be used to remove the label annotation burden that fine-grained systems impose on developers, while the fine- to coarse-grained translation shows that coarse-grained systems -- which are easier to design and implement -- can track information as precisely as fine-grained systems and provides an algorithm for automatically retrofitting legacy applications to run on existing coarse-grained systems.
[ { "version": "v1", "created": "Mon, 29 Aug 2022 12:48:20 GMT" } ]
2022-08-30T00:00:00
[ [ "Vassena", "Marco", "" ], [ "Russo", "Alejandro", "" ], [ "Garg", "Deepak", "" ], [ "Rajani", "Vineet", "" ], [ "Stefan", "Deian", "" ] ]
new_dataset
0.995478
2208.13626
Vasu Sharma
Vasu Sharma, Prasoon Goyal, Kaixiang Lin, Govind Thattai, Qiaozi Gao, Gaurav S. Sukhatme
CH-MARL: A Multimodal Benchmark for Cooperative, Heterogeneous Multi-Agent Reinforcement Learning
null
null
null
null
cs.AI cs.CV cs.LG cs.MA cs.RO
http://creativecommons.org/licenses/by/4.0/
We propose a multimodal (vision-and-language) benchmark for cooperative and heterogeneous multi-agent learning. We introduce a benchmark multimodal dataset with tasks involving collaboration between multiple simulated heterogeneous robots in a rich multi-room home environment. We provide an integrated learning framework, multimodal implementations of state-of-the-art multi-agent reinforcement learning techniques, and a consistent evaluation protocol. Our experiments investigate the impact of different modalities on multi-agent learning performance. We also introduce a simple message passing method between agents. The results suggest that multimodality introduces unique challenges for cooperative multi-agent learning and there is significant room for advancing multi-agent reinforcement learning methods in such settings.
[ { "version": "v1", "created": "Fri, 26 Aug 2022 02:21:31 GMT" } ]
2022-08-30T00:00:00
[ [ "Sharma", "Vasu", "" ], [ "Goyal", "Prasoon", "" ], [ "Lin", "Kaixiang", "" ], [ "Thattai", "Govind", "" ], [ "Gao", "Qiaozi", "" ], [ "Sukhatme", "Gaurav S.", "" ] ]
new_dataset
0.999316
2208.13679
Abtin Molavi
Abtin Molavi, Amanda Xu, Martin Diges, Lauren Pick, Swamit Tannu, Aws Albarghouthi
Qubit Mapping and Routing via MaxSAT
null
null
null
null
cs.AR quant-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
Near-term quantum computers will operate in a noisy environment, without error correction. A critical problem for near-term quantum computing is laying out a logical circuit onto a physical device with limited connectivity between qubits. This is known as the qubit mapping and routing (QMR) problem, an intractable combinatorial problem. It is important to solve QMR as optimally as possible to reduce the amount of added noise, which may render a quantum computation useless. In this paper, we present a novel approach for optimally solving the QMR problem via a reduction to maximum satisfiability (MAXSAT). Additionally, we present two novel relaxation ideas that shrink the size of the MAXSAT constraints by exploiting the structure of a quantum circuit. Our thorough empirical evaluation demonstrates (1) the scalability of our approach compared to state-of-the-art optimal QMR techniques (solving more than 3x as many benchmarks with a 40x speedup), (2) the significant cost reduction compared to state-of-the-art heuristic approaches (an average of ~5x swap reduction), and (3) the power of our proposed constraint relaxations.
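The paper's exact MAXSAT encoding is not given in the abstract, so the sketch below only illustrates the paradigm it reduces to - hard clauses that must hold plus weighted soft clauses whose violated weight is minimized - using the python-sat package's RC2 solver. The clauses here are a toy stand-in (think "exactly one placement, with preferences"), not the paper's QMR encoding:

```python
from pysat.formula import WCNF
from pysat.examples.rc2 import RC2

wcnf = WCNF()
wcnf.append([1, 2])             # hard: at least one of the two placements
wcnf.append([-1, -2])           # hard: but not both (toy exclusivity)
wcnf.append([1], weight=3)      # soft: prefer placement 1 (violating costs 3)
wcnf.append([2], weight=1)      # soft: prefer placement 2 (violating costs 1)

with RC2(wcnf) as solver:
    model = solver.compute()    # optimum keeps [1] and gives up [2]
    print(model, solver.cost)   # -> [1, -2] 1
```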
[ { "version": "v1", "created": "Mon, 29 Aug 2022 15:39:04 GMT" } ]
2022-08-30T00:00:00
[ [ "Molavi", "Abtin", "" ], [ "Xu", "Amanda", "" ], [ "Diges", "Martin", "" ], [ "Pick", "Lauren", "" ], [ "Tannu", "Swamit", "" ], [ "Albarghouthi", "Aws", "" ] ]
new_dataset
0.99708
1712.05474
Roozbeh Mottaghi
Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Matt Deitke, Kiana Ehsani, Daniel Gordon, Yuke Zhu, Aniruddha Kembhavi, Abhinav Gupta, Ali Farhadi
AI2-THOR: An Interactive 3D Environment for Visual AI
null
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce The House Of inteRactions (THOR), a framework for visual AI research, available at http://ai2thor.allenai.org. AI2-THOR consists of near photo-realistic 3D indoor scenes, where AI agents can navigate in the scenes and interact with objects to perform tasks. AI2-THOR enables research in many different domains including but not limited to deep reinforcement learning, imitation learning, learning by interaction, planning, visual question answering, unsupervised representation learning, object detection and segmentation, and learning models of cognition. The goal of AI2-THOR is to facilitate building visually intelligent models and push the research forward in this domain.
[ { "version": "v1", "created": "Thu, 14 Dec 2017 23:17:24 GMT" }, { "version": "v2", "created": "Wed, 13 Mar 2019 23:45:48 GMT" }, { "version": "v3", "created": "Fri, 15 Mar 2019 18:29:15 GMT" }, { "version": "v4", "created": "Fri, 26 Aug 2022 17:12:17 GMT" } ]
2022-08-29T00:00:00
[ [ "Kolve", "Eric", "" ], [ "Mottaghi", "Roozbeh", "" ], [ "Han", "Winson", "" ], [ "VanderBilt", "Eli", "" ], [ "Weihs", "Luca", "" ], [ "Herrasti", "Alvaro", "" ], [ "Deitke", "Matt", "" ], [ "Ehsani", "Kiana", "" ], [ "Gordon", "Daniel", "" ], [ "Zhu", "Yuke", "" ], [ "Kembhavi", "Aniruddha", "" ], [ "Gupta", "Abhinav", "" ], [ "Farhadi", "Ali", "" ] ]
new_dataset
0.997938
1802.07944
Anthony Labarre
Laurent Bulteau and Danny Hermelin and Anthony Labarre and St\'ephane Vialette
The Clever Shopper Problem
15 pages, 3 figures, to appear at the 13th International Computer Science Symposium in Russia (CSR 2018)
null
10.1007/978-3-319-90530-3_6
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We investigate a variant of the so-called "Internet Shopping Problem" introduced by Blazewicz et al. (2010), where a customer wants to buy a list of products at the lowest possible total cost from shops which offer discounts when purchases exceed a certain threshold. Although the problem is NP-hard, we provide exact algorithms for several cases, e.g. when each shop sells only two items, and an FPT algorithm parameterized by the number of items, or by the number of shops when all prices are equal. We complement each result with hardness proofs in order to draw a tight boundary between tractable and intractable cases. Finally, we give an approximation algorithm and hardness results for the problem of maximising the sum of discounts.
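For intuition about what is being parameterized, here is a brute-force sketch under a simplified discount model (each shop subtracts a fixed amount once its bill reaches a threshold - an assumption, since the exact discount scheme of Blazewicz et al. is not spelled out in the abstract). Its exponential loop over item-to-shop assignments is precisely what the paper's exact and FPT algorithms avoid:

```python
from itertools import product

def cheapest_purchase(prices, thresholds, discounts):
    """Exhaustively assign each item to a shop; a shop's bill is reduced by its
    discount once it meets the threshold. Exponential: tiny instances only."""
    n_shops, n_items = len(prices), len(prices[0])
    best = float("inf")
    for assignment in product(range(n_shops), repeat=n_items):
        total = 0.0
        for s in range(n_shops):
            bill = sum(prices[s][i] for i in range(n_items) if assignment[i] == s)
            if bill >= thresholds[s]:
                bill -= discounts[s]
            total += bill
        best = min(best, total)
    return best

# 2 shops x 3 items; shop 0 gives a 3-unit discount on bills of 10 or more
prices = [[5, 6, 4], [4, 7, 5]]
print(cheapest_purchase(prices, thresholds=[10, 12], discounts=[3, 4]))  # -> 11.0
```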
[ { "version": "v1", "created": "Thu, 22 Feb 2018 08:58:30 GMT" } ]
2022-08-29T00:00:00
[ [ "Bulteau", "Laurent", "" ], [ "Hermelin", "Danny", "" ], [ "Labarre", "Anthony", "" ], [ "Vialette", "Stéphane", "" ] ]
new_dataset
0.992276
2110.02035
Adri\`a Salvador Palau
David Amat Ol\'ondriz and Pon\c{c} Palau Puigdevall and Adri\`a Salvador Palau
FooDI-ML: a large multi-language dataset of food, drinks and groceries images and descriptions
null
null
null
null
cs.CV cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper we introduce the FooDI-ML dataset. This dataset contains over 1.5M unique images and over 9.5M store names, product names, descriptions, and collection sections gathered from the Glovo application. The data made available corresponds to food, drinks and groceries products from 37 countries in Europe, the Middle East, Africa and Latin America. The dataset comprises 33 languages, including 870K samples in languages of countries from Eastern Europe and Western Asia, such as Ukrainian and Kazakh, which have so far been underrepresented in publicly available visio-linguistic datasets. The dataset also includes widely spoken languages such as Spanish and English. To assist further research, we include benchmarks over two tasks: text-image retrieval and conditional image generation.
[ { "version": "v1", "created": "Tue, 5 Oct 2021 13:33:08 GMT" }, { "version": "v2", "created": "Fri, 26 Aug 2022 11:23:29 GMT" } ]
2022-08-29T00:00:00
[ [ "Olóndriz", "David Amat", "" ], [ "Puigdevall", "Ponç Palau", "" ], [ "Palau", "Adrià Salvador", "" ] ]
new_dataset
0.999893
2111.10970
Scott Davidoff
Rebecca Castano, Tiago Vaquero, Federico Rossi, Vandi Verma, Ellen Van Wyk, Dan Allard, Bennett Huffmann, Erin M. Murphy, Nihal Dhamani, Robert A. Hewitt, Scott Davidoff, Rashied Amini, Anthony Barrett, Julie Castillo-Rogez, Steve A. Chien, Mathieu Choukroun, Alain Dadaian, Raymond Francis, Benjamin Gorr, Mark Hofstadter, Mitch Ingham, Cristina Sorice and Iain Tierney
Operations for Autonomous Spacecraft
16 pages, 18 Figures, 1 Table, to be published in IEEE Aerospace 2022 (AeroConf 2022)
Proceedings of the 2022 IEEE Aerospace Conference (IEEE AERO 2022), 1-20
10.1109/AERO53065.2022.9843352
null
cs.RO cs.AI cs.HC cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Onboard autonomy technologies such as planning and scheduling, identification of scientific targets, and content-based data summarization, will lead to exciting new space science missions. However, the challenge of operating missions with such onboard autonomous capabilities has not been studied to a level of detail sufficient for consideration in mission concepts. These autonomy capabilities will require changes to current operations processes, practices, and tools. We have developed a case study to assess the changes needed to enable operators and scientists to operate an autonomous spacecraft by facilitating a common model between the ground personnel and the onboard algorithms. We assess the new operations tools and workflows necessary to enable operators and scientists to convey their desired intent to the spacecraft, and to be able to reconstruct and explain the decisions made onboard and the state of the spacecraft. Mock-ups of these tools were used in a user study to understand the effectiveness of the processes and tools in enabling a shared framework of understanding, and in the ability of the operators and scientists to effectively achieve mission science objectives.
[ { "version": "v1", "created": "Mon, 22 Nov 2021 03:26:22 GMT" } ]
2022-08-29T00:00:00
[ [ "Castano", "Rebecca", "" ], [ "Vaquero", "Tiago", "" ], [ "Rossi", "Federico", "" ], [ "Verma", "Vandi", "" ], [ "Van Wyk", "Ellen", "" ], [ "Allard", "Dan", "" ], [ "Huffmann", "Bennett", "" ], [ "Murphy", "Erin M.", "" ], [ "Dhamani", "Nihal", "" ], [ "Hewitt", "Robert A.", "" ], [ "Davidoff", "Scott", "" ], [ "Amini", "Rashied", "" ], [ "Barrett", "Anthony", "" ], [ "Castillo-Rogez", "Julie", "" ], [ "Chien", "Steve A.", "" ], [ "Choukroun", "Mathieu", "" ], [ "Dadaian", "Alain", "" ], [ "Francis", "Raymond", "" ], [ "Gorr", "Benjamin", "" ], [ "Hofstadter", "Mark", "" ], [ "Ingham", "Mitch", "" ], [ "Sorice", "Cristina", "" ], [ "Tierney", "Iain", "" ] ]
new_dataset
0.955969
2201.00589
Timo H\"ackel
Timo H\"ackel, Philipp Meyer, Franz Korf, Thomas C. Schmidt
Secure Time-Sensitive Software-Defined Networking in Vehicles
null
null
10.1109/TVT.2022.3202368
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current designs of future In-Vehicle Networks (IVN) prepare for switched Ethernet backbones, which can host advanced LAN technologies such as IEEE Time-Sensitive Networking (TSN) and Software-Defined Networking (SDN). In this paper, we present an integrated Time-Sensitive Software-Defined Networking (TSSDN) architecture that simultaneously enables control of synchronous and asynchronous real-time and best-effort communication for all IVN traffic classes. Despite the central SDN controller, we can validate that control can operate without a delay penalty for TSN traffic, provided protocols are properly mapped. We demonstrate how TSSDN adaptably and reliably enhances network security for in-vehicle communication. A systematic investigation of the possible control flow integrations with switched Ether-networks reveals that these strategies allow for shaping the attack surface of a software-defined IVN. We discuss embeddings of control flow identifiers on different layers, covering the range from a fully exposed mapping to deep encapsulation. We experimentally evaluate these strategies in a production vehicle, which we map to a modern Ethernet topology. Our findings indicate that visibility of automotive control flows on lower network layers enables isolation and access control throughout the network infrastructure. Such a TSSDN backbone can establish and survey trust zones within the IVN and reduce the attack surface of connected cars in various attack scenarios.
[ { "version": "v1", "created": "Mon, 3 Jan 2022 11:27:28 GMT" }, { "version": "v2", "created": "Fri, 26 Aug 2022 10:05:55 GMT" } ]
2022-08-29T00:00:00
[ [ "Häckel", "Timo", "" ], [ "Meyer", "Philipp", "" ], [ "Korf", "Franz", "" ], [ "Schmidt", "Thomas C.", "" ] ]
new_dataset
0.98911
2203.13296
Kevis-Kokitsi Maninis
Micha{\l} J. Tyszkiewicz, Kevis-Kokitsi Maninis, Stefan Popov, Vittorio Ferrari
RayTran: 3D pose estimation and shape reconstruction of multiple objects from videos with ray-traced transformers
ECCV 2022 camera ready
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a transformer-based neural network architecture for multi-object 3D reconstruction from RGB videos. It relies on two alternative ways to represent its knowledge: as a global 3D grid of features and an array of view-specific 2D grids. We progressively exchange information between the two with a dedicated bidirectional attention mechanism. We exploit knowledge about the image formation process to significantly sparsify the attention weight matrix, making our architecture feasible on current hardware, both in terms of memory and computation. We attach a DETR-style head on top of the 3D feature grid in order to detect the objects in the scene and to predict their 3D pose and 3D shape. Compared to previous methods, our architecture is single stage, end-to-end trainable, and it can reason holistically about a scene from multiple video frames without needing a brittle tracking step. We evaluate our method on the challenging Scan2CAD dataset, where we outperform (1) recent state-of-the-art methods for 3D object pose estimation from RGB videos; and (2) a strong alternative method combining Multi-view Stereo with RGB-D CAD alignment. We plan to release our source code.
[ { "version": "v1", "created": "Thu, 24 Mar 2022 18:49:12 GMT" }, { "version": "v2", "created": "Fri, 26 Aug 2022 08:18:52 GMT" } ]
2022-08-29T00:00:00
[ [ "Tyszkiewicz", "Michał J.", "" ], [ "Maninis", "Kevis-Kokitsi", "" ], [ "Popov", "Stefan", "" ], [ "Ferrari", "Vittorio", "" ] ]
new_dataset
0.999758
2203.15448
H\"armel Nestra
Dan Bogdanov (1), Joosep J\"a\"ager (1), Peeter Laud (1), H\"armel Nestra (1), Martin Pettai (1), Jaak Randmets (1), Ville Sokk (1), Kert Tali (1), Sandhra-Mirella Valdma (1) ((1) Cybernetica AS)
ZK-SecreC: a Domain-Specific Language for Zero Knowledge Proofs
75 pp
null
null
null
cs.PL cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present ZK-SecreC, a domain-specific language for zero-knowledge proofs. We present the rationale for its design, its syntax and semantics, and demonstrate its usefulness on a number of non-trivial examples. The design features a type system where each piece of data is assigned both a confidentiality and an integrity type, which are not orthogonal to each other. We perform an empirical evaluation of the statements produced by its compiler in terms of their size. We also show the integration of the compiler with the implementation of a zero-knowledge proof technique, and evaluate the running time of both Prover and Verifier.
[ { "version": "v1", "created": "Tue, 29 Mar 2022 11:35:11 GMT" }, { "version": "v2", "created": "Fri, 26 Aug 2022 13:43:41 GMT" } ]
2022-08-29T00:00:00
[ [ "Bogdanov", "Dan", "", "Cybernetica AS" ], [ "Jääger", "Joosep", "", "Cybernetica AS" ], [ "Laud", "Peeter", "", "Cybernetica AS" ], [ "Nestra", "Härmel", "", "Cybernetica AS" ], [ "Pettai", "Martin", "", "Cybernetica AS" ], [ "Randmets", "Jaak", "", "Cybernetica AS" ], [ "Sokk", "Ville", "", "Cybernetica AS" ], [ "Tali", "Kert", "", "Cybernetica AS" ], [ "Valdma", "Sandhra-Mirella", "", "Cybernetica AS" ] ]
new_dataset
0.999801
2204.00907
Antoine Lavault
Antoine Lavault and Axel Roebel and Matthieu Voiry
StyleWaveGAN: Style-based synthesis of drum sounds with extensive controls using generative adversarial networks
Accepted for publication in Sound and Music Computing 2022
null
10.5281/zenodo.6573360
null
cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
In this paper we introduce StyleWaveGAN, a style-based drum sound generator that is a variation of StyleGAN, a state-of-the-art image generator. By conditioning StyleWaveGAN on both the type of drum and several audio descriptors, we are able to synthesize waveforms faster than real-time on a GPU, directly in CD quality, up to a duration of 1.5s, while retaining a considerable amount of control over the generation. We also introduce an alternative to the progressive growing of GANs and experiment with the effect of dataset balancing for generative tasks. The experiments are carried out on an augmented subset of a publicly available dataset comprising different drums and cymbals. We evaluate against two recent drum generators, WaveGAN and NeuroDrum, demonstrating significantly improved generation quality (measured with the Fréchet Audio Distance) and interesting results with perceptual features.
[ { "version": "v1", "created": "Sat, 2 Apr 2022 17:27:17 GMT" } ]
2022-08-29T00:00:00
[ [ "Lavault", "Antoine", "" ], [ "Roebel", "Axel", "" ], [ "Voiry", "Matthieu", "" ] ]
new_dataset
0.999106
2205.03911
Orian Leitersdorf
Adir Kobovich, Orian Leitersdorf, Daniella Bar-Lev, Eitan Yaakobi
Codes for Constrained Periodicity
Accepted to The International Symposium on Information Theory and Its Applications (ISITA) 2022
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-nc-nd/4.0/
Reliability is an inherent challenge for the emerging nonvolatile technology of racetrack memories, and there exists a fundamental relationship between codes designed for racetrack memories and codes with constrained periodicity. Previous works have sought to construct codes that avoid periodicity in windows, yet have either only provided existence proofs or required high redundancy. This paper provides the first constructions for avoiding periodicity that are both efficient (average-linear time) and with low redundancy (near the lower bound). The proposed algorithms are based on iteratively repairing windows which contain periodicity until all the windows are valid. Intuitively, such algorithms should not converge, as there is no monotonic progression; yet, we prove convergence with average-linear time complexity by exploiting subtle properties of the encoder. Overall, we provide constructions that avoid periodicity in all windows, and we also study the cardinality of such constraints.
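A toy version of the repair loop the abstract describes - rescanning windows and rewriting any that are periodic - might look as follows. Here the repair uses random bits, so termination is only probabilistic; the paper's contribution is a careful deterministic encoder for which average-linear-time convergence is actually provable. The window length, period bound, and binary alphabet below are our assumptions:

```python
import random

def is_periodic(window, p_max):
    """True if the window repeats with some period p <= p_max."""
    return any(
        all(window[i] == window[i + p] for i in range(len(window) - p))
        for p in range(1, p_max + 1)
    )

def repair(word, L, p_max):
    """Rewrite any periodic length-L window with fresh random bits and rescan
    until every window is valid. Toy only: no redundancy accounting."""
    word = list(word)
    dirty = True
    while dirty:
        dirty = False
        for i in range(len(word) - L + 1):
            if is_periodic(word[i:i + L], p_max):
                word[i:i + L] = random.choices("01", k=L)
                dirty = True
    return "".join(word)

print(repair("0101010111010000", L=8, p_max=2))
```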
[ { "version": "v1", "created": "Sun, 8 May 2022 16:32:17 GMT" }, { "version": "v2", "created": "Thu, 25 Aug 2022 22:31:20 GMT" } ]
2022-08-29T00:00:00
[ [ "Kobovich", "Adir", "" ], [ "Leitersdorf", "Orian", "" ], [ "Bar-Lev", "Daniella", "" ], [ "Yaakobi", "Eitan", "" ] ]
new_dataset
0.97066
2205.07403
Guangsheng Shi
Guangsheng Shi, Ruifeng Li and Chao Ma
PillarNet: Real-Time and High-Performance Pillar-based 3D Object Detection
ECCV 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Real-time and high-performance 3D object detection is of critical importance for autonomous driving. Recent top-performing 3D object detectors mainly rely on point-based or 3D voxel-based convolutions, which are both computationally inefficient for onboard deployment. In contrast, pillar-based methods use solely 2D convolutions, which consume less computation resources, but they lag far behind their voxel-based counterparts in detection accuracy. In this paper, by examining the primary performance gap between pillar- and voxel-based detectors, we develop a real-time and high-performance pillar-based detector, dubbed PillarNet. The proposed PillarNet consists of a powerful encoder network for effective pillar feature learning, a neck network for spatial-semantic feature fusion, and the commonly used detection head. Using only 2D convolutions, PillarNet is flexible with respect to pillar size and compatible with classical 2D CNN backbones, such as VGGNet and ResNet. Additionally, PillarNet benefits from our designed orientation-decoupled IoU regression loss along with the IoU-aware prediction branch. Extensive experimental results on the large-scale nuScenes Dataset and Waymo Open Dataset demonstrate that the proposed PillarNet performs well over state-of-the-art 3D detectors in terms of effectiveness and efficiency. Code is available at \url{https://github.com/agent-sgs/PillarNet}.
[ { "version": "v1", "created": "Mon, 16 May 2022 00:14:50 GMT" }, { "version": "v2", "created": "Thu, 19 May 2022 07:37:11 GMT" }, { "version": "v3", "created": "Tue, 31 May 2022 07:52:07 GMT" }, { "version": "v4", "created": "Tue, 14 Jun 2022 14:02:33 GMT" }, { "version": "v5", "created": "Fri, 26 Aug 2022 03:21:15 GMT" } ]
2022-08-29T00:00:00
[ [ "Shi", "Guangsheng", "" ], [ "Li", "Ruifeng", "" ], [ "Ma", "Chao", "" ] ]
new_dataset
0.98363