Dataset schema (column, type, observed length range / values):

    id               string        lengths 9–10
    submitter        string        lengths 2–52
    authors          string        lengths 4–6.51k
    title            string        lengths 4–246
    comments         string        lengths 1–523
    journal-ref      string        lengths 4–345
    doi              string        lengths 11–120
    report-no        string        lengths 2–243
    categories       string        lengths 5–98
    license          string        9 distinct values
    abstract         string        lengths 33–3.33k
    versions         list
    update_date      timestamp[s]
    authors_parsed   list
    prediction       string        1 distinct value
    probability      float64       range 0.95–1
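The schema above can be explored with a short stdlib sketch; the two sample records are copied from the rows below, and the filtering threshold is illustrative:

```python
# Two records mirroring the schema above (values copied from the rows below).
rows = [
    {"id": "2301.10523", "categories": "cs.OH",
     "prediction": "new_dataset", "probability": 0.985768},
    {"id": "2301.10577", "categories": "cs.CL cs.AI cs.IR cs.LG",
     "prediction": "new_dataset", "probability": 0.992389},
]

# `probability` is a float64 in [0.95, 1]; keep high-confidence rows and
# split the space-separated `categories` field into a list.
confident = [r for r in rows if r["probability"] >= 0.99]
cats = {r["id"]: r["categories"].split() for r in confident}
print(cats)  # -> {'2301.10577': ['cs.CL', 'cs.AI', 'cs.IR', 'cs.LG']}
```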
2301.10523
Ilias Zosimadis
Ilias Zosimadis and Ioannis Stamelos
A Novel IoT-Based System for Ten Pin Bowling
20 pages, 6 figures
null
null
null
cs.OH
http://creativecommons.org/licenses/by-nc-nd/4.0/
Bowling is a target sport that is popular among all age groups, with both professional and amateur players. Delivering an accurate and consistent bowling throw into the lane requires proper motion technique. Consequently, this research presents a novel IoT-Cloud based system for providing real-time monitoring and coaching services to bowling athletes. The system includes two inertial measurement unit (IMU) sensors for capturing motion data, a mobile application, and a cloud server for processing the data. First, the quality of each phase of a throw is assessed using a Dynamic Time Warping (DTW) based algorithm. Second, an on-device technique is proposed to identify common bowling errors. Finally, an SVM classification model is employed to assess the skill level of bowling athletes. We recruited nine right-handed bowlers to perform 50 throws while wearing the two sensors and using the proposed system. The results of our experiments suggest that the proposed system can effectively and efficiently assess the quality of a throw, detect common bowling errors, and classify the skill level of the bowler.
[ { "version": "v1", "created": "Wed, 25 Jan 2023 11:04:31 GMT" } ]
2023-01-26T00:00:00
[ [ "Zosimadis", "Ilias", "" ], [ "Stamelos", "Ioannis", "" ] ]
new_dataset
0.985768
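The abstract in the record above assesses throw phases with a Dynamic Time Warping (DTW) based algorithm; a minimal stdlib sketch of the DTW distance it refers to (the signals and cost function are illustrative assumptions, not the paper's):

```python
def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW distance."""
    n, m = len(a), len(b)
    INF = float("inf")
    # dp[i][j] = minimal cost of aligning a[:i] with b[:j]
    dp = [[INF] * (m + 1) for _ in range(n + 1)]
    dp[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            dp[i][j] = cost + min(dp[i - 1][j],      # insertion
                                  dp[i][j - 1],      # deletion
                                  dp[i - 1][j - 1])  # match
    return dp[n][m]

# Illustrative: compare a "reference" throw-phase signal to a shifted copy.
ref = [0.0, 1.0, 2.0, 1.0, 0.0]
test = [0.0, 0.0, 1.0, 2.0, 1.0, 0.0]
print(dtw_distance(ref, test))  # -> 0.0 (the shift is absorbed by warping)
```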
2301.10577
Sushil Awale
Debayan Banerjee, Seid Muhie Yimam, Sushil Awale and Chris Biemann
ARDIAS: AI-Enhanced Research Management, Discovery, and Advisory System
null
null
null
null
cs.CL cs.AI cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
In this work, we present ARDIAS, a web-based application that aims to provide researchers with a full suite of discovery and collaboration tools. ARDIAS currently allows searching for authors and articles by name and gaining insights into the research topics of a particular researcher. With the aid of AI-based tools, ARDIAS aims to recommend potential collaborators and topics to researchers. In the near future, we aim to add tools that allow researchers to communicate with each other and start new projects.
[ { "version": "v1", "created": "Wed, 25 Jan 2023 13:30:10 GMT" } ]
2023-01-26T00:00:00
[ [ "Banerjee", "Debayan", "" ], [ "Yimam", "Seid Muhie", "" ], [ "Awale", "Sushil", "" ], [ "Biemann", "Chris", "" ] ]
new_dataset
0.992389
2301.10604
Veronika Solopova
Veronika Solopova, Oana-Iuliana Popescu, Christoph Benzm\"uller and Tim Landgraf
Automated multilingual detection of Pro-Kremlin propaganda in newspapers and Telegram posts
9 pages, 3 figures
null
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
The full-scale conflict between the Russian Federation and Ukraine generated an unprecedented amount of news articles and social media data reflecting opposing ideologies and narratives. These polarized campaigns have led to mutual accusations of misinformation and fake news, shaping an atmosphere of confusion and mistrust for readers worldwide. This study analyses how the media affected and mirrored public opinion during the first month of the war using news articles and Telegram news channels in Ukrainian, Russian, Romanian and English. We propose and compare two methods of multilingual automated pro-Kremlin propaganda identification, based on Transformers and linguistic features. We analyse the advantages and disadvantages of both methods, their adaptability to new genres and languages, and ethical considerations of their usage for content moderation. With this work, we aim to lay the foundation for further development of moderation tools tailored to the current conflict.
[ { "version": "v1", "created": "Wed, 25 Jan 2023 14:25:37 GMT" } ]
2023-01-26T00:00:00
[ [ "Solopova", "Veronika", "" ], [ "Popescu", "Oana-Iuliana", "" ], [ "Benzmüller", "Christoph", "" ], [ "Landgraf", "Tim", "" ] ]
new_dataset
0.991323
2301.10704
Kacper Wardega
Kacper Wardega, Max von Hippel, Roberto Tron, Cristina Nita-Rotaru, Wenchao Li
HoLA Robots: Mitigating Plan-Deviation Attacks in Multi-Robot Systems with Co-Observations and Horizon-Limiting Announcements
This is the long version of our paper accepted as an extended abstract to AAMAS'23
null
null
null
cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Emerging multi-robot systems rely on cooperation between humans and robots, with robots following automatically generated motion plans to service application-level tasks. Given the safety requirements associated with operating in proximity to humans and expensive infrastructure, it is important to understand and mitigate the security vulnerabilities of such systems caused by compromised robots who diverge from their assigned plans. We focus on centralized systems, where a *central entity* (CE) is responsible for determining and transmitting the motion plans to the robots, which report their location as they move following the plan. The CE checks that robots follow their assigned plans by comparing their expected location to the location they self-report. We show that this self-reporting monitoring mechanism is vulnerable to *plan-deviation attacks* where compromised robots don't follow their assigned plans while trying to conceal their movement by mis-reporting their location. We propose a two-pronged mitigation for plan-deviation attacks: (1) an attack detection technique leveraging both the robots' local sensing capabilities to report observations of other robots and *co-observation schedules* generated by the CE, and (2) a prevention technique where the CE issues *horizon-limiting announcements* to the robots, reducing their instantaneous knowledge of forward lookahead steps in the global motion plan. On a large-scale automated warehouse benchmark, we show that our solution enables attack prevention guarantees from a stealthy attacker that has compromised multiple robots.
[ { "version": "v1", "created": "Wed, 25 Jan 2023 17:11:14 GMT" } ]
2023-01-26T00:00:00
[ [ "Wardega", "Kacper", "" ], [ "von Hippel", "Max", "" ], [ "Tron", "Roberto", "" ], [ "Nita-Rotaru", "Cristina", "" ], [ "Li", "Wenchao", "" ] ]
new_dataset
0.979318
2301.10732
Pan He
Aotian Wu, Pan He, Xiao Li, Ke Chen, Sanjay Ranka, Anand Rangarajan
An Efficient Semi-Automated Scheme for Infrastructure LiDAR Annotation
Submitted to IEEE Intelligent Transportation Systems Transactions
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Most existing perception systems rely on sensory data acquired from cameras, which perform poorly in low light and adverse weather conditions. To resolve this limitation, we have witnessed advanced LiDAR sensors become popular in perception tasks in autonomous driving applications. Nevertheless, their usage in traffic monitoring systems is less ubiquitous. We identify two significant obstacles in cost-effectively and efficiently developing such a LiDAR-based traffic monitoring system: (i) public LiDAR datasets are insufficient for supporting perception tasks in infrastructure systems, and (ii) 3D annotations on LiDAR point clouds are time-consuming and expensive. To fill this gap, we present an efficient semi-automated annotation tool that automatically annotates LiDAR sequences with tracking algorithms while offering a fully annotated infrastructure LiDAR dataset -- FLORIDA (Florida LiDAR-based Object Recognition and Intelligent Data Annotation) -- which will be made publicly available. Our advanced annotation tool seamlessly integrates multi-object tracking (MOT), single-object tracking (SOT), and suitable trajectory post-processing techniques. Specifically, we introduce a human-in-the-loop schema in which annotators recursively fix and refine annotations imperfectly predicted by our tool and incrementally add them to the training dataset to obtain better SOT and MOT models. By repeating the process, we significantly increase the overall annotation speed by three to four times and obtain better qualitative annotations than a state-of-the-art annotation tool. The human annotation experiments verify the effectiveness of our annotation tool. In addition, we provide detailed statistics and object detection evaluation results for our dataset, which serves as a benchmark for perception tasks at traffic intersections.
[ { "version": "v1", "created": "Wed, 25 Jan 2023 17:42:15 GMT" } ]
2023-01-26T00:00:00
[ [ "Wu", "Aotian", "" ], [ "He", "Pan", "" ], [ "Li", "Xiao", "" ], [ "Chen", "Ke", "" ], [ "Ranka", "Sanjay", "" ], [ "Rangarajan", "Anand", "" ] ]
new_dataset
0.993061
2301.10733
Thien-Nam Dinh
Thien-Nam Dinh, Nicholas Pattengale, Steven Elliott
The Synchronic Web
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
The Synchronic Web is a distributed network for securing data provenance on the World Wide Web. By enabling clients around the world to freely commit digital information into a single shared view of history, it provides a foundational basis of truth on which to build decentralized and scalable trust across the Internet. Its core cryptographical capability allows mutually distrusting parties to create and verify statements of the following form: "I commit to this information--and only this information--at this moment in time." The backbone of the Synchronic Web infrastructure is a simple, small, and semantic-free blockchain that is accessible to any Internet-enabled entity. The infrastructure is maintained by a permissioned network of well-known servers, called notaries, and accessed by a permissionless group of clients, called journals. Through an evolving stack of flexible and composable semantic specifications, the parties cooperate to generate synchronic commitments over arbitrary data. When integrated with existing infrastructures, adapted to diverse domains, and scaled across the breadth of cyberspace, the Synchronic Web provides a ubiquitous mechanism to lock the world's data into unique points in discrete time and digital space.
[ { "version": "v1", "created": "Wed, 25 Jan 2023 17:48:37 GMT" } ]
2023-01-26T00:00:00
[ [ "Dinh", "Thien-Nam", "" ], [ "Pattengale", "Nicholas", "" ], [ "Elliott", "Steven", "" ] ]
new_dataset
0.998874
1904.07088
Frederik Hauser
Frederik Hauser and Mark Schmidt and Marco H\"aberle and Michael Menth
P4-MACsec: Dynamic Topology Monitoring and Data Layer Protection with MACsec in P4-SDN
Submitted to JSAC "Series on Network Softwarization & Enablers" on 04/15/2019
null
10.1109/ACCESS.2020.2982859
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose P4-MACsec to protect network links between P4 switches through automated deployment of MACsec, a widespread IEEE standard for securing Layer 2 infrastructures. It is supported by switches and routers from major manufacturers and has only minor performance limitations compared to VPN technologies such as IPsec. P4-MACsec introduces a data plane implementation of MACsec including AES-GCM encryption and decryption directly on P4 switches. P4-MACsec features a two-tier control plane structure where local controllers running on the P4 switches interact with a central controller. We propose a novel secure link discovery mechanism that leverages protected LLDP frames and the two-tier control plane structure for secure and efficient management of a global link map. Automated deployment of MACsec creates secure channels, generates keying material, and configures the P4 switches for each detected link between two P4 switches. It detects link changes and performs rekeying to provide a secure, configuration-free operation of MACsec. In this paper, we review the technological background of P4-MACsec and explain its architecture. To demonstrate the feasibility of P4-MACsec, we implement it on the BMv2 P4 software switch and validate the prototype through experiments. We evaluate its performance through experiments that focus on TCP throughput and round-trip time. We publish the prototype and experiment setups on GitHub.
[ { "version": "v1", "created": "Mon, 15 Apr 2019 14:49:57 GMT" } ]
2023-01-25T00:00:00
[ [ "Hauser", "Frederik", "" ], [ "Schmidt", "Mark", "" ], [ "Häberle", "Marco", "" ], [ "Menth", "Michael", "" ] ]
new_dataset
0.996137
1907.03544
Frederik Hauser
Frederik Hauser, Mark Schmidt, Michael Menth
xRAC: Execution and Access Control for Restricted Application Containers on Managed Hosts
null
null
10.1109/NOMS47738.2020.9110380
null
cs.NI cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose xRAC to permit users to run special applications on managed hosts and to grant them access to protected network resources. We use restricted application containers (RACs) for that purpose. A RAC is a virtualization container with only a selected set of applications. Authentication verifies the RAC user's identity and the integrity of the RAC image. If the user is permitted to use the RAC on a managed host, launching the RAC is authorized and access to protected network resources may be given, e.g., to internal networks, servers, or the Internet. xRAC simplifies traffic control as the traffic of a RAC has a unique IPv6 address so that it can be easily identified in the network. The architecture of xRAC reuses standard technologies, protocols, and infrastructure. Those are the Docker virtualization platform and 802.1X including EAP-over-UDP and RADIUS. Thus, xRAC improves network security without modifying core parts of applications, hosts, and infrastructure. In this paper, we review the technological background of xRAC, explain its architecture, discuss selected use cases, and investigate its performance. To demonstrate the feasibility of xRAC, we implement it based on standard components with only a few modifications. Finally, we validate xRAC through experiments.
[ { "version": "v1", "created": "Mon, 8 Jul 2019 12:11:26 GMT" } ]
2023-01-25T00:00:00
[ [ "Hauser", "Frederik", "" ], [ "Schmidt", "Mark", "" ], [ "Menth", "Michael", "" ] ]
new_dataset
0.986794
1907.03593
Frederik Hauser
Frederik Hauser, Marco H\"aberle, Mark Schmidt, Michael Menth
P4-IPsec: Site-to-Site and Host-to-Site VPN with IPsec in P4-Based SDN
null
null
10.1109/ACCESS.2020.3012738
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we present P4-IPsec, a concept for IPsec in software-defined networks (SDN) using P4 programmable data planes. The prototype implementation features ESP in tunnel mode and supports different cipher suites. P4-capable switches are programmed to serve as IPsec tunnel endpoints. We also provide a client agent to configure tunnel endpoints on Linux hosts so that site-to-site and host-to-site application scenarios can be supported, which form the basis for virtual private networks (VPNs). While traditional VPNs require complex key exchange protocols like IKE to set up and renew tunnel endpoints, P4-IPsec benefits from an SDN controller to accomplish these tasks. One goal of this experimental work is to investigate how well P4-IPsec can be implemented on existing P4 switches. We present a prototype for the BMv2 P4 software switch, evaluate its performance, and publish its source code on GitHub. We explain why we could not provide a useful implementation with the NetFPGA SUME board. For the Edgecore Wedge 100BF-32X Tofino-based switch, we present two prototype implementations to cope with a missing crypto unit. As another contribution of this paper, we provide technological background of P4 and IPsec and give a comprehensive review of security applications in P4, IPsec in SDN, and IPsec data plane implementations. To the best of our knowledge, P4-IPsec is the first implementation of IPsec for P4-based SDN.
[ { "version": "v1", "created": "Mon, 8 Jul 2019 13:18:46 GMT" }, { "version": "v2", "created": "Sun, 5 Jul 2020 14:18:46 GMT" } ]
2023-01-25T00:00:00
[ [ "Hauser", "Frederik", "" ], [ "Häberle", "Marco", "" ], [ "Schmidt", "Mark", "" ], [ "Menth", "Michael", "" ] ]
new_dataset
0.999593
2003.06880
Liat Peterfreund
Liat Peterfreund
Grammars for Document Spanners
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
We propose a new grammar-based language for defining information extractors from documents (text) that is built upon the well-studied framework of document spanners for extracting structured data from text. While previously studied formalisms for document spanners are mainly based on regular expressions, we use an extension of context-free grammars, called extraction grammars, to define the new class of context-free spanners. Extraction grammars are simply context-free grammars extended with variables that capture interval positions of the document, namely spans. While regular expressions are efficient for tokenizing and tagging, context-free grammars are also efficient for capturing structural properties. Indeed, we show that context-free spanners are strictly more expressive than their regular counterparts. We reason about the expressive power of our new class and present a pushdown-automata model that captures it. We show that extraction grammars can be evaluated with polynomial data complexity. Nevertheless, as the degree of the polynomial depends on the query, we present an enumeration algorithm for unambiguous extraction grammars that, after quintic preprocessing, outputs the results sequentially, without repetitions, with a constant delay between every two consecutive ones.
[ { "version": "v1", "created": "Sun, 15 Mar 2020 17:50:18 GMT" }, { "version": "v2", "created": "Tue, 24 Mar 2020 11:36:38 GMT" }, { "version": "v3", "created": "Mon, 20 Apr 2020 17:00:06 GMT" }, { "version": "v4", "created": "Thu, 12 Nov 2020 11:10:52 GMT" }, { "version": "v5", "created": "Sat, 13 Mar 2021 10:15:15 GMT" }, { "version": "v6", "created": "Tue, 24 Jan 2023 15:32:26 GMT" } ]
2023-01-25T00:00:00
[ [ "Peterfreund", "Liat", "" ] ]
new_dataset
0.981969
2101.07095
Andrew Lewis-Pye
Andrew Lewis-Pye, Tim Roughgarden
Byzantine Generals in the Permissionless Setting
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Consensus protocols have traditionally been studied in a setting where all participants are known to each other from the start of the protocol execution. In the parlance of the 'blockchain' literature, this is referred to as the permissioned setting. What differentiates Bitcoin from these previously studied protocols is that it operates in a permissionless setting, i.e. it is a protocol for establishing consensus over an unknown network of participants that anybody can join, with as many identities as they like in any role. The arrival of this new form of protocol brings with it many questions. Beyond Bitcoin, what can we prove about permissionless protocols in a general sense? How does recent work on permissionless protocols in the blockchain literature relate to the well-developed history of research on permissioned protocols in distributed computing? To answer these questions, we describe a formal framework for the analysis of both permissioned and permissionless systems. Our framework allows for "apples-to-apples" comparisons between different categories of protocols and, in turn, the development of theory to formally discuss their relative merits. A major benefit of the framework is that it facilitates the application of a rich history of proofs and techniques in distributed computing to problems in blockchain and the study of permissionless systems. Within our framework, we then address the questions above. We consider the Byzantine Generals Problem as a formalisation of the problem of reaching consensus, and address a programme of research that asks, "Under what adversarial conditions, and for what types of permissionless protocol, is consensus possible?" We prove a number of results for this programme, our main result being that deterministic consensus is not possible for decentralised permissionless protocols. To close, we give a list of eight open questions.
[ { "version": "v1", "created": "Mon, 18 Jan 2021 14:36:36 GMT" }, { "version": "v2", "created": "Thu, 4 Feb 2021 12:49:22 GMT" }, { "version": "v3", "created": "Mon, 8 Feb 2021 14:54:43 GMT" }, { "version": "v4", "created": "Thu, 11 Feb 2021 09:09:51 GMT" }, { "version": "v5", "created": "Fri, 12 Feb 2021 09:35:41 GMT" }, { "version": "v6", "created": "Mon, 15 Feb 2021 09:06:37 GMT" }, { "version": "v7", "created": "Sun, 10 Oct 2021 05:32:27 GMT" }, { "version": "v8", "created": "Tue, 8 Feb 2022 15:37:22 GMT" }, { "version": "v9", "created": "Tue, 24 Jan 2023 10:05:30 GMT" } ]
2023-01-25T00:00:00
[ [ "Lewis-Pye", "Andrew", "" ], [ "Roughgarden", "Tim", "" ] ]
new_dataset
0.994089
2101.07578
Quan Quan
Quan Quan, Rao Fu, Mengxin Li, Donghui Wei, Yan Gao and Kai-Yuan Cai
Practical Distributed Control for VTOL UAVs to Pass a Virtual Tube
null
null
10.1109/TIV.2021.3123110
null
cs.RO cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned Aerial Vehicles (UAVs) are now becoming increasingly accessible to amateur and commercial users alike. An air traffic management (ATM) system is needed to help ensure that this newest entrant into the skies does not collide with others. In an ATM system, airspace can be composed of airways, intersections and nodes. In this paper, for simplicity, we focus on distributed coordination of the motions of Vertical TakeOff and Landing (VTOL) UAVs passing through an airway. This is formulated as a virtual tube passing problem, which includes passing a virtual tube, inter-agent collision avoidance and keeping within the virtual tube. Lyapunov-like functions are designed elaborately, and formal analysis based on the invariant set theorem shows that all UAVs can pass the virtual tube without getting trapped, avoid collisions and keep within the virtual tube. Moreover, under the proposed distributed control, a VTOL UAV can keep away from another VTOL UAV or return to the virtual tube as soon as possible, once it enters the safety area of another UAV or collides with the virtual tube while passing through it. Simulations and experiments are carried out to show the effectiveness of the proposed method and to compare it with other methods.
[ { "version": "v1", "created": "Tue, 19 Jan 2021 11:52:30 GMT" }, { "version": "v2", "created": "Fri, 30 Jul 2021 18:37:56 GMT" } ]
2023-01-25T00:00:00
[ [ "Quan", "Quan", "" ], [ "Fu", "Rao", "" ], [ "Li", "Mengxin", "" ], [ "Wei", "Donghui", "" ], [ "Gao", "Yan", "" ], [ "Cai", "Kai-Yuan", "" ] ]
new_dataset
0.999546
2108.03990
Zhengyi Liu
Zhengyi Liu, Yuan Wang, Zhengzheng Tu, Yun Xiao, Bin Tang
TriTransNet: RGB-D Salient Object Detection with a Triplet Transformer Embedding Network
null
null
10.1145/3474085.3475601
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Salient object detection is a pixel-level dense prediction task that highlights the prominent objects in a scene. Recently, the U-Net framework has been widely used, where continuous convolution and pooling operations generate multi-level features that are complementary to each other. In view of the greater contribution of high-level features to performance, we propose a triplet transformer embedding module to enhance them by learning long-range dependencies across layers. It is the first to use three transformer encoders with shared weights to enhance multi-level features. By further designing a scale adjustment module to process the input, devising a three-stream decoder to process the output, and attaching depth features to color features for multi-modal fusion, the proposed triplet transformer embedding network (TriTransNet) achieves state-of-the-art performance in RGB-D salient object detection, and pushes the performance to a new level. Experimental results demonstrate the effectiveness of the proposed modules and the competitiveness of TriTransNet.
[ { "version": "v1", "created": "Mon, 9 Aug 2021 12:42:56 GMT" } ]
2023-01-25T00:00:00
[ [ "Liu", "Zhengyi", "" ], [ "Wang", "Yuan", "" ], [ "Tu", "Zhengzheng", "" ], [ "Xiao", "Yun", "" ], [ "Tang", "Bin", "" ] ]
new_dataset
0.999451
2112.01006
Yan Gao
Quan Quan, Yan Gao, Chenggang Bai
Distributed Control for a Robotic Swarm to Pass through a Curve Virtual Tube
18 pages, 21 figures
null
10.1016/j.robot.2023.104368
null
cs.RO cs.MA cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Robotic swarm systems are now becoming increasingly attractive for many challenging applications. The main task for any robot is to reach the destination while keeping a safe separation from other robots and obstacles. In many scenarios, robots need to move within a narrow corridor, through a window or a doorframe. In order to guide all robots to move in a cluttered environment, a curve virtual tube with no obstacle inside is carefully designed in this paper. Since there is no obstacle inside the tube, the area within it can be seen as a safety zone. Then, a distributed swarm controller is proposed with three elaborate control terms: a line approaching term, a robot avoidance term and a tube keeping term. Formal analysis and proofs show that the curve virtual tube passing problem can be solved in finite time. For convenience in practical use, a modified controller with approximate control performance is put forward. Finally, the effectiveness of the proposed method is validated by numerical simulations and real experiments. To show the advantages of the proposed method, a comparison between our method and the control barrier function method is also presented in terms of calculation speed.
[ { "version": "v1", "created": "Thu, 2 Dec 2021 06:33:36 GMT" }, { "version": "v2", "created": "Tue, 17 May 2022 12:39:06 GMT" } ]
2023-01-25T00:00:00
[ [ "Quan", "Quan", "" ], [ "Gao", "Yan", "" ], [ "Bai", "Chenggang", "" ] ]
new_dataset
0.999022
2207.06666
Yan Gao
Yan Gao, Chenggang Bai, Quan Quan
Distributed Control for a Multi-Agent System to Pass through a Connected Quadrangle Virtual Tube
12 pages,14 figures. arXiv admin note: substantial text overlap with arXiv:2112.01006
null
10.1109/TCNS.2022.3203936
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In order to guide a multi-agent system in a cluttered environment, a connected quadrangle virtual tube is designed for all agents to keep moving within; its basis is called the single trapezoid virtual tube. Since there is no obstacle inside the tube, the area within it can be seen as a safety zone. Then, a distributed swarm controller is proposed for the single trapezoid virtual tube passing problem. This problem is resolved by a gradient vector field method with no local minima. Formal analyses and proofs show that all agents are able to pass the single trapezoid virtual tube. A modified controller is then put forward for convenience in practical use. For the connected quadrangle virtual tube, a modified switching logic is proposed to avoid deadlock and prevent agents from moving outside the virtual tube. Finally, the effectiveness of the proposed method is validated by numerical simulations and real experiments.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 05:35:17 GMT" } ]
2023-01-25T00:00:00
[ [ "Gao", "Yan", "" ], [ "Bai", "Chenggang", "" ], [ "Quan", "Quan", "" ] ]
new_dataset
0.996924
2301.00328
Rajarshi Roy Chowdhury
Rajarshi Roy Chowdhury, Azam Che Idris and Pg Emeroylariffion Abas
Internet of Things: Digital Footprints Carry A Device Identity
8th Brunei International Conference on Engineering and Technology (BICET 2021), Universiti Teknologi Brunei
null
10.1063/5.0111335
null
cs.LG cs.CR
http://creativecommons.org/licenses/by/4.0/
The usage of technologically advanced devices has seen a boom in many domains, including education, automation, and healthcare, with most of the services requiring Internet connectivity. To secure a network, device identification plays a key role. In this paper, a device fingerprinting (DFP) model, which is able to distinguish between Internet of Things (IoT) and non-IoT devices, as well as uniquely identify individual devices, has been proposed. Four statistical features have been extracted from five consecutive device-originated packets to generate individual device fingerprints. The method has been evaluated using the Random Forest (RF) classifier and different datasets. Experimental results have shown that the proposed method achieves up to 99.8% accuracy in distinguishing between IoT and non-IoT devices and over 97.6% in classifying individual devices. These results signify that the proposed method is useful in assisting operators in making their networks more secure and robust to security breaches and unauthorized access.
[ { "version": "v1", "created": "Sun, 1 Jan 2023 02:18:02 GMT" } ]
2023-01-25T00:00:00
[ [ "Chowdhury", "Rajarshi Roy", "" ], [ "Idris", "Azam Che", "" ], [ "Abas", "Pg Emeroylariffion", "" ] ]
new_dataset
0.964855
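The fingerprinting abstract above derives four statistical features from a window of five device-originated packets; a stdlib sketch of such a feature vector (the paper does not list the four features here, so the choice below is an assumption, as are the packet lengths):

```python
import statistics

def fingerprint_features(pkt_lengths):
    """Four simple summary statistics over a 5-packet window (illustrative
    stand-ins for the paper's unspecified features)."""
    assert len(pkt_lengths) == 5
    return (
        min(pkt_lengths),
        max(pkt_lengths),
        statistics.mean(pkt_lengths),
        statistics.pstdev(pkt_lengths),
    )

# Hypothetical window: small control packets interleaved with full-size frames.
features = fingerprint_features([60, 1514, 60, 60, 1514])
print(features)
```

Vectors like this would then be fed to a classifier such as the Random Forest the abstract mentions.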
2301.03036
Zhengyi Liu
Bin Tang, Zhengyi Liu, Yacheng Tan, and Qian He
HRTransNet: HRFormer-Driven Two-Modality Salient Object Detection
null
TCSVT2022
10.1109/TCSVT.2022.3202563
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The High-Resolution Transformer (HRFormer) can maintain high-resolution representation and share global receptive fields. It is friendly towards salient object detection (SOD), in which the input and output have the same resolution. However, two critical problems need to be solved for two-modality SOD. One problem is two-modality fusion. The other problem is the fusion of HRFormer's outputs. To address the first problem, a supplementary modality is injected into the primary modality by using global optimization and an attention mechanism to select and purify the modality at the input level. To solve the second problem, a dual-direction short connection fusion module is used to optimize the output features of HRFormer, thereby enhancing the detailed representation of objects at the output level. The proposed model, named HRTransNet, first introduces an auxiliary stream for feature extraction of the supplementary modality. Then, features are injected into the primary modality at the beginning of each multi-resolution branch. Next, HRFormer is applied to achieve forward propagation. Finally, all the output features with different resolutions are aggregated by intra-feature and inter-feature interactive transformers. Application of the proposed model results in impressive improvements in driving two-modality SOD tasks, e.g., RGB-D, RGB-T, and light field SOD. Code: https://github.com/liuzywen/HRTransNet
[ { "version": "v1", "created": "Sun, 8 Jan 2023 13:09:01 GMT" } ]
2023-01-25T00:00:00
[ [ "Tang", "Bin", "" ], [ "Liu", "Zhengyi", "" ], [ "Tan", "Yacheng", "" ], [ "He", "Qian", "" ] ]
new_dataset
0.998112
2301.09680
Yulian Wu
Yulian Wu, Chaowen Guan, Vaneet Aggarwal and Di Wang
Quantum Heavy-tailed Bandits
Online learning; Quantum machine learning
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
In this paper, we study multi-armed bandits (MAB) and stochastic linear bandits (SLB) with heavy-tailed rewards and quantum reward oracle. Unlike the previous work on quantum bandits that assumes bounded/sub-Gaussian distributions for rewards, here we investigate the quantum bandits problem under a weaker assumption that the distributions of rewards only have bounded $(1+v)$-th moment for some $v\in (0,1]$. In order to achieve regret improvements for heavy-tailed bandits, we first propose a new quantum mean estimator for heavy-tailed distributions, which is based on the Quantum Monte Carlo Mean Estimator and achieves a quadratic improvement of estimation error compared to the classical one. Based on our quantum mean estimator, we focus on quantum heavy-tailed MAB and SLB and propose quantum algorithms based on the Upper Confidence Bound (UCB) framework for both problems with $\Tilde{O}(T^{\frac{1-v}{1+v}})$ regrets, polynomially improving the dependence in terms of $T$ as compared to classical (near) optimal regrets of $\Tilde{O}(T^{\frac{1}{1+v}})$, where $T$ is the number of rounds. Finally, experiments also support our theoretical results and show the effectiveness of our proposed methods.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 19:23:10 GMT" } ]
2023-01-25T00:00:00
[ [ "Wu", "Yulian", "" ], [ "Guan", "Chaowen", "" ], [ "Aggarwal", "Vaneet", "" ], [ "Wang", "Di", "" ] ]
new_dataset
0.986377
2301.09717
Qingchao Li
Qingchao Li, Mohammed El-Hajjar, Ibrahim Hemadeh, Arman Shojaeifard, Alain A. M. Mourad, Lajos Hanzo
Reconfigurable Intelligent Surface Aided Amplitude- and Phase-Modulated Downlink Transmission
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
New reconfigurable intelligent surface (RIS) based amplitude and phase modulation schemes are proposed as an evolution of the phase-only modulation schemes available in the literature. Explicitly, both the amplitude-phase shift keying (A-PSK) and quadrature amplitude-phase shift keying (QA-PSK) are conceived, where the RIS is assumed to be part of a transmitter to deliver information to the multi-antenna aided downlink receiver. In the proposed design, the RIS is partitioned into multiple blocks, and the information bits are conveyed by controlling both the ON-OFF state and the phase shift of the RIS elements in each block. Since the propagation paths spanning from each RIS block to the receiver can be coherently combined as a benefit of appropriately configuring the phase of the RIS elements, the received signal constellations can be designed by controlling both the ON-OFF pattern of the RIS blocks as well as the phase shift of the RIS elements. Both the theoretical analysis and the simulation results show that our proposed RIS-aided modulation schemes outperform the state-of-the-art RIS-based PSK modulation both in terms of its discrete-input-continuous-output memoryless channel (DCMC) capacity and its symbol error probability, especially in the high signal-to-noise-ratio (SNR) region, when considering realistic finite resolution RIS phase shifts.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 20:47:06 GMT" } ]
2023-01-25T00:00:00
[ [ "Li", "Qingchao", "" ], [ "El-Hajjar", "Mohammed", "" ], [ "Hemadeh", "Ibrahim", "" ], [ "Shojaeifard", "Arman", "" ], [ "Mourad", "Alain A. M.", "" ], [ "Hanzo", "Lajos", "" ] ]
new_dataset
0.999389
2301.09757
Bernardo Anibal Subercaseaux Roa
Bernardo Subercaseaux and Marijn J. H. Heule
The Packing Chromatic Number of the Infinite Square Grid is 15
null
null
null
null
cs.DM cs.AI math.CO
http://creativecommons.org/licenses/by/4.0/
A packing $k$-coloring is a natural variation on the standard notion of graph $k$-coloring, where vertices are assigned numbers from $\{1, \ldots, k\}$, and any two vertices assigned a common color $c \in \{1, \ldots, k\}$ need to be at a distance greater than $c$ (as opposed to $1$, in standard graph colorings). Despite a sequence of incremental work, determining the packing chromatic number of the infinite square grid has remained an open problem since its introduction in 2002. We culminate the search by proving this number to be 15. We achieve this result by improving the best-known method for this problem by roughly two orders of magnitude. The most important technique to boost performance is a novel, surprisingly effective propositional encoding for packing colorings. Additionally, we developed an alternative symmetry-breaking method. Since both new techniques are more complex than existing techniques for this problem, a verified approach is required to trust them. We include both techniques in a proof of unsatisfiability, reducing the trusted core to the correctness of the direct encoding.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 23:27:41 GMT" } ]
2023-01-25T00:00:00
[ [ "Subercaseaux", "Bernardo", "" ], [ "Heule", "Marijn J. H.", "" ] ]
new_dataset
0.983529
2301.09878
Mathias Zinnen
Mathias Zinnen, Prathmesh Madhu, Ronak Kosti, Peter Bell, Andreas Maier, Vincent Christlein
ODOR: The ICPR2022 ODeuropa Challenge on Olfactory Object Recognition
6 pages, 6 figures
2022 26th International Conference on Pattern Recognition (ICPR), Montreal, QC, Canada, 2022, pp. 4989-4994
10.1109/ICPR56361.2022.9956542
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The Odeuropa Challenge on Olfactory Object Recognition aims to foster the development of object detection in the visual arts and to promote an olfactory perspective on digital heritage. Object detection in historical artworks is particularly challenging due to varying styles and artistic periods. Moreover, the task is complicated due to the particularity and historical variance of predefined target objects, which exhibit a large intra-class variance, and the long tail distribution of the dataset labels, with some objects having only very few training examples. These challenges should encourage participants to create innovative approaches using domain adaptation or few-shot learning. We provide a dataset of 2647 artworks annotated with 20 120 tightly fit bounding boxes that are split into a training and validation set (public). A test set containing 1140 artworks and 15 480 annotations is kept private for the challenge evaluation.
[ { "version": "v1", "created": "Tue, 24 Jan 2023 09:35:43 GMT" } ]
2023-01-25T00:00:00
[ [ "Zinnen", "Mathias", "" ], [ "Madhu", "Prathmesh", "" ], [ "Kosti", "Ronak", "" ], [ "Bell", "Peter", "" ], [ "Maier", "Andreas", "" ], [ "Christlein", "Vincent", "" ] ]
new_dataset
0.997056
2301.09957
Alessandro Traspadini
Alessandro Traspadini, Marco Giordani, Giovanni Giambene and Michele Zorzi
Real-Time HAP-Assisted Vehicular Edge Computing for Rural Areas
6 pages, 2 figures. This paper has been accepted for publication at IEEE Wireless Communications Letters (WCL). Copyright IEEE 2023. Please cite it as: A. Traspadini, M. Giordani, G. Giambene and M. Zorzi, "Real-Time HAP-Assisted Vehicular Edge Computing for Rural Areas," in IEEE Wireless Communications Letters, doi: 10.1109/LWC.2023.3238851
null
10.1109/LWC.2023.3238851
null
cs.NI eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Non-Terrestrial Networks (NTNs) are expected to be a key component of 6th generation (6G) networks to support broadband seamless Internet connectivity and expand the coverage even in rural and remote areas. In this context, High Altitude Platforms (HAPs) can act as edge servers to process computational tasks offloaded by energy-constrained terrestrial devices such as Internet of Things (IoT) sensors and ground vehicles (GVs). In this paper, we analyze the opportunity to support Vehicular Edge Computing (VEC) via HAP in a rural scenario where GVs can decide whether to process data onboard or offload them to a HAP. We characterize the system as a set of queues in which computational tasks arrive according to a Poisson arrival process. Then, we assess the optimal VEC offloading factor to maximize the probability of real-time service, given latency and computational capacity constraints.
[ { "version": "v1", "created": "Tue, 24 Jan 2023 12:39:25 GMT" } ]
2023-01-25T00:00:00
[ [ "Traspadini", "Alessandro", "" ], [ "Giordani", "Marco", "" ], [ "Giambene", "Giovanni", "" ], [ "Zorzi", "Michele", "" ] ]
new_dataset
0.997694
2301.09992
Tariq Alhindi
Tariq Alhindi, Tuhin Chakrabarty, Elena Musi and Smaranda Muresan
Multitask Instruction-based Prompting for Fallacy Recognition
In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8172 - 8187
Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing, pages 8172 - 8187
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Fallacies are used as seemingly valid arguments to support a position and persuade the audience about its validity. Recognizing fallacies is an intrinsically difficult task both for humans and machines. Moreover, a big challenge for computational models lies in the fact that fallacies are formulated differently across the datasets with differences in the input format (e.g., question-answer pair, sentence with fallacy fragment), genre (e.g., social media, dialogue, news), as well as types and number of fallacies (from 5 to 18 types per dataset). To move towards solving the fallacy recognition task, we approach these differences across datasets as multiple tasks and show how instruction-based prompting in a multitask setup based on the T5 model improves the results against approaches built for a specific dataset such as T5, BERT or GPT-3. We show the ability of this multitask prompting approach to recognize 28 unique fallacies across domains and genres and study the effect of model size and prompt choice by analyzing the per-class (i.e., fallacy type) results. Finally, we analyze the effect of annotation quality on model performance, and the feasibility of complementing this approach with external knowledge.
[ { "version": "v1", "created": "Tue, 24 Jan 2023 13:39:23 GMT" } ]
2023-01-25T00:00:00
[ [ "Alhindi", "Tariq", "" ], [ "Chakrabarty", "Tuhin", "" ], [ "Musi", "Elena", "" ], [ "Muresan", "Smaranda", "" ] ]
new_dataset
0.993796
2301.10001
Eric Weber
Shannon B. Harper and Eric S. Weber
Fiduciary Responsibility: Facilitating Public Trust in Automated Decision Making
null
null
null
null
cs.CY cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
Automated decision-making systems are being increasingly deployed and affect the public in a multitude of positive and negative ways. Governmental and private institutions use these systems to process information according to certain human-devised rules in order to address social problems or organizational challenges. Both research and real-world experience indicate that the public lacks trust in automated decision-making systems and the institutions that deploy them. The recreancy theorem argues that the public is more likely to trust and support decisions made or influenced by automated decision-making systems if the institutions that administer them meet their fiduciary responsibility. However, often the public is never informed of how these systems operate and resultant institutional decisions are made. A ``black box'' effect of automated decision-making systems reduces the public's perceptions of integrity and trustworthiness. The result is that the public loses the capacity to identify, challenge, and rectify unfairness or the costs associated with the loss of public goods or benefits. The current position paper defines and explains the role of fiduciary responsibility within an automated decision-making system. We formulate an automated decision-making system as a data science lifecycle (DSL) and examine the implications of fiduciary responsibility within the context of the DSL. Fiduciary responsibility within DSLs provides a methodology for addressing the public's lack of trust in automated decision-making systems and the institutions that employ them to make decisions affecting the public. We posit that fiduciary responsibility manifests in several contexts of a DSL, each of which requires its own mitigation of sources of mistrust. To instantiate fiduciary responsibility, a Los Angeles Police Department (LAPD) predictive policing case study is examined.
[ { "version": "v1", "created": "Fri, 6 Jan 2023 18:19:01 GMT" } ]
2023-01-25T00:00:00
[ [ "Harper", "Shannon B.", "" ], [ "Weber", "Eric S.", "" ] ]
new_dataset
0.962339
2301.10165
Oriana Riva
Pratyay Banerjee, Shweti Mahajan, Kushal Arora, Chitta Baral, Oriana Riva
Lexi: Self-Supervised Learning of the UI Language
EMNLP (Findings) 2022
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Humans can learn to operate the user interface (UI) of an application by reading an instruction manual or how-to guide. Along with text, these resources include visual content such as UI screenshots and images of application icons referenced in the text. We explore how to leverage this data to learn generic visio-linguistic representations of UI screens and their components. These representations are useful in many real applications, such as accessibility, voice navigation, and task automation. Prior UI representation models rely on UI metadata (UI trees and accessibility labels), which is often missing, incompletely defined, or not accessible. We avoid such a dependency, and propose Lexi, a pre-trained vision and language model designed to handle the unique features of UI screens, including their text richness and context sensitivity. To train Lexi we curate the UICaption dataset consisting of 114k UI images paired with descriptions of their functionality. We evaluate Lexi on four tasks: UI action entailment, instruction-based UI image retrieval, grounding referring expressions, and UI entity recognition.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 09:05:49 GMT" } ]
2023-01-25T00:00:00
[ [ "Banerjee", "Pratyay", "" ], [ "Mahajan", "Shweti", "" ], [ "Arora", "Kushal", "" ], [ "Baral", "Chitta", "" ], [ "Riva", "Oriana", "" ] ]
new_dataset
0.975878
2301.10180
Javad Peymanfard
Javad Peymanfard, Samin Heydarian, Ali Lashini, Hossein Zeinali, Mohammad Reza Mohammadi, Nasser Mozayani
A Multi-Purpose Audio-Visual Corpus for Multi-Modal Persian Speech Recognition: the Arman-AV Dataset
null
null
null
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
In recent years, significant progress has been made in automatic lip reading. But these methods require large-scale datasets that do not exist for many low-resource languages. In this paper, we have presented a new multipurpose audio-visual dataset for Persian. This dataset consists of almost 220 hours of videos with 1760 corresponding speakers. In addition to lip reading, the dataset is suitable for automatic speech recognition, audio-visual speech recognition, and speaker recognition. Also, it is the first large-scale lip reading dataset in Persian. A baseline method was provided for each mentioned task. In addition, we have proposed a technique to detect visemes (a visual equivalent of a phoneme) in Persian. The visemes obtained by this method increase the accuracy of the lip reading task by 7% relative to the previously proposed visemes, and the technique can be applied to other languages as well.
[ { "version": "v1", "created": "Sat, 21 Jan 2023 05:13:30 GMT" } ]
2023-01-25T00:00:00
[ [ "Peymanfard", "Javad", "" ], [ "Heydarian", "Samin", "" ], [ "Lashini", "Ali", "" ], [ "Zeinali", "Hossein", "" ], [ "Mohammadi", "Mohammad Reza", "" ], [ "Mozayani", "Nasser", "" ] ]
new_dataset
0.999828
2301.10235
Qiangyu Pei
Fangming Liu, Qiangyu Pei, Shutong Chen, Yongjie Yuan, Lin Wang, Max Muhlhauser
When the Metaverse Meets Carbon Neutrality: Ongoing Efforts and Directions
24 pages
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The metaverse has recently gained increasing attention from the public. It builds up a virtual world where we can live as a new role regardless of the role we play in the physical world. However, building and operating this virtual world will generate an extraordinary amount of carbon emissions for computing, communicating, displaying, and so on. This inevitably hinders the realization of carbon neutrality as a priority of our society, adding a heavy burden to our earth. In this survey, we first present a green viewpoint of the metaverse by investigating the carbon issues in its three core layers, namely the infrastructure layer, the interaction layer, and the economy layer, and estimate their carbon footprints in the near future. Next, we analyze a range of current and emerging applicable green techniques for the purpose of reducing energy usage and carbon emissions of the metaverse, and discuss their limitations in supporting metaverse workloads. Then, in view of these limitations, we discuss important implications and bring forth several insights and future directions to make each metaverse layer greener. After that, we investigate green solutions from the governance perspective, including both public policies in the physical world and regulation of users in the virtual world, and propose an indicator Carbon Utility (CU) to quantify the service quality brought by a user activity per unit of carbon emissions. Finally, we identify an issue for the metaverse as a whole and summarize three directions: (1) a comprehensive consideration of necessary performance metrics, (2) a comprehensive consideration of involved layers and multiple internal components, and (3) a new assessing, recording, and regulating mechanism on carbon footprints of user activities. Our proposed quantitative indicator CU would be helpful in regulating user activities in the metaverse world.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 16:25:18 GMT" } ]
2023-01-25T00:00:00
[ [ "Liu", "Fangming", "" ], [ "Pei", "Qiangyu", "" ], [ "Chen", "Shutong", "" ], [ "Yuan", "Yongjie", "" ], [ "Wang", "Lin", "" ], [ "Muhlhauser", "Max", "" ] ]
new_dataset
0.997395
2105.08123
Ataberk Olgun
Nandita Vijaykumar, Ataberk Olgun, Konstantinos Kanellopoulos, Nisa Bostanc{\i}, Hasan Hassan, Mehrshad Lotfi, Phillip B. Gibbons, Onur Mutlu
MetaSys: A Practical Open-Source Metadata Management System to Implement and Evaluate Cross-Layer Optimizations
A shorter version of this work is to appear at the ACM Transactions on Architecture and Code Optimization (TACO). 27 pages, 15 figures
null
null
null
cs.AR
http://creativecommons.org/licenses/by/4.0/
This paper introduces the first open-source FPGA-based infrastructure, MetaSys, with a prototype in a RISC-V core, to enable the rapid implementation and evaluation of a wide range of cross-layer techniques in real hardware. Hardware-software cooperative techniques are powerful approaches to improve the performance, quality of service, and security of general-purpose processors. They are however typically challenging to rapidly implement and evaluate in real hardware as they require full-stack changes to the hardware, OS, system software, and instruction-set architecture (ISA). MetaSys implements a rich hardware-software interface and lightweight metadata support that can be used as a common basis to rapidly implement and evaluate new cross-layer techniques. We demonstrate MetaSys's versatility and ease-of-use by implementing and evaluating three cross-layer techniques for: (i) prefetching for graph analytics; (ii) bounds checking in memory unsafe languages, and (iii) return address protection in stack frames; each technique only requiring ~100 lines of Chisel code over MetaSys. Using MetaSys, we perform the first detailed experimental study to quantify the performance overheads of using a single metadata management system to enable multiple cross-layer optimizations in CPUs. We identify the key sources of bottlenecks and system inefficiency of a general metadata management system. We design MetaSys to minimize these inefficiencies and provide increased versatility compared to previously-proposed metadata systems. Using three use cases and a detailed characterization, we demonstrate that a common metadata management system can be used to efficiently support diverse cross-layer techniques in CPUs.
[ { "version": "v1", "created": "Mon, 17 May 2021 19:27:48 GMT" }, { "version": "v2", "created": "Wed, 19 May 2021 08:41:33 GMT" }, { "version": "v3", "created": "Sun, 2 Jan 2022 08:09:50 GMT" }, { "version": "v4", "created": "Wed, 18 Jan 2023 07:51:20 GMT" }, { "version": "v5", "created": "Sat, 21 Jan 2023 10:00:56 GMT" } ]
2023-01-24T00:00:00
[ [ "Vijaykumar", "Nandita", "" ], [ "Olgun", "Ataberk", "" ], [ "Kanellopoulos", "Konstantinos", "" ], [ "Bostancı", "Nisa", "" ], [ "Hassan", "Hasan", "" ], [ "Lotfi", "Mehrshad", "" ], [ "Gibbons", "Phillip B.", "" ], [ "Mutlu", "Onur", "" ] ]
new_dataset
0.995368
2109.13392
Volker Tresp
Volker Tresp, Sahand Sharifzadeh, Hang Li, Dario Konopatzki, Yunpu Ma
The Tensor Brain: A Unified Theory of Perception, Memory and Semantic Decoding
Neural Computation, Volume 35, Issue 2, February 2023
Neural Computation, Volume 35, Issue 2, February 2023
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We present a unified computational theory of an agent's perception and memory. In our model, perception, episodic memory, and semantic memory are realized by different operational modes of the oscillating interactions between a symbolic index layer and a subsymbolic representation layer. The two layers form a bilayer tensor network (BTN). Although memory appears to be about the past, its main purpose is to support the agent in the present and the future. Recent episodic memory provides the agent with a sense of the here and now. Remote episodic memory retrieves relevant past experiences to provide information about possible future scenarios. This aids the agent in decision-making. "Future" episodic memory, based on expected future events, guides planning and action. Semantic memory retrieves specific information, which is not delivered by current perception, and defines priors for future observations. We argue that it is important for the agent to encode individual entities, not just classes and attributes. We demonstrate that a form of self-supervised learning can acquire new concepts and refine existing ones. We test our model on a standard benchmark data set, which we expanded to contain richer representations for attributes, classes, and individuals. Our key hypothesis is that obtaining a better understanding of perception and memory is a crucial prerequisite to comprehending human-level intelligence.
[ { "version": "v1", "created": "Mon, 27 Sep 2021 23:32:44 GMT" }, { "version": "v2", "created": "Wed, 6 Oct 2021 17:08:26 GMT" }, { "version": "v3", "created": "Tue, 11 Oct 2022 15:12:00 GMT" }, { "version": "v4", "created": "Wed, 12 Oct 2022 17:26:49 GMT" }, { "version": "v5", "created": "Mon, 17 Oct 2022 20:42:08 GMT" }, { "version": "v6", "created": "Sun, 22 Jan 2023 20:22:16 GMT" } ]
2023-01-24T00:00:00
[ [ "Tresp", "Volker", "" ], [ "Sharifzadeh", "Sahand", "" ], [ "Li", "Hang", "" ], [ "Konopatzki", "Dario", "" ], [ "Ma", "Yunpu", "" ] ]
new_dataset
0.989341
2112.06560
F\'abio Malcher Miranda MSc.
F\'abio M. Miranda, Niklas K\"ohnecke and Bernhard Y. Renard
HiClass: a Python library for local hierarchical classification compatible with scikit-learn
17 pages, 9 figures, 7 tables
Journal of Machine Learning Research 24 (2023) 1-17
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
HiClass is an open-source Python library for local hierarchical classification entirely compatible with scikit-learn. It contains implementations of the most common design patterns for hierarchical machine learning models found in the literature, that is, the local classifiers per node, per parent node and per level. Additionally, the package contains implementations of hierarchical metrics, which are more appropriate for evaluating classification performance on hierarchical data. The documentation includes installation and usage instructions, examples within tutorials and interactive notebooks, and a complete description of the API. HiClass is released under the simplified BSD license, encouraging its use in both academic and commercial environments. Source code and documentation are available at https://github.com/scikit-learn-contrib/hiclass.
[ { "version": "v1", "created": "Mon, 13 Dec 2021 11:04:17 GMT" }, { "version": "v2", "created": "Tue, 14 Dec 2021 12:08:16 GMT" }, { "version": "v3", "created": "Wed, 15 Dec 2021 06:08:42 GMT" }, { "version": "v4", "created": "Mon, 20 Dec 2021 12:08:17 GMT" }, { "version": "v5", "created": "Tue, 12 Jul 2022 20:20:19 GMT" }, { "version": "v6", "created": "Mon, 25 Jul 2022 09:55:00 GMT" }, { "version": "v7", "created": "Thu, 1 Dec 2022 23:19:31 GMT" }, { "version": "v8", "created": "Mon, 5 Dec 2022 10:16:08 GMT" }, { "version": "v9", "created": "Tue, 3 Jan 2023 17:51:02 GMT" } ]
2023-01-24T00:00:00
[ [ "Miranda", "Fábio M.", "" ], [ "Köhnecke", "Niklas", "" ], [ "Renard", "Bernhard Y.", "" ] ]
new_dataset
0.974184
2203.04751
Kshitij Tiwari
Kshitij Tiwari, Basak Sakcak, Prasanna Routray, Manivannan M., and Steven M. LaValle
Visibility-Inspired Models of Touch Sensors for Navigation
Accepted at IEEE IROS 2022
null
10.1109/IROS47612.2022.9981084
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
This paper introduces mathematical models of touch sensors for mobile robots based on visibility. Serving a purpose similar to the pinhole camera model for computer vision, the introduced models are expected to provide a useful, idealized characterization of task-relevant information that can be inferred from their outputs or observations. Possible tasks include navigation, localization and mapping when a mobile robot is deployed in an unknown environment. These models allow direct comparisons to be made between traditional depth sensors, highlighting cases in which touch sensing may be interchangeable with time of flight or vision sensors, and characterizing unique advantages provided by touch sensing. The models include contact detection, compression, load bearing, and deflection. The results could serve as a basic building block for innovative touch sensor designs for mobile robot sensor fusion systems.
[ { "version": "v1", "created": "Fri, 4 Mar 2022 08:23:01 GMT" }, { "version": "v2", "created": "Thu, 28 Jul 2022 07:42:32 GMT" } ]
2023-01-24T00:00:00
[ [ "Tiwari", "Kshitij", "" ], [ "Sakcak", "Basak", "" ], [ "Routray", "Prasanna", "" ], [ "M.", "Manivannan", "" ], [ "LaValle", "Steven M.", "" ] ]
new_dataset
0.954848
2204.10704
Runzhe Zhu
Runzhe Zhu, Ling Yin, Mingze Yang, Fei Wu, Yuncheng Yang, Wenbo Hu
SUES-200: A Multi-height Multi-scene Cross-view Image Benchmark Across Drone and Satellite
null
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cross-view image matching aims to match images of the same target scene acquired from different platforms. With the rapid development of drone technology, cross-view matching by neural network models has been a widely accepted choice for drone position or navigation. However, existing public datasets do not include images obtained by drones at different heights, and the types of scenes are relatively homogeneous, which yields issues in assessing a model's capability to adapt to complex and changing scenes. To this end, we present a new cross-view dataset called SUES-200 to address these issues. SUES-200 contains 24120 images acquired by the drone at four different heights and corresponding satellite view images of the same target scene. To the best of our knowledge, SUES-200 is the first public dataset that considers the differences generated in aerial photography captured by drones flying at different heights. In addition, we developed an evaluation framework for efficient training, testing and evaluation of cross-view matching models, under which we comprehensively analyze the performance of nine architectures. Then, we propose a robust baseline model for use with SUES-200. Experimental results show that SUES-200 can help the model to learn highly discriminative features of the height of the drone.
[ { "version": "v1", "created": "Fri, 22 Apr 2022 13:49:52 GMT" }, { "version": "v2", "created": "Sun, 22 Jan 2023 01:49:00 GMT" } ]
2023-01-24T00:00:00
[ [ "Zhu", "Runzhe", "" ], [ "Yin", "Ling", "" ], [ "Yang", "Mingze", "" ], [ "Wu", "Fei", "" ], [ "Yang", "Yuncheng", "" ], [ "Hu", "Wenbo", "" ] ]
new_dataset
0.998007
2206.09372
Lalith Sharan
Lalith Sharan, Halvar Kelm, Gabriele Romano, Matthias Karck, Raffaele De Simone, Sandy Engelhardt
mvHOTA: A multi-view higher order tracking accuracy metric to measure spatial and temporal associations in multi-point detection
16 pages, 9 figures
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization (2022) 1-9
10.1080/21681163.2022.2159535
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Multi-point tracking is a challenging task that involves detecting points in the scene and tracking them across a sequence of frames. Computing detection-based measures like the F-measure on a frame-by-frame basis is not sufficient to assess the overall performance, as it does not interpret performance in the temporal domain. The main evaluation metric available comes from Multi-object tracking (MOT) methods to benchmark performance on datasets such as KITTI with the recently proposed higher order tracking accuracy (HOTA) metric, which is capable of providing a better description of the performance over metrics such as MOTA, DetA, and IDF1. While the HOTA metric takes into account temporal associations, it does not provide a tailored means to analyse the spatial associations of a dataset in a multi-camera setup. Moreover, there are differences in evaluating the detection task for points when compared to objects (point distances vs. bounding box overlap). Therefore in this work, we propose a multi-view higher order tracking metric (mvHOTA) to determine the accuracy of multi-point (multi-instance and multi-class) tracking methods, while taking into account temporal and spatial associations. mvHOTA can be interpreted as the geometric mean of detection, temporal, and spatial associations, thereby providing equal weighting to each of the factors. We demonstrate the use of this metric to evaluate the tracking performance on an endoscopic point detection dataset from a previously organised surgical data science challenge. Furthermore, we compare with other adjusted MOT metrics for this use-case, discuss the properties of mvHOTA, and show how the proposed multi-view Association and the Occlusion index (OI) facilitate analysis of methods with respect to handling of occlusions. The code is available at https://github.com/Cardio-AI/mvhota.
[ { "version": "v1", "created": "Sun, 19 Jun 2022 10:31:53 GMT" }, { "version": "v2", "created": "Mon, 23 Jan 2023 10:44:12 GMT" } ]
2023-01-24T00:00:00
[ [ "Sharan", "Lalith", "" ], [ "Kelm", "Halvar", "" ], [ "Romano", "Gabriele", "" ], [ "Karck", "Matthias", "" ], [ "De Simone", "Raffaele", "" ], [ "Engelhardt", "Sandy", "" ] ]
new_dataset
0.998241
2208.08100
Shangqing Liu
Shangqing Liu, Yanzhou Li, Xiaofei Xie, Yang Liu
CommitBART: A Large Pre-trained Model for GitHub Commits
null
null
null
null
cs.SE cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
GitHub commits, which record the code changes with natural language messages for description, play a critical role for software developers to comprehend the software evolution. To promote the development of the open-source software community, we collect a commit benchmark including over 7.99 million commits across 7 programming languages. Based on this benchmark, we present CommitBART, a large pre-trained encoder-decoder Transformer model for GitHub commits. The model is pre-trained by three categories (i.e., denoising objectives, cross-modal generation and contrastive learning) for six pre-training tasks to learn commit fragment representations. Furthermore, we unify a ``commit intelligence'' framework with one understanding task and three generation tasks for commits. The comprehensive experiments on these tasks demonstrate that CommitBART significantly outperforms previous pre-trained works for code. Further analysis also reveals that each pre-training task enhances the model performance.
[ { "version": "v1", "created": "Wed, 17 Aug 2022 06:35:57 GMT" }, { "version": "v2", "created": "Sun, 22 Jan 2023 07:14:03 GMT" } ]
2023-01-24T00:00:00
[ [ "Liu", "Shangqing", "" ], [ "Li", "Yanzhou", "" ], [ "Xie", "Xiaofei", "" ], [ "Liu", "Yang", "" ] ]
new_dataset
0.9996
2209.00398
Mouna Dhaouadi
Mouna Dhaouadi, Bentley James Oakes, Michalis Famelis
End-to-End Rationale Reconstruction
null
ASE '22: Proceedings of the 37th IEEE/ACM International Conference on Automated Software Engineering 2022
10.1145/3551349.3559547
null
cs.SE
http://creativecommons.org/licenses/by-sa/4.0/
The logic behind design decisions, called design rationale, is very valuable. In the past, researchers have tried to automatically extract and exploit this information, but prior techniques are only applicable to specific contexts and there is insufficient progress on an end-to-end rationale information extraction pipeline. Here we outline a path towards such a pipeline that leverages several Machine Learning (ML) and Natural Language Processing (NLP) techniques. Our proposed context-independent approach, called Kantara, produces a knowledge graph representation of decisions and of their rationales, which considers their historical evolution and traceability. We also propose validation mechanisms to ensure the correctness of the extracted information and the coherence of the development process. We conducted a preliminary evaluation of our proposed approach on a small example sourced from the Linux Kernel, which shows promising results.
[ { "version": "v1", "created": "Wed, 31 Aug 2022 13:19:30 GMT" } ]
2023-01-24T00:00:00
[ [ "Dhaouadi", "Mouna", "" ], [ "Oakes", "Bentley James", "" ], [ "Famelis", "Michalis", "" ] ]
new_dataset
0.987824
2301.01586
Pedro Hecht
Hugo Daniel Scolnik and Juan Pedro Hecht
Post-Quantum Key Agreement Protocol based on Non-Square Integer Matrices
12 pages, 2 tables, 29 references
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
We present in this paper an algorithm for exchanging session keys, coupled with a hashing encryption module. We show schemes designed for their potential invulnerability to classical and quantum attacks. Moreover, with appropriate parameters, the cost of brute-force attacks exceeds the five security levels used in the NIST competition for new post-quantum standards. The original idea consists of products of rectangular matrices in Zp as public values, whose factorization is proved to be an NP-complete problem. We present running times as a function of the explored parameters and their link with operational safety. To our knowledge, no classical or quantum attacks of polynomial complexity are available at hand, leaving only the systematic exploration of the private-key space.
[ { "version": "v1", "created": "Wed, 4 Jan 2023 13:03:15 GMT" }, { "version": "v2", "created": "Thu, 5 Jan 2023 13:13:34 GMT" }, { "version": "v3", "created": "Sun, 22 Jan 2023 14:56:44 GMT" } ]
2023-01-24T00:00:00
[ [ "Scolnik", "Hugo Daniel", "" ], [ "Hecht", "Juan Pedro", "" ] ]
new_dataset
0.953551
2301.08714
Yangge Li
Yangge Li, Haoqing Zhu, Katherine Braught, Keyi Shen, Sayan Mitra
Verse: A Python library for reasoning about multi-agent hybrid system scenarios
26 pages, 16 figures
null
null
null
cs.SE cs.FL cs.MA
http://creativecommons.org/licenses/by/4.0/
We present the Verse library with the aim of making hybrid system verification more usable for multi-agent scenarios. In Verse, decision making agents move in a map and interact with each other through sensors. The decision logic for each agent is written in a subset of Python and the continuous dynamics is given by a black-box simulator. Multiple agents can be instantiated and they can be ported to different maps for creating scenarios. Verse provides functions for simulating and verifying such scenarios using existing reachability analysis algorithms. We illustrate several capabilities and use cases of the library with heterogeneous agents, incremental verification, different sensor models, and the flexibility of plugging in different subroutines for post computations.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 18:18:09 GMT" }, { "version": "v2", "created": "Mon, 23 Jan 2023 04:49:26 GMT" } ]
2023-01-24T00:00:00
[ [ "Li", "Yangge", "" ], [ "Zhu", "Haoqing", "" ], [ "Braught", "Katherine", "" ], [ "Shen", "Keyi", "" ], [ "Mitra", "Sayan", "" ] ]
new_dataset
0.981332
2301.08730
Changan Chen
Changan Chen, Alexander Richard, Roman Shapovalov, Vamsi Krishna Ithapu, Natalia Neverova, Kristen Grauman, Andrea Vedaldi
Novel-View Acoustic Synthesis
Project page: https://vision.cs.utexas.edu/projects/nvas
null
null
null
cs.CV cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce the novel-view acoustic synthesis (NVAS) task: given the sight and sound observed at a source viewpoint, can we synthesize the sound of that scene from an unseen target viewpoint? We propose a neural rendering approach: Visually-Guided Acoustic Synthesis (ViGAS) network that learns to synthesize the sound of an arbitrary point in space by analyzing the input audio-visual cues. To benchmark this task, we collect two first-of-their-kind large-scale multi-view audio-visual datasets, one synthetic and one real. We show that our model successfully reasons about the spatial cues and synthesizes faithful audio on both datasets. To our knowledge, this work represents the very first formulation, dataset, and approach to solve the novel-view acoustic synthesis task, which has exciting potential applications ranging from AR/VR to art and design. Unlocked by this work, we believe that the future of novel-view synthesis is in multi-modal learning from videos.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 18:49:58 GMT" }, { "version": "v2", "created": "Mon, 23 Jan 2023 17:11:30 GMT" } ]
2023-01-24T00:00:00
[ [ "Chen", "Changan", "" ], [ "Richard", "Alexander", "" ], [ "Shapovalov", "Roman", "" ], [ "Ithapu", "Vamsi Krishna", "" ], [ "Neverova", "Natalia", "" ], [ "Grauman", "Kristen", "" ], [ "Vedaldi", "Andrea", "" ] ]
new_dataset
0.999297
2301.08774
Zhenkun Zhou
Chong Zhang, Zhenkun Zhou, Xingyu Peng, Ke Xu
DoubleH: Twitter User Stance Detection via Bipartite Graph Neural Networks
null
null
null
null
cs.SI cs.AI
http://creativecommons.org/licenses/by/4.0/
Given the development and abundance of social media, studying the stance of social media users is a challenging and pressing issue. Social media users express their stance by posting tweets and retweeting. Therefore, the homogeneous relationship between users and the heterogeneous relationship between users and tweets are relevant for the stance detection task. Recently, graph neural networks (GNNs) have developed rapidly and have been applied to social media research. In this paper, we crawl a large-scale dataset of the 2020 US presidential election and automatically label all users by manually tagged hashtags. Subsequently, we propose a bipartite graph neural network model, DoubleH, which aims to better utilize homogeneous and heterogeneous information in user stance detection tasks. Specifically, we first construct a bipartite graph based on posting and retweeting relations for two kinds of nodes, including users and tweets. We then iteratively update the node's representation by extracting and separately processing heterogeneous and homogeneous information in the node's neighbors. Finally, the representations of user nodes are used for user stance classification. Experimental results show that DoubleH outperforms the state-of-the-art methods on popular benchmarks. Further analysis illustrates the model's utilization of information and demonstrates stability and efficiency at different numbers of layers.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 19:20:10 GMT" } ]
2023-01-24T00:00:00
[ [ "Zhang", "Chong", "" ], [ "Zhou", "Zhenkun", "" ], [ "Peng", "Xingyu", "" ], [ "Xu", "Ke", "" ] ]
new_dataset
0.999233
2301.08783
Andrew Freeman
Andrew C. Freeman, Montek Singh, Ketan Mayer-Patel
An Asynchronous Intensity Representation for Framed and Event Video Sources
10 pages
null
null
null
cs.CV cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neuromorphic "event" cameras, designed to mimic the human vision system with asynchronous sensing, unlock a new realm of high-speed and high dynamic range applications. However, researchers often either revert to a framed representation of event data for applications, or build bespoke applications for a particular camera's event data type. To usher in the next era of video systems, accommodate new event camera designs, and explore the benefits to asynchronous video in classical applications, we argue that there is a need for an asynchronous, source-agnostic video representation. In this paper, we introduce a novel, asynchronous intensity representation for both framed and non-framed data sources. We show that our representation can increase intensity precision and greatly reduce the number of samples per pixel compared to grid-based representations. With framed sources, we demonstrate that by permitting a small amount of loss through the temporal averaging of similar pixel values, we can reduce our representational sample rate by more than half, while incurring a drop in VMAF quality score of only 4.5. We also demonstrate lower latency than the state-of-the-art method for fusing and transcoding framed and event camera data to an intensity representation, while maintaining $2000\times$ the temporal resolution. We argue that our method provides the computational efficiency and temporal granularity necessary to build real-time intensity-based applications for event cameras.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 19:46:23 GMT" } ]
2023-01-24T00:00:00
[ [ "Freeman", "Andrew C.", "" ], [ "Singh", "Montek", "" ], [ "Mayer-Patel", "Ketan", "" ] ]
new_dataset
0.99713
2301.08806
Nikolay Ivanov
Nikolay Ivanov, Qiben Yan and Anurag Kompalli
TxT: Real-time Transaction Encapsulation for Ethereum Smart Contracts
To appear in IEEE Transactions on Information Forensics and Security
null
10.1109/TIFS.2023.3234895
null
cs.CR cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Ethereum is a permissionless blockchain ecosystem that supports execution of smart contracts, the key enablers of decentralized finance (DeFi) and non-fungible tokens (NFT). However, the expressiveness of Ethereum smart contracts is a double-edged sword: while it enables blockchain programmability, it also introduces security vulnerabilities, i.e., the exploitable discrepancies between expected and actual behaviors of the contract code. To address these discrepancies and increase the vulnerability coverage, we propose a new smart contract security testing approach called transaction encapsulation. The core idea lies in the local execution of transactions on a fully-synchronized yet isolated Ethereum node, which creates a preview of outcomes of transaction sequences on the current state of blockchain. This approach poses a critical technical challenge -- the well-known time-of-check/time-of-use (TOCTOU) problem, i.e., the assurance that the final transactions will exhibit the same execution paths as the encapsulated test transactions. In this work, we determine the exact conditions for guaranteed execution path replicability of the tested transactions, and implement a transaction testing tool, TxT, which reveals the actual outcomes of Ethereum transactions. To ensure the correctness of testing, TxT deterministically verifies whether a given sequence of transactions ensues an identical execution path on the current state of blockchain. We analyze over 1.3 billion Ethereum transactions and determine that 96.5% of them can be verified by TxT. We further show that TxT successfully reveals the suspicious behaviors associated with 31 out of 37 vulnerabilities (83.8% coverage) in the smart contract weakness classification (SWC) registry. In comparison, the vulnerability coverage of all the existing defense approaches combined only reaches 40.5%.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 21:14:15 GMT" } ]
2023-01-24T00:00:00
[ [ "Ivanov", "Nikolay", "" ], [ "Yan", "Qiben", "" ], [ "Kompalli", "Anurag", "" ] ]
new_dataset
0.959419
2301.08810
Yinghao Aaron Li
Yinghao Aaron Li, Cong Han, Xilin Jiang, Nima Mesgarani
Phoneme-Level BERT for Enhanced Prosody of Text-to-Speech with Grapheme Predictions
null
null
null
null
cs.CL cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale pre-trained language models have been shown to be helpful in improving the naturalness of text-to-speech (TTS) models by enabling them to produce more naturalistic prosodic patterns. However, these models are usually word-level or sup-phoneme-level and jointly trained with phonemes, making them inefficient for the downstream TTS task where only phonemes are needed. In this work, we propose a phoneme-level BERT (PL-BERT) with a pretext task of predicting the corresponding graphemes along with the regular masked phoneme predictions. Subjective evaluations show that our phoneme-level BERT encoder has significantly improved the mean opinion scores (MOS) of rated naturalness of synthesized speech compared with the state-of-the-art (SOTA) StyleTTS baseline on out-of-distribution (OOD) texts.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 21:36:16 GMT" } ]
2023-01-24T00:00:00
[ [ "Li", "Yinghao Aaron", "" ], [ "Han", "Cong", "" ], [ "Jiang", "Xilin", "" ], [ "Mesgarani", "Nima", "" ] ]
new_dataset
0.986248
2301.08828
Thanveer Shaik Mr
Thanveer Shaik, Xiaohui Tao, Niall Higgins, Haoran Xie, Raj Gururajan, Xujuan Zhou
AI enabled RPM for Mental Health Facility
null
In 1st ACM Workshop on Mobile and Wireless Sensing for Smart Healthcare (MWMSSH 2022), October 21, 2022, Sydney, NSW, Australia. ACM, New York, NY, USA, 7 pages
10.1145/3556551.3561191
null
cs.HC cs.AI cs.CY
http://creativecommons.org/licenses/by/4.0/
Mental healthcare is one of the prominent parts of the healthcare industry, with alarming concerns related to patients' depression and stress, which can lead to self-harm and threats to fellow patients and medical staff. To provide a therapeutic environment for both patients and staff, aggressive or agitated patients need to be monitored remotely, with their vital signs and physical activities tracked continuously. Remote patient monitoring (RPM) using non-invasive technology could enable contactless monitoring of acutely ill patients in a mental health facility. Enabling the RPM system with AI unlocks a predictive environment in which future vital signs of the patients can be forecasted. This paper discusses an AI-enabled RPM system framework with a non-invasive digital technology, RFID, using its in-built NCS mechanism to retrieve vital signs and physical actions of patients. Based on the retrieved time series data, the framework forecasts patients' vital signs for the upcoming 3 hours and classifies their physical actions into 10 labelled physical activities. This framework helps avoid unforeseen clinical disasters and enables precautionary medical intervention at the right time. A case study of a middle-aged PTSD patient treated with the AI-enabled RPM system is demonstrated in this study.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 23:47:16 GMT" } ]
2023-01-24T00:00:00
[ [ "Shaik", "Thanveer", "" ], [ "Tao", "Xiaohui", "" ], [ "Higgins", "Niall", "" ], [ "Xie", "Haoran", "" ], [ "Gururajan", "Raj", "" ], [ "Zhou", "Xujuan", "" ] ]
new_dataset
0.999407
2301.08838
Michael Alcorn
Michael A. Alcorn
AQuaMaM: An Autoregressive, Quaternion Manifold Model for Rapidly Estimating Complex SO(3) Distributions
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurately modeling complex, multimodal distributions is necessary for optimal decision-making, but doing so for rotations in three-dimensions, i.e., the SO(3) group, is challenging due to the curvature of the rotation manifold. The recently described implicit-PDF (IPDF) is a simple, elegant, and effective approach for learning arbitrary distributions on SO(3) up to a given precision. However, inference with IPDF requires $N$ forward passes through the network's final multilayer perceptron (where $N$ places an upper bound on the likelihood that can be calculated by the model), which is prohibitively slow for those without the computational resources necessary to parallelize the queries. In this paper, I introduce AQuaMaM, a neural network capable of both learning complex distributions on the rotation manifold and calculating exact likelihoods for query rotations in a single forward pass. Specifically, AQuaMaM autoregressively models the projected components of unit quaternions as mixtures of uniform distributions that partition their geometrically-restricted domain of values. When trained on an "infinite" toy dataset with ambiguous viewpoints, AQuaMaM rapidly converges to a sampling distribution closely matching the true data distribution. In contrast, the sampling distribution for IPDF dramatically diverges from the true data distribution, despite IPDF approaching its theoretical minimum evaluation loss during training. When trained on a constructed dataset of 500,000 renders of a die in different rotations, AQuaMaM reaches a test log-likelihood 14% higher than IPDF. Further, compared to IPDF, AQuaMaM uses 24% fewer parameters, has a prediction throughput 52$\times$ faster on a single GPU, and converges in a similar amount of time during training.
[ { "version": "v1", "created": "Sat, 21 Jan 2023 00:40:21 GMT" } ]
2023-01-24T00:00:00
[ [ "Alcorn", "Michael A.", "" ] ]
new_dataset
0.984381
2301.08969
Mario Grobler
Mario Grobler, Leif Sabellek, Sebastian Siebertz
Parikh Automata on Infinite Words
null
null
null
null
cs.FL
http://creativecommons.org/licenses/by/4.0/
Parikh automata on finite words were first introduced by Klaedtke and Rueß [Automata, Languages and Programming, 2003]. In this paper, we introduce several variants of Parikh automata on infinite words and study their expressiveness. We show that one of our new models is equivalent to synchronous blind counter machines introduced by Fernau and Stiebe [Fundamenta Informaticae, 2008]. All our models admit ε-elimination, which to the best of our knowledge is an open question for blind counter automata. We then study the classical decision problems of the new automata models.
[ { "version": "v1", "created": "Sat, 21 Jan 2023 16:32:01 GMT" } ]
2023-01-24T00:00:00
[ [ "Grobler", "Mario", "" ], [ "Sabellek", "Leif", "" ], [ "Siebertz", "Sebastian", "" ] ]
new_dataset
0.968035
2301.09007
Hosein Barzekar
Hosein Barzekar, Yash Patel, Ling Tong, Zeyun Yu
MultiNet with Transformers: A Model for Cancer Diagnosis Using Images
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Cancer is a leading cause of death in many countries. An early diagnosis of cancer based on biomedical imaging ensures effective treatment and a better prognosis. However, biomedical imaging presents challenges to both clinical institutions and researchers. Physiological anomalies are often characterized by slight abnormalities in individual cells or tissues, making them difficult to detect visually. Traditionally, anomalies are diagnosed by radiologists and pathologists with extensive training. This procedure, however, demands the participation of professionals and incurs a substantial cost. The cost makes large-scale biological image classification impractical. In this study, we provide unique deep neural network designs for multiclass classification of medical images, in particular cancer images. We incorporated transformers into a multiclass framework to take advantage of data-gathering capability and perform more accurate classifications. We evaluated models on publicly accessible datasets using various measures to ensure the reliability of the models. Extensive assessment metrics suggest this method can be used for a multitude of classification tasks.
[ { "version": "v1", "created": "Sat, 21 Jan 2023 20:53:57 GMT" } ]
2023-01-24T00:00:00
[ [ "Barzekar", "Hosein", "" ], [ "Patel", "Yash", "" ], [ "Tong", "Ling", "" ], [ "Yu", "Zeyun", "" ] ]
new_dataset
0.995423
2301.09015
Letian Zhang
Letian Zhang, Jie Xu
E$^3$Pose: Energy-Efficient Edge-assisted Multi-camera System for Multi-human 3D Pose Estimation
null
null
null
null
cs.CV cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Multi-human 3D pose estimation plays a key role in establishing a seamless connection between the real world and the virtual world. Recent efforts adopted a two-stage framework that first builds 2D pose estimations in multiple camera views from different perspectives and then synthesizes them into 3D poses. However, the focus has largely been on developing new computer vision algorithms on the offline video datasets without much consideration on the energy constraints in real-world systems with flexibly-deployed and battery-powered cameras. In this paper, we propose an energy-efficient edge-assisted multiple-camera system, dubbed E$^3$Pose, for real-time multi-human 3D pose estimation, based on the key idea of adaptive camera selection. Instead of always employing all available cameras to perform 2D pose estimations as in the existing works, E$^3$Pose selects only a subset of cameras depending on their camera view qualities in terms of occlusion and energy states in an adaptive manner, thereby reducing the energy consumption (which translates to extended battery lifetime) and improving the estimation accuracy. To achieve this goal, E$^3$Pose incorporates an attention-based LSTM to predict the occlusion information of each camera view and guide camera selection before cameras are selected to process the images of a scene, and runs a camera selection algorithm based on the Lyapunov optimization framework to make long-term adaptive selection decisions. We build a prototype of E$^3$Pose on a 5-camera testbed, demonstrate its feasibility and evaluate its performance. Our results show that a significant energy saving (up to 31.21%) can be achieved while maintaining a high 3D pose estimation accuracy comparable to state-of-the-art methods.
[ { "version": "v1", "created": "Sat, 21 Jan 2023 21:53:33 GMT" } ]
2023-01-24T00:00:00
[ [ "Zhang", "Letian", "" ], [ "Xu", "Jie", "" ] ]
new_dataset
0.957725
2301.09025
Elisabeth Andre
Kathrin Janowski, Elisabeth André
Nichtverbales Verhalten sozialer Roboter: Bewegungen, deren Bedeutung und die Technik dahinter
12 pages, in German language, 4 figures
null
10.1007/978-3-658-31114-8_15
null
cs.RO cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Nichtverbale Signale sind ein elementarer Bestandteil der menschlichen Kommunikation. Sie erfüllen eine Vielzahl von Funktionen bei der Klärung von Mehrdeutigkeiten, der subtilen Aushandlung von Rollen oder dem Ausdruck dessen, was im Inneren der Gesprächspartner vorgeht. Viele Studien mit sozial-interaktiven Robotern zeigen, dass vom Menschen inspirierte Bewegungsmuster ähnlich interpretiert werden wie die von realen Personen. Dieses Kapitel erläutert daher die wichtigsten Funktionen, welche die jeweiligen Bewegungsmuster in der Kommunikation erfüllen, und gibt einen Überblick darüber, wie sie auf Roboter übertragen werden können. -- Non-verbal signals are a fundamental part of human communication. They serve a variety of functions in clarifying ambiguities, subtly negotiating roles, or expressing what is going on inside the interlocutors. Many studies with socially-interactive robots show that human-inspired movement patterns are interpreted similarly to those of real people. This chapter therefore explains the most important functions that the respective movement patterns fulfill in communication and gives an overview of how they can be transferred to robots.
[ { "version": "v1", "created": "Sat, 21 Jan 2023 23:31:36 GMT" } ]
2023-01-24T00:00:00
[ [ "Janowski", "Kathrin", "" ], [ "André", "Elisabeth", "" ] ]
new_dataset
0.995998
2301.09055
Ryan White
Andrew Ekblad and Trupti Mahendrakar and Ryan T. White and Markus Wilde and Isaac Silver and Brooke Wheeler
Resource-constrained FPGA Design for Satellite Component Feature Extraction
9 pages, 7 figures, 4 tables, Accepted at IEEE Aerospace Conference 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The effective use of computer vision and machine learning for on-orbit applications has been hampered by limited computing capabilities, and therefore limited performance. While embedded systems utilizing ARM processors have been shown to meet acceptable but low performance standards, the recent availability of larger space-grade field programmable gate arrays (FPGAs) shows potential to exceed the performance of microcomputer systems. This work proposes the use of a neural network-based object detection algorithm that can be deployed on a comparably resource-constrained FPGA to automatically detect components of non-cooperative satellites on orbit. Hardware-in-the-loop experiments were performed on the ORION Maneuver Kinematics Simulator at Florida Tech to compare the performance of the new model deployed on a small, resource-constrained FPGA to an equivalent algorithm on a microcomputer system. Results show the FPGA implementation increases throughput and decreases latency while maintaining comparable accuracy. These findings suggest future missions should consider deploying computer vision algorithms on space-grade FPGAs.
[ { "version": "v1", "created": "Sun, 22 Jan 2023 04:49:04 GMT" } ]
2023-01-24T00:00:00
[ [ "Ekblad", "Andrew", "" ], [ "Mahendrakar", "Trupti", "" ], [ "White", "Ryan T.", "" ], [ "Wilde", "Markus", "" ], [ "Silver", "Isaac", "" ], [ "Wheeler", "Brooke", "" ] ]
new_dataset
0.954094
2301.09059
Ryan White
Trupti Mahendrakar and Steven Holmberg and Andrew Ekblad and Emma Conti and Ryan T. White and Markus Wilde and Isaac Silver
Autonomous Rendezvous with Non-cooperative Target Objects with Swarm Chasers and Observers
Presented at AAS/AIAA Spaceflight Mechanics Meeting 2023, 17 pages, 9 figures, 3 tables
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
Space debris is on the rise due to the increasing demand for spacecraft for communication, navigation, and other applications. The Space Surveillance Network (SSN) tracks over 27,000 large pieces of debris and estimates the number of small, untrackable fragments at over 100,000. To control the growth of debris, the formation of further debris must be reduced. Some solutions include deorbiting larger non-cooperative resident space objects (RSOs) or servicing satellites in orbit. Both require rendezvous with RSOs, and the scale of the problem calls for autonomous missions. This paper introduces the Multipurpose Autonomous Rendezvous Vision-Integrated Navigation system (MARVIN) developed and tested at the ORION Facility at Florida Institute of Technology. MARVIN consists of two sub-systems: a machine vision-aided navigation system and an artificial potential field (APF) guidance algorithm which work together to command a swarm of chasers to safely rendezvous with the RSO. We present the MARVIN architecture and hardware-in-the-loop experiments demonstrating autonomous, collaborative swarm satellite operations successfully guiding three drones to rendezvous with a physical mockup of a non-cooperative satellite in motion.
[ { "version": "v1", "created": "Sun, 22 Jan 2023 05:22:11 GMT" } ]
2023-01-24T00:00:00
[ [ "Mahendrakar", "Trupti", "" ], [ "Holmberg", "Steven", "" ], [ "Ekblad", "Andrew", "" ], [ "Conti", "Emma", "" ], [ "White", "Ryan T.", "" ], [ "Wilde", "Markus", "" ], [ "Silver", "Isaac", "" ] ]
new_dataset
0.999131
2301.09093
Mohamad Assaad
Charbel Bou Chaaya, Mohamad Assaad, Tijani Chahed
RIS-assisted Cell-Free MIMO with Dynamic Arrivals and Departures of Users: A Novel Network Stability Approach
null
null
null
null
cs.IT cs.NI math.IT
http://creativecommons.org/licenses/by/4.0/
Reconfigurable Intelligent Surfaces (RIS) have recently emerged as a hot research topic, being widely advocated as a candidate technology for next generation wireless communications. These surfaces passively alter the behavior of propagation environments, enhancing the performance of wireless communication systems. In this paper, we study the use of RIS in a cell-free multiple-input multiple-output (MIMO) setting where distributed service antennas, called Access Points (APs), simultaneously serve the users in the network. While most existing works focus on the physical layer improvements RIS carry, less attention has been paid to the impact of dynamic arrivals and departures of the users. In such a case, ensuring the stability of the network is the main goal. For that, we propose an optimization framework of the phase shifts, for which we derived a low-complexity solution. We then provide a theoretical analysis of the network stability and show that our framework stabilizes the network whenever it is possible. We also prove that a low complexity solution of our framework stabilizes a guaranteed fraction (higher than 78.5%) of the stability region. We also provide numerical results that corroborate the theoretical claims.
[ { "version": "v1", "created": "Sun, 22 Jan 2023 10:21:44 GMT" } ]
2023-01-24T00:00:00
[ [ "Chaaya", "Charbel Bou", "" ], [ "Assaad", "Mohamad", "" ], [ "Chahed", "Tijani", "" ] ]
new_dataset
0.992971
2301.09251
Kush Bhatia
Pranjal Awasthi, Kush Bhatia, Sreenivas Gollapudi, Kostas Kollias
Congested Bandits: Optimal Routing via Short-term Resets
Published at ICML 2022
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For traffic routing platforms, the choice of which route to recommend to a user depends on the congestion on these routes -- indeed, an individual's utility depends on the number of people using the recommended route at that instance. Motivated by this, we introduce the problem of Congested Bandits where each arm's reward is allowed to depend on the number of times it was played in the past $\Delta$ timesteps. This dependence on past history of actions leads to a dynamical system where an algorithm's present choices also affect its future pay-offs, and requires an algorithm to plan for this. We study the congestion aware formulation in the multi-armed bandit (MAB) setup and in the contextual bandit setup with linear rewards. For the multi-armed setup, we propose a UCB style algorithm and show that its policy regret scales as $\tilde{O}(\sqrt{K \Delta T})$. For the linear contextual bandit setup, our algorithm, based on an iterative least squares planner, achieves policy regret $\tilde{O}(\sqrt{dT} + \Delta)$. From an experimental standpoint, we corroborate the no-regret properties of our algorithms via a simulation study.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 03:11:06 GMT" } ]
2023-01-24T00:00:00
[ [ "Awasthi", "Pranjal", "" ], [ "Bhatia", "Kush", "" ], [ "Gollapudi", "Sreenivas", "" ], [ "Kollias", "Kostas", "" ] ]
new_dataset
0.999197
2301.09310
Jinho Lee
Seongyeon Park, Hajin Kim, Tanveer Ahmad, Nauman Ahmed, Zaid Al-Ars, H. Peter Hofstee, Youngsok Kim, and Jinho Lee
SaLoBa: Maximizing Data Locality and Workload Balance for Fast Sequence Alignment on GPUs
Published at IPDPS'22
null
null
null
cs.DB cs.DC
http://creativecommons.org/licenses/by/4.0/
Sequence alignment forms an important backbone in many sequencing applications. A commonly used strategy for sequence alignment is approximate string matching with a two-dimensional dynamic programming approach. Although some prior work has been conducted on GPU acceleration of sequence alignment, we identify several shortcomings that limit exploiting the full computational capability of modern GPUs. This paper presents SaLoBa, a GPU-accelerated sequence alignment library focused on seed extension. Based on the analysis of previous work with real-world sequencing data, we propose techniques to exploit the data locality and improve workload balancing. The experimental results reveal that SaLoBa significantly improves the seed extension kernel compared to state-of-the-art GPU-based methods.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 08:14:40 GMT" } ]
2023-01-24T00:00:00
[ [ "Park", "Seongyeon", "" ], [ "Kim", "Hajin", "" ], [ "Ahmad", "Tanveer", "" ], [ "Ahmed", "Nauman", "" ], [ "Al-Ars", "Zaid", "" ], [ "Hofstee", "H. Peter", "" ], [ "Kim", "Youngsok", "" ], [ "Lee", "Jinho", "" ] ]
new_dataset
0.999656
2301.09339
Khalid Alnujaidi
Khalid Alnujaidi and Ghadah Alhabib
Computer Vision for a Camel-Vehicle Collision Mitigation System
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
As the population grows and more land is used for urbanization, ecosystems are disrupted by our roads and cars. This expansion of infrastructure cuts through wildlife territories, leading to many instances of Wildlife-Vehicle Collision (WVC). WVC is a worldwide issue with a significant socio-economic impact, resulting in billions of dollars in property damage and, at times, fatalities for vehicle occupants. In Saudi Arabia, this issue is similar, with instances of Camel-Vehicle Collision (CVC) being particularly deadly due to the large size of camels, which results in a 25% fatality rate [4]. The focus of this work is to test different object detection models on the task of detecting camels on the road. The Deep Learning (DL) object detection models used in the experiments are: CenterNet, EfficientDet, Faster R-CNN, and SSD. Results of the experiments show that CenterNet performed the best in terms of accuracy and was the most efficient in training. In the future, the plan is to expand on this work by developing a system to make countryside roads safer.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 09:45:31 GMT" } ]
2023-01-24T00:00:00
[ [ "Alnujaidi", "Khalid", "" ], [ "Alhabib", "Ghadah", "" ] ]
new_dataset
0.998667
2301.09378
Xavier Salleras
Xavier Salleras
Citadel: Self-Sovereign Identities on Dusk Network
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
The amount of sensitive information that service providers handle about their users has become a concerning fact in many use cases, where users have no other option but to trust that those companies will not misuse their personal information. To solve that, Self-Sovereign Identity (SSI) systems have become a hot topic of research in recent years: SSI systems allow users to manage their identities transparently. Recent solutions represent the rights of users to use services as Non-Fungible Tokens (NFTs) stored on Blockchains, and users prove possession of these rights using Zero-Knowledge Proofs (ZKPs). However, even when ZKPs do not leak any information about the rights, the NFTs are stored as public values linked to known accounts, and thus, they can be traced. In this paper, we design a native privacy-preserving NFT model for the Dusk Network Blockchain, and on top of it, we deploy Citadel: our novel full-privacy-preserving SSI system, where the rights of the users are privately stored on the Dusk Network Blockchain, and users can prove their ownership in a fully private manner.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 11:47:38 GMT" } ]
2023-01-24T00:00:00
[ [ "Salleras", "Xavier", "" ] ]
new_dataset
0.997028
2301.09396
Diego Silva
Josue Rivera, Julio Garrido, Enrique Riveiro, Diego Silva
Environment for the Design and Automation of New CDPR Architectures
8 pages, 7 figures, preprint, FAIM 2023 conference
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents a design and automation environment to study trajectory control for new CDPR architectures, for instance CDPRs with an unusual number of cables or a different motor location in the robot frame. To test the environment's capabilities, an architecture of a planar under-constrained CDPR was designed, simulated, and implemented using standard industrial hardware. Both the simulated model and the industrial prototype ran the same trajectories to determine the time delay and the position error between them. The tests have demonstrated that the simulated model of the CDPR reproduces the trajectories of the equivalent industrial prototype with a maximum deviation of 0.35% under loading and different speed conditions, despite the time delays produced by the data transmission and the non-deterministic communication protocols used to connect the industrial automation controller with the simulated model. The results have shown that the environment is suitable for trajectory control and workspace analysis of new CDPR architectures under different dynamic conditions.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 12:32:42 GMT" } ]
2023-01-24T00:00:00
[ [ "Rivera", "Josue", "" ], [ "Garrido", "Julio", "" ], [ "Riveiro", "Enrique", "" ], [ "Silva", "Diego", "" ] ]
new_dataset
0.99276
2301.09404
Dipak Kumar Bhunia
Dipak K. Bhunia, Cristina Fern\'andez-C\'ordoba, Merc\`e Villanueva
$\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-Additive Hadamard Codes
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
The $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-additive codes are subgroups of $\mathbb{Z}_2^{\alpha_1} \times \mathbb{Z}_4^{\alpha_2} \times \mathbb{Z}_8^{\alpha_3}$, and can be seen as linear codes over $\mathbb{Z}_2$ when $\alpha_2=\alpha_3=0$, $\mathbb{Z}_4$-additive or $\mathbb{Z}_8$-additive codes when $\alpha_1=\alpha_3=0$ or $\alpha_1=\alpha_2=0$, respectively, or $\mathbb{Z}_2\mathbb{Z}_4$-additive codes when $\alpha_3=0$. A $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard code is a Hadamard code which is the Gray map image of a $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-additive code. In this paper, we generalize some known results for $\mathbb{Z}_2\mathbb{Z}_4$-linear Hadamard codes to $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard codes with $\alpha_1 \neq 0$, $\alpha_2 \neq 0$, and $\alpha_3 \neq 0$. First, we give a recursive construction of $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-additive Hadamard codes of type $(\alpha_1,\alpha_2, \alpha_3;t_1,t_2, t_3)$ with $t_1\geq 1$, $t_2 \geq 0$, and $t_3\geq 1$. Then, we show that in general the $\mathbb{Z}_4$-linear, $\mathbb{Z}_8$-linear and $\mathbb{Z}_2\mathbb{Z}_4$-linear Hadamard codes are not included in the family of $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard codes with $\alpha_1 \neq 0$, $\alpha_2 \neq 0$, and $\alpha_3 \neq 0$. Actually, we point out that none of these nonlinear $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard codes of length $2^{11}$ is equivalent to a $\mathbb{Z}_2\mathbb{Z}_4\mathbb{Z}_8$-linear Hadamard code of any other type, a $\mathbb{Z}_2\mathbb{Z}_4$-linear Hadamard code, or a $\mathbb{Z}_{2^s}$-linear Hadamard code, with $s\geq 2$, of the same length $2^{11}$.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 12:56:26 GMT" } ]
2023-01-24T00:00:00
[ [ "Bhunia", "Dipak K.", "" ], [ "Fernández-Córdoba", "Cristina", "" ], [ "Villanueva", "Mercè", "" ] ]
new_dataset
0.997666
2301.09440
Ana\"is Villedieu
Martin Gronemann and Martin N\"ollenburg and Ana\"is Villedieu
Splitting Plane Graphs to Outerplanarity
12 pages, 4 figures, appears in the proceedings of WALCOM 2023
null
null
null
cs.CG
http://creativecommons.org/licenses/by/4.0/
Vertex splitting replaces a vertex by two copies and partitions its incident edges amongst the copies. This problem has been studied as a graph editing operation to achieve desired properties with as few splits as possible, most often planarity, for which the problem is NP-hard. Here we study how to minimize the number of splits to turn a plane graph into an outerplane one. We tackle this problem by establishing a direct connection between splitting a plane graph to outerplanarity, finding a connected face cover, and finding a feedback vertex set in its dual. We prove NP-completeness for plane biconnected graphs, while we show that a polynomial-time algorithm exists for maximal planar graphs. Finally, we provide upper and lower bounds for certain families of maximal planar graphs.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 14:02:10 GMT" } ]
2023-01-24T00:00:00
[ [ "Gronemann", "Martin", "" ], [ "Nöllenburg", "Martin", "" ], [ "Villedieu", "Anaïs", "" ] ]
new_dataset
0.996265
2301.09460
Michael Ying Yang
Kun Li, George Vosselman, Michael Ying Yang
HRVQA: A Visual Question Answering Benchmark for High-Resolution Aerial Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Visual question answering (VQA) is an important and challenging multimodal task in computer vision. Recently, a few efforts have been made to bring the VQA task to aerial images, due to its potential real-world applications in disaster monitoring, urban planning, and digital earth product generation. However, not only the huge variation in the appearance, scale, and orientation of the concepts in aerial images, but also the scarcity of well-annotated datasets restricts the development of VQA in this domain. In this paper, we introduce a new dataset, HRVQA, which provides 53,512 collected aerial images of 1024*1024 pixels and 1,070,240 semi-automatically generated QA pairs. To benchmark the understanding capability of VQA models for aerial images, we evaluate the relevant methods on HRVQA. Moreover, we propose a novel model, GFTransformer, with gated attention modules and a mutual fusion module. The experiments show that the proposed dataset is quite challenging, especially for the specific attribute-related questions. Our method achieves superior performance in comparison to the previous state-of-the-art approaches. The dataset and the source code will be released at https://hrvqa.nl/.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 14:36:38 GMT" } ]
2023-01-24T00:00:00
[ [ "Li", "Kun", "" ], [ "Vosselman", "George", "" ], [ "Yang", "Michael Ying", "" ] ]
new_dataset
0.999561
2301.09545
Jesse Josua Benjamin
Jesse Josua Benjamin, Heidi Biggs, Arne Berger, Julija Rukanskait\.e, Michael Heidt, Nick Merrill, James Pierce, Joseph Lindley
The Entoptic Field Camera as Metaphor-Driven Research-through-Design with AI Technologies
To be published in Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (CHI '23), April 23--28, 2023, Hamburg, Germany
null
10.1145/3544548.3581175
null
cs.HC cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Artificial intelligence (AI) technologies are widely deployed in smartphone photography; and prompt-based image synthesis models have rapidly become commonplace. In this paper, we describe a Research-through-Design (RtD) project which explores this shift in the means and modes of image production via the creation and use of the Entoptic Field Camera. Entoptic phenomena usually refer to perceptions of floaters or bright blue dots stemming from the physiological interplay of the eye and brain. We use the term entoptic as a metaphor to investigate how the material interplay of data and models in AI technologies shapes human experiences of reality. Through our case study using first-person design and a field study, we offer implications for critical, reflective, more-than-human and ludic design to engage AI technologies; the conceptualisation of an RtD research space which contributes to AI literacy discourses; and outline a research trajectory concerning materiality and design affordances of AI technologies.
[ { "version": "v1", "created": "Mon, 23 Jan 2023 17:03:54 GMT" } ]
2023-01-24T00:00:00
[ [ "Benjamin", "Jesse Josua", "" ], [ "Biggs", "Heidi", "" ], [ "Berger", "Arne", "" ], [ "Rukanskaitė", "Julija", "" ], [ "Heidt", "Michael", "" ], [ "Merrill", "Nick", "" ], [ "Pierce", "James", "" ], [ "Lindley", "Joseph", "" ] ]
new_dataset
0.998632
2110.05668
Renbo Tu
Renbo Tu, Nicholas Roberts, Mikhail Khodak, Junhong Shen, Frederic Sala, Ameet Talwalkar
NAS-Bench-360: Benchmarking Neural Architecture Search on Diverse Tasks
NeurIPS 2022 Datasets and Benchmarks Track
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Most existing neural architecture search (NAS) benchmarks and algorithms prioritize well-studied tasks, e.g. image classification on CIFAR or ImageNet. This makes the performance of NAS approaches in more diverse areas poorly understood. In this paper, we present NAS-Bench-360, a benchmark suite to evaluate methods on domains beyond those traditionally studied in architecture search, and use it to address the following question: do state-of-the-art NAS methods perform well on diverse tasks? To construct the benchmark, we curate ten tasks spanning a diverse array of application domains, dataset sizes, problem dimensionalities, and learning objectives. Each task is carefully chosen to interoperate with modern CNN-based search methods while possibly being far-afield from its original development domain. To speed up and reduce the cost of NAS research, for two of the tasks we release the precomputed performance of 15,625 architectures comprising a standard CNN search space. Experimentally, we show the need for more robust NAS evaluation of the kind NAS-Bench-360 enables by showing that several modern NAS procedures perform inconsistently across the ten tasks, with many catastrophically poor results. We also demonstrate how NAS-Bench-360 and its associated precomputed results will enable future scientific discoveries by testing whether several recent hypotheses promoted in the NAS literature hold on diverse tasks. NAS-Bench-360 is hosted at https://nb360.ml.cmu.edu.
[ { "version": "v1", "created": "Tue, 12 Oct 2021 01:13:18 GMT" }, { "version": "v2", "created": "Sat, 16 Oct 2021 00:52:02 GMT" }, { "version": "v3", "created": "Tue, 26 Oct 2021 19:37:48 GMT" }, { "version": "v4", "created": "Wed, 6 Jul 2022 01:30:47 GMT" }, { "version": "v5", "created": "Wed, 26 Oct 2022 21:15:13 GMT" }, { "version": "v6", "created": "Thu, 19 Jan 2023 23:17:16 GMT" } ]
2023-01-23T00:00:00
[ [ "Tu", "Renbo", "" ], [ "Roberts", "Nicholas", "" ], [ "Khodak", "Mikhail", "" ], [ "Shen", "Junhong", "" ], [ "Sala", "Frederic", "" ], [ "Talwalkar", "Ameet", "" ] ]
new_dataset
0.970355
2204.01485
Edward Boyda
Caleb Kruse, Edward Boyda, Sully Chen, Krishna Karra, Tristan Bou-Nahra, Dan Hammer, Jennifer Mathis, Taylor Maddalene, Jenna Jambeck, Fabien Laurier
Satellite Monitoring of Terrestrial Plastic Waste
14 pages, 14 figures
PLoS ONE 18(1): e0278997 (2023)
10.1371/journal.pone.0278997
null
cs.CY cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Plastic waste is a significant environmental pollutant that is difficult to monitor. We created a system of neural networks to analyze spectral, spatial, and temporal components of Sentinel-2 satellite data to identify terrestrial aggregations of waste. The system works at continental scale. We evaluated performance in Indonesia and detected 374 waste aggregations, more than double the number of sites found in public databases. The same system deployed across twelve countries in Southeast Asia identifies 996 subsequently confirmed waste sites. For each detected site, we algorithmically monitor waste site footprints through time and cross-reference other datasets to generate physical and social metadata. 19% of detected waste sites are located within 200 m of a waterway. Numerous sites sit directly on riverbanks, with high risk of ocean leakage.
[ { "version": "v1", "created": "Thu, 24 Mar 2022 22:17:11 GMT" } ]
2023-01-23T00:00:00
[ [ "Kruse", "Caleb", "" ], [ "Boyda", "Edward", "" ], [ "Chen", "Sully", "" ], [ "Karra", "Krishna", "" ], [ "Bou-Nahra", "Tristan", "" ], [ "Hammer", "Dan", "" ], [ "Mathis", "Jennifer", "" ], [ "Maddalene", "Taylor", "" ], [ "Jambeck", "Jenna", "" ], [ "Laurier", "Fabien", "" ] ]
new_dataset
0.994431
2207.00658
Zhiwu Zheng
Zhiwu Zheng, Hsin Cheng, Prakhar Kumar, Sigurd Wagner, Minjie Chen, Naveen Verma and James C. Sturm
Wirelessly-Controlled Untethered Piezoelectric Planar Soft Robot Capable of Bidirectional Crawling and Rotation
Accepted to the 2023 IEEE International Conference on Robotics and Automation (ICRA)
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Electrostatic actuators provide a promising approach to creating soft robotic sheets, due to their flexible form factor, modular integration, and fast response speed. However, their control requires kilo-Volt signals and understanding of complex dynamics resulting from force interactions by on-board and environmental effects. In this work, we demonstrate an untethered planar five-actuator piezoelectric robot powered by batteries and on-board high-voltage circuitry, and controlled through a wireless link. The scalable fabrication approach is based on bonding different functional layers on top of each other (steel foil substrate, actuators, flexible electronics). The robot exhibits a range of controllable motions, including bidirectional crawling (up to ~0.6 cm/s), turning, and in-place rotation (at ~1 degree/s). High-speed videos and control experiments show that the richness of the motion results from the interaction of an asymmetric mass distribution in the robot and the associated dependence of the dynamics on the driving frequency of the piezoelectrics. The robot's speed can reach 6 cm/s with specific payload distribution.
[ { "version": "v1", "created": "Fri, 1 Jul 2022 20:55:01 GMT" }, { "version": "v2", "created": "Thu, 19 Jan 2023 21:46:27 GMT" } ]
2023-01-23T00:00:00
[ [ "Zheng", "Zhiwu", "" ], [ "Cheng", "Hsin", "" ], [ "Kumar", "Prakhar", "" ], [ "Wagner", "Sigurd", "" ], [ "Chen", "Minjie", "" ], [ "Verma", "Naveen", "" ], [ "Sturm", "James C.", "" ] ]
new_dataset
0.999536
2207.01393
Hampus Gummesson Svensson
Hampus Gummesson Svensson, Esben Jannik Bjerrum, Christian Tyrchan, Ola Engkvist and Morteza Haghir Chehreghani
Autonomous Drug Design with Multi-Armed Bandits
null
null
null
null
cs.LG q-bio.BM
http://creativecommons.org/licenses/by/4.0/
Recent developments in artificial intelligence and automation support a new drug design paradigm: autonomous drug design. Under this paradigm, generative models can provide suggestions on thousands of molecules with specific properties, and automated laboratories can potentially make, test and analyze molecules with minimal human supervision. However, since still only a limited number of molecules can be synthesized and tested, an obvious challenge is how to efficiently select among provided suggestions in a closed-loop system. We formulate this task as a stochastic multi-armed bandit problem with multiple plays, volatile arms and similarity information. To solve this task, we adapt previous work on multi-armed bandits to this setting, and compare our solution with random sampling, greedy selection and decaying-epsilon-greedy selection strategies. According to our simulation results, our approach has the potential to perform better exploration and exploitation of the chemical space for autonomous drug design.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 13:21:31 GMT" }, { "version": "v2", "created": "Fri, 20 Jan 2023 13:33:13 GMT" } ]
2023-01-23T00:00:00
[ [ "Svensson", "Hampus Gummesson", "" ], [ "Bjerrum", "Esben Jannik", "" ], [ "Tyrchan", "Christian", "" ], [ "Engkvist", "Ola", "" ], [ "Chehreghani", "Morteza Haghir", "" ] ]
new_dataset
0.961153
2207.01814
Jeiyoon Park
Jeiyoon Park, Kiho Kwoun, Chanhee Lee, Heuiseok Lim
Multimodal Frame-Scoring Transformer for Video Summarization
preprint
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As the amount of video content has mushroomed in recent years, automatic video summarization has become useful when we want to just peek at the content of a video. However, there are two underlying limitations in the generic video summarization task. First, most previous approaches read in only visual features as input, leaving other modality features behind. Second, existing datasets for generic video summarization are relatively insufficient to train a caption generator used for extracting text information from a video and to train the multimodal feature extractors. To address these two problems, this paper proposes the Multimodal Frame-Scoring Transformer (MFST), a framework exploiting visual, text, and audio features and scoring a video with respect to frames. Our MFST framework first extracts each modality's features (audio-visual-text) using pretrained encoders. Then, MFST trains the multimodal frame-scoring transformer that uses multimodal representations based on extracted features as inputs and predicts frame-level scores. Our extensive experiments with previous models and ablation studies on the TVSum and SumMe datasets demonstrate the effectiveness and superiority of our proposed method by a large margin in both F1 score and rank-based evaluation.
[ { "version": "v1", "created": "Tue, 5 Jul 2022 05:14:15 GMT" }, { "version": "v2", "created": "Mon, 21 Nov 2022 06:34:54 GMT" }, { "version": "v3", "created": "Fri, 20 Jan 2023 00:18:49 GMT" } ]
2023-01-23T00:00:00
[ [ "Park", "Jeiyoon", "" ], [ "Kwoun", "Kiho", "" ], [ "Lee", "Chanhee", "" ], [ "Lim", "Heuiseok", "" ] ]
new_dataset
0.950264
2207.13398
Manuel Guimar\~aes
Manuel Guimar\~aes, Pedro A. Santos, Arnav Jhala
Emergent social NPC interactions in the Social NPCs Skyrim mod and beyond
Originally a chapter for Game AI Pro, contains 14 pages, 3 figures
null
null
null
cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
This work presents an implementation of a social architecture model for authoring Non-Player Characters (NPCs) in open world games, inspired by academic research on agent-based modeling. Believable NPC authoring is burdensome in terms of rich dialogue and responsive behaviors. We briefly present the characteristics and advantages of using a social agent architecture for this task and describe an implementation of the social agent architecture CiF-CK, released as the Social NPCs mod for The Elder Scrolls V: Skyrim.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 09:30:23 GMT" }, { "version": "v2", "created": "Fri, 20 Jan 2023 16:15:12 GMT" } ]
2023-01-23T00:00:00
[ [ "Guimarães", "Manuel", "" ], [ "Santos", "Pedro A.", "" ], [ "Jhala", "Arnav", "" ] ]
new_dataset
0.95537
2209.06545
Junyuan Lu
Junyuan Lu, Zeyu Wan and Yu Zhang
Tac2Structure: Object Surface Reconstruction Only through Multi Times Touch
Accepted for publication in IEEE Robotics And Automation Letters
null
10.1109/LRA.2023.3238190
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Inspired by humans' ability to perceive the surface texture of unfamiliar objects without relying on vision, the sense of touch can play a crucial role in robots exploring the environment, particularly in scenes where vision is difficult to apply, or occlusion is inevitable. Existing tactile surface reconstruction methods rely on external sensors or have strong prior assumptions, making the operation complex and limiting their application scenarios. This paper presents a framework for low-drift surface reconstruction through multiple tactile measurements, Tac2Structure. Compared with existing algorithms, the proposed method uses only a new vision-based tactile sensor without relying on external devices. To address the difficulty that reconstruction accuracy is easily affected by the contact pressure, we propose a correction algorithm to compensate for it. The proposed method also reduces the accumulative errors that occur easily during global object surface reconstruction. Multi-frame tactile measurements can accurately reconstruct object surfaces by jointly using a point cloud registration algorithm, a deep-learning-based loop-closure detection algorithm, and a pose graph optimization algorithm. Experiments verify that Tac2Structure can achieve millimeter-level accuracy in reconstructing the surface of objects, providing accurate tactile information for the robot to perceive the surrounding environment.
[ { "version": "v1", "created": "Wed, 14 Sep 2022 10:48:30 GMT" }, { "version": "v2", "created": "Tue, 27 Dec 2022 06:22:16 GMT" }, { "version": "v3", "created": "Thu, 12 Jan 2023 15:20:40 GMT" } ]
2023-01-23T00:00:00
[ [ "Lu", "Junyuan", "" ], [ "Wan", "Zeyu", "" ], [ "Zhang", "Yu", "" ] ]
new_dataset
0.99936
2301.07853
Adam Norton
Adam Norton, Reza Ahmadzadeh, Kshitij Jerath, Paul Robinette, Jay Weitzen, Thanuka Wickramarathne, Holly Yanco, Minseop Choi, Ryan Donald, Brendan Donoghue, Christian Dumas, Peter Gavriel, Alden Giedraitis, Brendan Hertel, Jack Houle, Nathan Letteri, Edwin Meriaux, Zahra Rezaei Khavas, Rakshith Singh, Gregg Willcox, Naye Yoni (University of Massachusetts Lowell)
DECISIVE Benchmarking Data Report: sUAS Performance Results from Phase I
Approved for public release: PAO #PR2023_74172; arXiv admin note: substantial text overlap with arXiv:2211.01801
null
null
null
cs.RO cs.HC cs.SY eess.SY
http://creativecommons.org/publicdomain/zero/1.0/
This report reviews all results derived from performance benchmarking conducted during Phase I of the Development and Execution of Comprehensive and Integrated Subterranean Intelligent Vehicle Evaluations (DECISIVE) project by the University of Massachusetts Lowell, using the test methods specified in the DECISIVE Test Methods Handbook v1.1 for evaluating small unmanned aerial systems (sUAS) performance in subterranean and constrained indoor environments, spanning communications, field readiness, interface, obstacle avoidance, navigation, mapping, autonomy, trust, and situation awareness. Using those 20 test methods, over 230 tests were conducted across 8 sUAS platforms: Cleo Robotics Dronut X1P (P = prototype), FLIR Black Hornet PRS, Flyability Elios 2 GOV, Lumenier Nighthawk V3, Parrot ANAFI USA GOV, Skydio X2D, Teal Golden Eagle, and Vantage Robotics Vesper. Best in class criteria is specified for each applicable test method and the sUAS that match this criteria are named for each test method, including a high-level executive summary of their performance.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 02:50:40 GMT" }, { "version": "v2", "created": "Fri, 20 Jan 2023 14:05:23 GMT" } ]
2023-01-23T00:00:00
[ [ "Norton", "Adam", "", "University of Massachusetts Lowell" ], [ "Ahmadzadeh", "Reza", "", "University of Massachusetts Lowell" ], [ "Jerath", "Kshitij", "", "University of Massachusetts Lowell" ], [ "Robinette", "Paul", "", "University of Massachusetts Lowell" ], [ "Weitzen", "Jay", "", "University of Massachusetts Lowell" ], [ "Wickramarathne", "Thanuka", "", "University of Massachusetts Lowell" ], [ "Yanco", "Holly", "", "University of Massachusetts Lowell" ], [ "Choi", "Minseop", "", "University of Massachusetts Lowell" ], [ "Donald", "Ryan", "", "University of Massachusetts Lowell" ], [ "Donoghue", "Brendan", "", "University of Massachusetts Lowell" ], [ "Dumas", "Christian", "", "University of Massachusetts Lowell" ], [ "Gavriel", "Peter", "", "University of Massachusetts Lowell" ], [ "Giedraitis", "Alden", "", "University of Massachusetts Lowell" ], [ "Hertel", "Brendan", "", "University of Massachusetts Lowell" ], [ "Houle", "Jack", "", "University of Massachusetts Lowell" ], [ "Letteri", "Nathan", "", "University of Massachusetts Lowell" ], [ "Meriaux", "Edwin", "", "University of Massachusetts Lowell" ], [ "Khavas", "Zahra Rezaei", "", "University of Massachusetts Lowell" ], [ "Singh", "Rakshith", "", "University of Massachusetts Lowell" ], [ "Willcox", "Gregg", "", "University of Massachusetts Lowell" ], [ "Yoni", "Naye", "", "University of Massachusetts Lowell" ] ]
new_dataset
0.996572
2301.08295
Debarnab Mitra
Debarnab Mitra, Lev Tauz, and Lara Dolecek
Polar Coded Merkle Tree: Mitigating Data Availability Attacks in Blockchain Systems Using Informed Polar Code Design
36 pages, 10 figures, 2 tables, submitted to IEEE Journal on Selected Areas in Information Theory
null
null
null
cs.IT cs.CR math.IT
http://creativecommons.org/licenses/by/4.0/
Data availability (DA) attack is a well-known problem in certain blockchains where users accept an invalid block with unavailable portions. Previous works have used LDPC and 2-D Reed Solomon (2D-RS) codes with Merkle trees to mitigate DA attacks. These codes perform well across various metrics such as DA detection probability and communication cost. However, these codes are difficult to apply to blockchains with large blocks due to large decoding complexity and coding fraud proof size (2D-RS codes), and intractable code guarantees for large code lengths (LDPC codes). In this paper, we focus on large block size applications and address the above challenges by proposing the novel Polar Coded Merkle Tree (PCMT): a Merkle tree encoded using the encoding graph of polar codes. We provide a specialized polar code design algorithm called Sampling Efficient Freezing and an algorithm to prune the polar encoding graph. We demonstrate that the PCMT built using the above techniques results in a better DA detection probability and communication cost compared to LDPC codes, has a lower coding fraud proof size compared to LDPC and 2D-RS codes, provides tractable code guarantees at large code lengths (similar to 2D-RS codes), and has comparable decoding complexity to 2D-RS and LDPC codes.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 20:12:28 GMT" } ]
2023-01-23T00:00:00
[ [ "Mitra", "Debarnab", "" ], [ "Tauz", "Lev", "" ], [ "Dolecek", "Lara", "" ] ]
new_dataset
0.997432
2301.08327
Frederike D\"umbgen
Frederike D\"umbgen, Adrien Hoffet, Mihailo Kolund\v{z}ija, Adam Scholefield, Martin Vetterli
Blind as a bat: audible echolocation on small robots
8 pages, 10 figures, published in IEEE Robotics and Automation Letters
null
10.1109/LRA.2022.3194669
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For safe and efficient operation, mobile robots need to perceive their environment, and in particular, perform tasks such as obstacle detection, localization, and mapping. Although robots are often equipped with microphones and speakers, the audio modality is rarely used for these tasks. Compared to the localization of sound sources, for which many practical solutions exist, algorithms for active echolocation are less developed and often rely on hardware requirements that are out of reach for small robots. We propose an end-to-end pipeline for sound-based localization and mapping that is targeted at, but not limited to, robots equipped with only simple buzzers and low-end microphones. The method is model-based, runs in real time, and requires no prior calibration or training. We successfully test the algorithm on the e-puck robot with its integrated audio hardware, and on the Crazyflie drone, for which we design a reproducible audio extension deck. We achieve centimeter-level wall localization on both platforms when the robots are static during the measurement process. Even in the more challenging setting of a flying drone, we can successfully localize walls, which we demonstrate in a proof-of-concept multi-wall localization and mapping demo.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 21:35:13 GMT" } ]
2023-01-23T00:00:00
[ [ "Dümbgen", "Frederike", "" ], [ "Hoffet", "Adrien", "" ], [ "Kolundžija", "Mihailo", "" ], [ "Scholefield", "Adam", "" ], [ "Vetterli", "Martin", "" ] ]
new_dataset
0.99828
2301.08343
Zixi Chen
Zixi Chen, Shixin Zhang, Shan Luo, Fuchun Sun, Bin Fang
Tacchi: A Pluggable and Low Computational Cost Elastomer Deformation Simulator for Optical Tactile Sensors
8 pages, 6 figures, accepted by IEEE Robotics and Automation Letters
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Simulation is widely applied in robotics research to save time and resources. There have been several works to simulate optical tactile sensors that leverage either a smoothing method or the Finite Element Method (FEM). However, elastomer deformation physics is not considered in the former method, whereas the latter requires a massive amount of computational resources like a computer cluster. In this work, we propose a pluggable and low computational cost simulator for optical tactile sensors using the Taichi programming language, named Tacchi. It reconstructs elastomer deformation using particles, which allows deformed elastomer surfaces to be rendered into tactile images and reveals contact information without suffering from high computational costs. Tacchi facilitates creating realistic tactile images in simulation, e.g., ones that capture wear-and-tear defects on object surfaces. In addition, the proposed Tacchi can be integrated with robotics simulators for robot system simulation. Experiment results showed that Tacchi can produce images with better similarity to real images and achieved higher Sim2Real accuracy compared to the existing methods. Moreover, it can be connected with MuJoCo and Gazebo, requiring only 1 GB of GPU memory, compared to the computer cluster required for FEM. With Tacchi, physical robot simulation with optical tactile sensors becomes possible. All the materials in this paper are available at https://github.com/zixichen007115/Tacchi .
[ { "version": "v1", "created": "Thu, 19 Jan 2023 22:35:03 GMT" } ]
2023-01-23T00:00:00
[ [ "Chen", "Zixi", "" ], [ "Zhang", "Shixin", "" ], [ "Luo", "Shan", "" ], [ "Sun", "Fuchun", "" ], [ "Fang", "Bin", "" ] ]
new_dataset
0.994891
2301.08348
Samuel Epstein
Samuel Epstein
A Quantum EL Theorem
arXiv admin note: text overlap with arXiv:2102.03905
null
null
null
cs.CC quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we prove a quantum version of the EL Theorem. It states that non-exotic projections of large rank must have simple quantum states in their images. A consequence of this is that there is no way to communicate a quantum source with a correspondingly large von Neumann entropy without using simple quantum states.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 22:44:45 GMT" } ]
2023-01-23T00:00:00
[ [ "Epstein", "Samuel", "" ] ]
new_dataset
0.995941
2301.08406
Xu Chen
Hao Wang and Hao Bao and Liekang Zeng and Ke Luo and Xu Chen
Real-Time High-Resolution Pedestrian Detection in Crowded Scenes via Parallel Edge Offloading
Accepted by IEEE ICC 2023
null
null
null
cs.NI cs.AI cs.DC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To identify dense and small-size pedestrians in surveillance systems, high-resolution cameras are widely deployed, where high-resolution images are captured and delivered to off-the-shelf pedestrian detection models. However, given the highly computation-intensive workload brought by the high resolution, the resource-constrained cameras fail to afford accurate inference in real time. To address that, we propose Hode, an offloaded video analytic framework that utilizes multiple edge nodes in proximity to expedite pedestrian detection with high-resolution inputs. Specifically, Hode can intelligently split high-resolution images into respective regions and then offload them to distributed edge nodes to perform pedestrian detection in parallel. A spatio-temporal flow filtering method is designed to enable context-aware region partitioning, as well as a DRL-based scheduling algorithm to allow accuracy-aware load balance among heterogeneous edge nodes. Extensive evaluation results using realistic prototypes show that Hode can achieve up to 2.01% speedup with very mild accuracy loss.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 02:51:53 GMT" } ]
2023-01-23T00:00:00
[ [ "Wang", "Hao", "" ], [ "Bao", "Hao", "" ], [ "Zeng", "Liekang", "" ], [ "Luo", "Ke", "" ], [ "Chen", "Xu", "" ] ]
new_dataset
0.969861
2301.08431
Kevin Haninger
Richard Matthias Hartisch and Kevin Haninger
Compliant finray-effect gripper for high-speed robotic assembly of electrical components
8 pages, 3 figures, video here: https://youtu.be/J7EGXtE54oYz, CAD here: https://github.com/richardhartisch/compliantfinray
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Fine assembly tasks such as electrical connector insertion have tight tolerances and sensitive components, limiting the speed and robustness of robot assembly, even when using vision, tactile, or force sensors. Connector insertion is a common industrial task, requiring horizontal alignment errors to be compensated with minimal force, then sufficient force to be applied in the insertion direction. The ability to handle a variety of objects, achieve high speeds, and handle a wide range of object position variation is also desired. Soft grippers can allow the gripping of parts with variation in surface geometry, but often focus on gripping alone and may not be able to apply the assembly forces required. To achieve high-speed connector insertion, this paper proposes monolithic fingers with structured compliance and form-closure features. A finray-effect gripper is adapted to realize structured (i.e. directional) stiffness that allows high-speed mechanical search, self-alignment in insertion, and sufficient assembly force. The design of the finray ribs and fingertips is investigated, with a final design allowing plug insertion with a tolerance window of up to 7.5 mm at high speed.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 06:07:30 GMT" } ]
2023-01-23T00:00:00
[ [ "Hartisch", "Richard Matthias", "" ], [ "Haninger", "Kevin", "" ] ]
new_dataset
0.986012
2301.08517
Nicolas K\"uchler
Nicolas K\"uchler, Emanuel Opel, Hidde Lycklama, Alexander Viand, Anwar Hithnawi
Cohere: Privacy Management in Large Scale Systems
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The need for a privacy management layer in today's systems started to manifest with the emergence of new systems for privacy-preserving analytics and privacy compliance. As a result, we began to see many independent efforts emerge that try to provide system support for privacy. Recently, the scope of privacy solutions used in systems has expanded to encompass more complex techniques such as Differential Privacy (DP). The use of these solutions in large-scale systems imposes new challenges and requirements. Careful planning and coordination are necessary to ensure that privacy guarantees are maintained across a wide range of heterogeneous applications and data systems. This requires new solutions for managing shared application state and allocating scarce and non-replenishable privacy resources. In this paper, we introduce Cohere, a new data management system that simplifies the use of DP in large-scale systems. Cohere implements a unified interface that allows heterogeneous applications to operate on a unified view of users' data. Cohere further extends existing accounting systems with the ability to manage and optimally allocate shared privacy resources, i.e., budget, under complex preferences. We show that Cohere can effectively enable advanced privacy solutions in existing large-scale systems with minimal modifications to existing data management systems and with moderate overhead.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 11:27:02 GMT" } ]
2023-01-23T00:00:00
[ [ "Küchler", "Nicolas", "" ], [ "Opel", "Emanuel", "" ], [ "Lycklama", "Hidde", "" ], [ "Viand", "Alexander", "" ], [ "Hithnawi", "Anwar", "" ] ]
new_dataset
0.999649
2301.08571
Xudong Hong
Xudong Hong, Asad Sayeed, Khushboo Mehra, Vera Demberg, Bernt Schiele
Visual Writing Prompts: Character-Grounded Story Generation with Curated Image Sequences
Paper accepted by Transactions of the Association for Computational Linguistics (TACL). This is a pre-MIT Press publication version. 15 pages, 6 figures
null
null
null
cs.CL cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Current work on image-based story generation suffers from the fact that the existing image sequence collections do not have coherent plots behind them. We improve visual story generation by producing a new image-grounded dataset, Visual Writing Prompts (VWP). VWP contains almost 2K selected sequences of movie shots, each including 5-10 images. The image sequences are aligned with a total of 12K stories which were collected via crowdsourcing given the image sequences and a set of grounded characters from the corresponding image sequence. Our new image sequence collection and filtering process has allowed us to obtain stories that are more coherent and have more narrativity compared to previous work. We also propose a character-based story generation model driven by coherence as a strong baseline. Evaluations show that our generated stories are more coherent, visually grounded, and have more narrativity than stories generated with the current state-of-the-art model.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 13:38:24 GMT" } ]
2023-01-23T00:00:00
[ [ "Hong", "Xudong", "" ], [ "Sayeed", "Asad", "" ], [ "Mehra", "Khushboo", "" ], [ "Demberg", "Vera", "" ], [ "Schiele", "Bernt", "" ] ]
new_dataset
0.999185
2301.08604
Celine Jost
C\'eline Jost (CHART), Justin Debloos (CHART), Agn\`es Piquard-Kipffer (Grhapes), Caroline Barbot-Bouzit (Grhapes), Brigitte Le Pevedic (Lab-STICC)
PRIM project: what contributions for disabilities?
in French language, Handicap 2022, Jun 2022, Paris, France
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the PRIM project, we aim at giving people the power to create scenagrams (interaction scenarios between a human and digital devices) without the need to learn programming or to ask for computer scientists. In this project, software design follows an unconventional approach, far from classical codes, to embody human thinking (based on interactions) instead of computer logic (based on algorithms). The main idea rests on a new time representation using a PRIM-specific timeline instead of a standardized timeline. We evaluated acceptability and cognitive compatibility of this new timeline with 50 participants. Results are very promising. In this paper, we detail qualitative evaluation results about the interest of such software in the field of disability.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 14:34:00 GMT" } ]
2023-01-23T00:00:00
[ [ "Jost", "Céline", "", "CHART" ], [ "Debloos", "Justin", "", "CHART" ], [ "Piquard-Kipffer", "Agnès", "", "Grhapes" ], [ "Barbot-Bouzit", "Caroline", "", "Grhapes" ], [ "Pevedic", "Brigitte Le", "", "Lab-STICC" ] ]
new_dataset
0.956346
2301.08620
Lewin Stein
Mathias Lemke, Lewin Stein
Adjoint-Based Identification of Sound Sources for Sound Reinforcement and Source Localization
null
Notes on Numerical Fluid Mechanics and Multidisciplinary Design, vol 145. Springer (2021)
10.1007/978-3-030-52429-6_17
null
cs.SD eess.AS physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The identification of sound sources is a common problem in acoustics. Different parameters are sought, among these are signal and position of the sources. We present an adjoint-based approach for sound source identification, which employs computational aeroacoustic techniques. Two different applications are presented as a proof-of-concept: optimization of a sound reinforcement setup and the localization of (moving) sound sources.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 15:01:46 GMT" } ]
2023-01-23T00:00:00
[ [ "Lemke", "Mathias", "" ], [ "Stein", "Lewin", "" ] ]
new_dataset
0.996433
2301.08669
Moritz B\"ohle
Moritz B\"ohle, Mario Fritz, Bernt Schiele
Holistically Explainable Vision Transformers
null
null
null
null
cs.CV stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transformers increasingly dominate the machine learning landscape across many tasks and domains, which increases the importance of understanding their outputs. While their attention modules provide partial insight into their inner workings, the attention scores have been shown to be insufficient for explaining the models as a whole. To address this, we propose B-cos transformers, which inherently provide holistic explanations for their decisions. Specifically, we formulate each model component - such as the multi-layer perceptrons, attention layers, and the tokenisation module - to be dynamic linear, which allows us to faithfully summarise the entire transformer via a single linear transform. We apply our proposed design to Vision Transformers (ViTs) and show that the resulting models, dubbed Bcos-ViTs, are highly interpretable and perform competitively to baseline ViTs on ImageNet. Code will be made available soon.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 16:45:34 GMT" } ]
2023-01-23T00:00:00
[ [ "Böhle", "Moritz", "" ], [ "Fritz", "Mario", "" ], [ "Schiele", "Bernt", "" ] ]
new_dataset
0.995165
2301.08695
Chirag Shetty
Beomyeol Jeon, Linda Cai, Chirag Shetty, Pallavi Srivastava, Jintao Jiang, Xiaolan Ke, Yitao Meng, Cong Xie, Indranil Gupta
Baechi: Fast Device Placement of Machine Learning Graphs
Extended version of SoCC 2020 paper: https://dl.acm.org/doi/10.1145/3419111.3421302
null
null
null
cs.DC cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine Learning graphs (or models) can be challenging or impossible to train when either devices have limited memory, or models are large. To split the model across devices, learning-based approaches are still popular. While these result in model placements that train fast on data (i.e., low step times), learning-based model-parallelism is time-consuming, taking many hours or days to create a placement plan of operators on devices. We present the Baechi system, the first to adopt an algorithmic approach to the placement problem for running machine learning training graphs on small clusters of memory-constrained devices. We integrate our implementation of Baechi into two popular open-source learning frameworks: TensorFlow and PyTorch. Our experimental results using GPUs show that: (i) Baechi generates placement plans 654 X - 206K X faster than state-of-the-art learning-based approaches, and (ii) Baechi-placed model's step (training) time is comparable to expert placements in PyTorch, and only up to 6.2% worse than expert placements in TensorFlow. We prove mathematically that our two algorithms are within a constant factor of the optimal. Our work shows that, compared to learning-based approaches, algorithmic approaches can face different challenges in adapting to machine learning systems, but they also offer proven bounds and significant performance benefits.
[ { "version": "v1", "created": "Fri, 20 Jan 2023 17:26:37 GMT" } ]
2023-01-23T00:00:00
[ [ "Jeon", "Beomyeol", "" ], [ "Cai", "Linda", "" ], [ "Shetty", "Chirag", "" ], [ "Srivastava", "Pallavi", "" ], [ "Jiang", "Jintao", "" ], [ "Ke", "Xiaolan", "" ], [ "Meng", "Yitao", "" ], [ "Xie", "Cong", "" ], [ "Gupta", "Indranil", "" ] ]
new_dataset
0.986951
2204.11367
Bilal Farooq
Kimia Kamal and Bilal Farooq and Mahwish Mudassar and Arash Kalatian
Ordered-logit pedestrian stress model for traffic flow with automated vehicles
In: IEEE Intelligent Vehicles Symposium Workshops (XXIV Workshops), 2022, Aachen, Germany
null
10.1109/IV51971.2022.9827316
null
cs.HC stat.AP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An ordered-logit model is developed to study the effects of Automated Vehicles (AVs) in the traffic mix on the average stress level of a pedestrian when crossing an urban street at mid-block. Information collected from a galvanic skin resistance sensor and virtual reality experiments are transformed into a dataset with interpretable average stress levels (low, medium, and high) and geometric, traffic, and environmental conditions. Modelling results indicate a decrease in average stress level with the increase in the percentage of AVs in the traffic mix.
[ { "version": "v1", "created": "Sun, 24 Apr 2022 21:59:47 GMT" } ]
2023-01-20T00:00:00
[ [ "Kamal", "Kimia", "" ], [ "Farooq", "Bilal", "" ], [ "Mudassar", "Mahwish", "" ], [ "Kalatian", "Arash", "" ] ]
new_dataset
0.997418
2210.00714
John Ousterhout
John Ousterhout
It's Time to Replace TCP in the Datacenter
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In spite of its long and successful history, TCP is a poor transport protocol for modern datacenters. Every significant element of TCP, from its stream orientation to its expectation of in-order packet delivery, is wrong for the datacenter. It is time to recognize that TCP's problems are too fundamental and interrelated to be fixed; the only way to harness the full performance potential of modern networks is to introduce a new transport protocol into the datacenter. Homa demonstrates that it is possible to create a transport protocol that avoids all of TCP's problems. Although Homa is not API-compatible with TCP, it should be possible to bring it into widespread usage by integrating it with RPC frameworks.
[ { "version": "v1", "created": "Mon, 3 Oct 2022 05:00:24 GMT" }, { "version": "v2", "created": "Thu, 19 Jan 2023 00:58:26 GMT" } ]
2023-01-20T00:00:00
[ [ "Ousterhout", "John", "" ] ]
new_dataset
0.984113
2210.12197
Oren Sultan
Oren Sultan, Dafna Shahaf
Life is a Circus and We are the Clowns: Automatically Finding Analogies between Situations and Processes
Accepted to EMNLP 2022 main conference (long paper)
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Analogy-making gives rise to reasoning, abstraction, flexible categorization and counterfactual inference -- abilities lacking in even the best AI systems today. Much research has suggested that analogies are key to non-brittle systems that can adapt to new domains. Despite their importance, analogies received little attention in the NLP community, with most research focusing on simple word analogies. Work that tackled more complex analogies relied heavily on manually constructed, hard-to-scale input representations. In this work, we explore a more realistic, challenging setup: our input is a pair of natural language procedural texts, describing a situation or a process (e.g., how the heart works/how a pump works). Our goal is to automatically extract entities and their relations from the text and find a mapping between the different domains based on relational similarity (e.g., blood is mapped to water). We develop an interpretable, scalable algorithm and demonstrate that it identifies the correct mappings 87% of the time for procedural texts and 94% for stories from cognitive-psychology literature. We show it can extract analogies from a large dataset of procedural texts, achieving 79% precision (analogy prevalence in data: 3%). Lastly, we demonstrate that our algorithm is robust to paraphrasing the input texts.
[ { "version": "v1", "created": "Fri, 21 Oct 2022 18:54:17 GMT" }, { "version": "v2", "created": "Thu, 19 Jan 2023 12:09:23 GMT" } ]
2023-01-20T00:00:00
[ [ "Sultan", "Oren", "" ], [ "Shahaf", "Dafna", "" ] ]
new_dataset
0.99159
2212.03504
Xiang Li
Xiang Li, Junbo Yin, Botian Shi, Yikang Li, Ruigang Yang, Jianbing Shen
LWSIS: LiDAR-guided Weakly Supervised Instance Segmentation for Autonomous Driving
AAAI2023
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image instance segmentation is a fundamental research topic in autonomous driving, which is crucial for scene understanding and road safety. Advanced learning-based approaches often rely on the costly 2D mask annotations for training. In this paper, we present a more artful framework, LiDAR-guided Weakly Supervised Instance Segmentation (LWSIS), which leverages the off-the-shelf 3D data, i.e., Point Cloud, together with the 3D boxes, as natural weak supervisions for training the 2D image instance segmentation models. Our LWSIS not only exploits the complementary information in multimodal data during training, but also significantly reduces the annotation cost of the dense 2D masks. In detail, LWSIS consists of two crucial modules, Point Label Assignment (PLA) and Graph-based Consistency Regularization (GCR). The former module aims to automatically assign the 3D point cloud as 2D point-wise labels, while the latter further refines the predictions by enforcing geometry and appearance consistency of the multimodal data. Moreover, we conduct a secondary instance segmentation annotation on the nuScenes, named nuInsSeg, to encourage further research on multimodal perception tasks. Extensive experiments on the nuInsSeg, as well as the large-scale Waymo, show that LWSIS can substantially improve existing weakly supervised segmentation models by only involving 3D data during training. Additionally, LWSIS can also be incorporated into 3D object detectors like PointPainting to boost the 3D detection performance for free. The code and dataset are available at https://github.com/Serenos/LWSIS.
[ { "version": "v1", "created": "Wed, 7 Dec 2022 08:08:01 GMT" }, { "version": "v2", "created": "Thu, 19 Jan 2023 08:41:18 GMT" } ]
2023-01-20T00:00:00
[ [ "Li", "Xiang", "" ], [ "Yin", "Junbo", "" ], [ "Shi", "Botian", "" ], [ "Li", "Yikang", "" ], [ "Yang", "Ruigang", "" ], [ "Shen", "Jianbing", "" ] ]
new_dataset
0.979078
2212.11459
Aldrin Montana
Aldrin Montana and Yuanqing Xue and Jeff LeFevre and Carlos Maltzahn and Josh Stuart and Philip Kufeldt and Peter Alvaro
A Moveable Beast: Partitioning Data and Compute for Computational Storage
14 pages, 7 figures, submitted to SIGMOD 2023 updated acknowledgements
null
null
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
Over the years, hardware trends have introduced various heterogeneous compute units while also bringing network and storage bandwidths within an order of magnitude of memory subsystems. In response, developers have used increasingly exotic solutions to extract more performance from hardware; typically relying on static, design-time partitioning of their programs which cannot keep pace with storage systems that are layering compute units throughout deepening hierarchies of storage devices. We argue that dynamic, just-in-time partitioning of computation offers a solution for emerging data-intensive systems to overcome ever-growing data sizes in the face of stalled CPU performance and memory bandwidth. In this paper, we describe our prototype computational storage system (CSS), Skytether, that adopts a database perspective to utilize computational storage drives (CSDs). We also present MSG Express, a data management system for single-cell gene expression data that sits on top of Skytether. We discuss four design principles that guide the design of our CSS: support scientific applications; maximize utilization of storage, network, and memory bandwidth; minimize data movement; and enable flexible program execution on autonomous CSDs. Skytether is designed for the extra layer of indirection that CSDs introduce to a storage system, using decomposable queries to take a new approach to computational storage that has been imagined but not yet explored. In this paper, we evaluate: partition strategies, the overhead of function execution, and the performance of selection and projection. We expected a ~3-4x performance slowdown on the CSDs compared to a consumer-grade client CPU, but we observe an unexpected slowdown of ~15x; nevertheless, our evaluation results help us set anchor points in the design space for developing a cost model for decomposable queries and partitioning data across many CSDs.
[ { "version": "v1", "created": "Thu, 22 Dec 2022 02:37:21 GMT" }, { "version": "v2", "created": "Wed, 18 Jan 2023 20:18:33 GMT" } ]
2023-01-20T00:00:00
[ [ "Montana", "Aldrin", "" ], [ "Xue", "Yuanqing", "" ], [ "LeFevre", "Jeff", "" ], [ "Maltzahn", "Carlos", "" ], [ "Stuart", "Josh", "" ], [ "Kufeldt", "Philip", "" ], [ "Alvaro", "Peter", "" ] ]
new_dataset
0.981069
2301.02749
Jihong Zhu
Jihong Zhu, Michael Gienger, Giovanni Franzese, and Jens Kober
Do You Need a Hand? -- a Bimanual Robotic Dressing Assistance Scheme
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Developing physically assistive robots capable of dressing assistance has the potential to significantly improve the lives of the elderly and disabled population. However, most robotic dressing strategies consider a single robot only, which greatly limits the performance of the dressing assistance. In fact, healthcare professionals perform the task bimanually. Inspired by them, we propose a bimanual cooperative scheme for robotic dressing assistance. In the scheme, an interactive robot joins hands with the human, thus supporting/guiding the human in the dressing process, while the dressing robot performs the dressing task. We identify a key feature that affects the dressing action and propose an optimal strategy for the interactive robot using the feature. A dressing coordinate based on the posture of the arm is defined to better encode the dressing policy. We validate the interactive dressing scheme with extensive experiments as well as an ablation study. The experiment video is available on https://sites.google.com/view/bimanualassitdressing/home
[ { "version": "v1", "created": "Fri, 6 Jan 2023 23:39:54 GMT" }, { "version": "v2", "created": "Thu, 19 Jan 2023 14:53:51 GMT" } ]
2023-01-20T00:00:00
[ [ "Zhu", "Jihong", "" ], [ "Gienger", "Michael", "" ], [ "Franzese", "Giovanni", "" ], [ "Kober", "Jens", "" ] ]
new_dataset
0.968012
2301.05739
Mingzhou Yang
Yan Li (1), Mingzhou Yang (1), Matthew Eagon (1), Majid Farhadloo (1), Yiqun Xie (2), William F. Northrop (1), Shashi Shekhar (1) ((1) University of Minnesota, (2) University of Maryland)
Eco-PiNN: A Physics-informed Neural Network for Eco-toll Estimation
Full version of the paper accepted for the SDM23 conference; Yan Li and Mingzhou Yang contributed equally to this paper
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The eco-toll estimation problem quantifies the expected environmental cost (e.g., energy consumption, exhaust emissions) for a vehicle to travel along a path. This problem is important for societal applications such as eco-routing, which aims to find paths with the lowest exhaust emissions or energy need. The challenges of this problem are three-fold: (1) the dependence of a vehicle's eco-toll on its physical parameters; (2) the lack of access to data with eco-toll information; and (3) the influence of contextual information (i.e. the connections of adjacent segments in the path) on the eco-toll of road segments. Prior work on eco-toll estimation has mostly relied on pure data-driven approaches and has high estimation errors given the limited training data. To address these limitations, we propose a novel Eco-toll estimation Physics-informed Neural Network framework (Eco-PiNN) using three novel ideas, namely, (1) a physics-informed decoder that integrates the physical laws of the vehicle engine into the network, (2) an attention-based contextual information encoder, and (3) a physics-informed regularization to reduce overfitting. Experiments on real-world heavy-duty truck data show that the proposed method can greatly improve the accuracy of eco-toll estimation compared with state-of-the-art methods.
[ { "version": "v1", "created": "Fri, 13 Jan 2023 19:34:18 GMT" }, { "version": "v2", "created": "Thu, 19 Jan 2023 03:21:34 GMT" } ]
2023-01-20T00:00:00
[ [ "Li", "Yan", "" ], [ "Yang", "Mingzhou", "" ], [ "Eagon", "Matthew", "" ], [ "Farhadloo", "Majid", "" ], [ "Xie", "Yiqun", "" ], [ "Northrop", "William F.", "" ], [ "Shekhar", "Shashi", "" ] ]
new_dataset
0.998593
2301.05804
Ross Greer
Ross Greer, Akshay Gopalkrishnan, Nachiket Deo, Akshay Rangesh, Mohan Trivedi
Salient Sign Detection In Safe Autonomous Driving: AI Which Reasons Over Full Visual Context
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Detecting road traffic signs and accurately determining how they can affect the driver's future actions is a critical task for safe autonomous driving systems. However, various traffic signs in a driving scene have an unequal impact on the driver's decisions, making detecting the salient traffic signs a more important task. Our research addresses this issue, constructing a traffic sign detection model which emphasizes performance on salient signs, or signs that influence the decisions of a driver. We define a traffic sign salience property and use it to construct the LAVA Salient Signs Dataset, the first traffic sign dataset that includes an annotated salience property. Next, we use a custom salience loss function, Salience-Sensitive Focal Loss, to train a Deformable DETR object detection model in order to emphasize stronger performance on salient signs. Results show that a model trained with Salience-Sensitive Focal Loss outperforms a model trained without, with regards to recall of both salient signs and all signs combined. Further, the performance margin on salient signs compared to all signs is largest for the model trained with Salience-Sensitive Focal Loss.
[ { "version": "v1", "created": "Sat, 14 Jan 2023 01:47:09 GMT" }, { "version": "v2", "created": "Wed, 18 Jan 2023 19:48:16 GMT" } ]
2023-01-20T00:00:00
[ [ "Greer", "Ross", "" ], [ "Gopalkrishnan", "Akshay", "" ], [ "Deo", "Nachiket", "" ], [ "Rangesh", "Akshay", "" ], [ "Trivedi", "Mohan", "" ] ]
new_dataset
0.999488
2301.07747
Ond\v{r}ej Leng\'al
Yu-Fang Chen, Kai-Min Chung, Ond\v{r}ej Leng\'al, Jyun-Ao Lin, Wei-Lun Tsai, Di-De Yen
An Automata-based Framework for Verification and Bug Hunting in Quantum Circuits (Technical Report)
null
null
null
null
cs.LO cs.FL
http://creativecommons.org/licenses/by/4.0/
We introduce a new paradigm for analysing and finding bugs in quantum circuits. In our approach, the problem is given by a triple $\{P\}\,C\,\{Q\}$ and the question is whether, given a set $P$ of quantum states on the input of a circuit $C$, the set of quantum states on the output is equal to (or included in) a set $Q$. While this is not suitable to specify, e.g., functional correctness of a quantum circuit, it is sufficient to detect many bugs in quantum circuits. We propose a technique based on tree automata to compactly represent sets of quantum states and develop transformers to implement the semantics of quantum gates over this representation. Our technique computes with an algebraic representation of quantum states, avoiding the inaccuracy of working with floating-point numbers. We implemented the proposed approach in a prototype tool and evaluated its performance against various benchmarks from the literature. The evaluation shows that our approach is quite scalable, e.g., we managed to verify a large circuit with 40 qubits and 141,527 gates, or catch bugs injected into a circuit with 320 qubits and 1,758 gates, where all tools we compared with failed. In addition, our work establishes a connection between quantum program verification and automata, opening new possibilities to exploit the richness of automata theory and automata-based verification in the world of quantum computing.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 19:26:01 GMT" } ]
2023-01-20T00:00:00
[ [ "Chen", "Yu-Fang", "" ], [ "Chung", "Kai-Min", "" ], [ "Lengál", "Ondřej", "" ], [ "Lin", "Jyun-Ao", "" ], [ "Tsai", "Wei-Lun", "" ], [ "Yen", "Di-De", "" ] ]
new_dataset
0.983391
2301.07757
Lucas Silva
Lucas de Oliveira Silva
Freeze-Tag is NP-Hard in 3D with $L_1$ distance
null
null
null
null
cs.CG cs.CC
http://creativecommons.org/licenses/by/4.0/
Arkin et al. in 2002 introduced a scheduling-like problem called the Freeze-Tag Problem (FTP), motivated by robot swarm activation. The input consists of the locations of n mobile punctual robots in some metric space or graph. Only one begins "active", while the others are initially "frozen". All active robots can move at unit speed and, upon reaching a frozen one's location, activate it. The goal is to activate all the robots in the minimum amount of time, the so-called makespan. Until 2017 the hardness of this problem in metric spaces was still open, but then Yu et al. proved it to be NP-Hard in the Euclidean plane, and in the same year, Demaine and Roudoy demonstrated that the FTP is also hard in 3D with any $L_p$ distance (with p > 1). However, we still don't know whether Demaine's and Roudoy's result could be translated to the plane. This paper fills the p=1 gap by showing that the FTP is NP-Hard in 3D with $L_1$ distance.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 19:45:31 GMT" } ]
2023-01-20T00:00:00
[ [ "Silva", "Lucas de Oliveira", "" ] ]
new_dataset
0.998745
2301.07775
Zhaoxu Zhang
Zhaoxu Zhang, Robert Winn, Yu Zhao, Tingting Yu and William G. J. Halfond
Automatically Reproducing Android Bug Reports Using Natural Language Processing and Reinforcement Learning
Accepted to the 32nd ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2023)
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As part of the process of resolving issues submitted by users via bug reports, Android developers attempt to reproduce and observe the failures described by the bug report. Due to the low-quality of bug reports and the complexity of modern apps, the reproduction process is non-trivial and time-consuming. Therefore, automatic approaches that can help reproduce Android bug reports are in great need. However, current approaches to help developers automatically reproduce bug reports are only able to handle limited forms of natural language text and struggle to successfully reproduce failures for which the initial bug report had missing or imprecise steps. In this paper, we introduce a new fully automated Android bug report reproduction approach that addresses these limitations. Our approach accomplishes this by leveraging natural language process techniques to more holistically and accurately analyze the natural language in Android bug reports and designing new techniques, based on reinforcement learning, to guide the search for successful reproducing steps. We conducted an empirical evaluation of our approach on 77 real world bug reports. Our approach achieved 67% precision and 77% recall in accurately extracting reproduction steps from bug reports, and reproduced 74% of the bug reports, significantly outperforming state of the art techniques.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 20:32:49 GMT" } ]
2023-01-20T00:00:00
[ [ "Zhang", "Zhaoxu", "" ], [ "Winn", "Robert", "" ], [ "Zhao", "Yu", "" ], [ "Yu", "Tingting", "" ], [ "Halfond", "William G. J.", "" ] ]
new_dataset
0.998541
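The first stage described in the abstract above turns free-form report text into reproduction steps. A deliberately crude, keyword-based sketch of that extraction task follows; the verb table and the (action, target) output format are hypothetical, and the paper's NLP pipeline is far more sophisticated:

```python
import re

# Hypothetical action vocabulary -- invented for illustration only.
ACTIONS = {"tap": "CLICK", "click": "CLICK", "press": "CLICK",
           "type": "INPUT", "enter": "INPUT",
           "swipe": "SWIPE", "scroll": "SWIPE"}

def extract_steps(report):
    """Map each sentence of a bug report to a coarse (action, target)
    tuple; sentences without a recognized action verb are skipped."""
    steps = []
    for sentence in re.split(r"[.\n]+", report.lower()):
        for verb, action in ACTIONS.items():
            if re.search(rf"\b{verb}\b", sentence):
                # crude target guess: the text following the verb
                target = sentence.split(verb, 1)[1].strip(" \"'")
                steps.append((action, target))
                break
    return steps
```

The hard cases that motivate the paper's reinforcement-learning search are exactly the sentences such a keyword pass misses or garbles, e.g. missing steps implied by context.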
2301.07889
Rizwan Patan
Rizwan Patan, Reza M. Parizi, Mohsen Dorodchi, Seyedamin Pouriyeh, Audrey Rorrer
Blockchain Education: Current State, Limitations, Career Scope, Challenges, and Future Directions
null
null
null
null
cs.CY cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Blockchain is a revolutionary technology, and various industries (such as IT, education, business, banking, and many others) have started to capitalize on it. Currently, adoption of blockchain education in higher education institutions (HEIs) remains limited in academic programs and curricula. In addition, HEIs must make substantial changes to their teaching and learning methods to educate learners about blockchain technology and its applications and to meet the current industry workforce demand. Due to a lack of academic programs and courses, students nowadays rely on online resources and pay non-academic organizations high fees. This paper provides a comprehensive survey of the current state of the art in blockchain education by reviewing the different academic programs and industry workforce demand. In addition, blockchain application trends, including market growth and demands, are discussed. Moreover, the blockchain career scope for students of different disciplines is examined.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 05:23:32 GMT" } ]
2023-01-20T00:00:00
[ [ "Patan", "Rizwan", "" ], [ "Parizi", "Reza M.", "" ], [ "Dorodchi", "Mohsen", "" ], [ "Pouriyeh", "Seyedamin", "" ], [ "Rorrer", "Audrey", "" ] ]
new_dataset
0.985926
2301.07947
Boris Mocialov
Boris Mocialov, Eirik Eythorsson, Reza Parseh, Hoang Tran, Vegard Flovik
Point Cloud Data Simulation and Modelling with Aize Workspace
Extended abstract, Northern Lights Deep Learning Conference, 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This work examines data models often used in digital twins and presents preliminary results, specifically from surface reconstruction and semantic segmentation models trained using simulated data. This work is expected to serve as groundwork for future endeavours in data contextualisation inside a digital twin.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 08:47:31 GMT" } ]
2023-01-20T00:00:00
[ [ "Mocialov", "Boris", "" ], [ "Eythorsson", "Eirik", "" ], [ "Parseh", "Reza", "" ], [ "Tran", "Hoang", "" ], [ "Flovik", "Vegard", "" ] ]
new_dataset
0.976862
2301.07967
Lara Bargmann
Lara Bargmann and Heike Wehrheim
View-Based Axiomatic Reasoning for PSO (Extended Version)
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
Weak memory models describe the semantics of concurrent programs on modern multi-core architectures. Reasoning techniques for concurrent programs, like Owicki-Gries-style proof calculi, have to be based on such a semantics, and hence need to be freshly developed for every new memory model. Recently, a more uniform approach to reasoning has been proposed which builds correctness proofs on the basis of a number of core axioms. This makes it possible to prove program correctness independently of memory models, and to transfer proofs to specific memory models by showing that they instantiate all axioms required in a proof. The axiomatisation is built on the notion of thread views as first-class elements in the semantics. In this paper, we investigate the applicability of this form of axiomatic reasoning to the Partial Store Order (PSO) memory model. As the standard semantics for PSO is not based on views, we first provide a view-based semantics for PSO and prove it to coincide with the standard semantics. We then show that the new view-based semantics satisfies all but one axiom. The missing axiom refers to message-passing (MP) abilities of memory models, which PSO does not guarantee. As a consequence, only proofs without usage of the MP axiom are transferable to PSO. We illustrate the reasoning technique by proving correctness of a litmus test employing a fence to ensure message passing.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 09:44:04 GMT" } ]
2023-01-20T00:00:00
[ [ "Bargmann", "Lara", "" ], [ "Wehrheim", "Heike", "" ] ]
new_dataset
0.991327
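The message-passing (MP) behaviour that the abstract above says PSO fails to guarantee can be made concrete with a toy store-buffer simulation (our own illustration, unrelated to the paper's view-based semantics): under PSO, per-location buffers may drain out of program order, so a reader can see the flag y = 1 before the data x = 1.

```python
def interleavings(a, b):
    """All interleavings of two event sequences, preserving each
    sequence's internal order."""
    if not a:
        yield list(b)
        return
    if not b:
        yield list(a)
        return
    for rest in interleavings(a[1:], b):
        yield [a[0]] + rest
    for rest in interleavings(a, b[1:]):
        yield [b[0]] + rest

def mp_outcomes(pso=True):
    """Outcomes (r1, r2) of the MP litmus test in a toy buffer model.

    Thread 0 runs x = 1; y = 1, both stores going into store buffers.
    Thread 1 runs r1 = y; r2 = x, reading main memory.  Under TSO the
    single FIFO buffer drains x before y; under PSO the per-location
    buffers may drain in either order, which is what lets the
    message-passing outcome (1, 0) appear.
    """
    drain_orders = [["x", "y"]] + ([["y", "x"]] if pso else [])
    outcomes = set()
    for order in drain_orders:
        writes = [("drain", v) for v in order]
        reads = [("read", "y"), ("read", "x")]
        for sched in interleavings(writes, reads):
            mem = {"x": 0, "y": 0}
            seen = {}
            for kind, var in sched:
                if kind == "drain":
                    mem[var] = 1
                else:
                    seen[var] = mem[var]
            outcomes.add((seen["y"], seen["x"]))
    return outcomes
```

The outcome (r1, r2) = (1, 0) shows up only with pso=True, which is why a fence is needed in the litmus test the paper verifies.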
2301.07978
Spyridon Kantarelis
Ioannis Dimolitsas, Spyridon Kantarelis, Afroditi Fouka
SpotHitPy: A Study For ML-Based Song Hit Prediction Using Spotify
null
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this study, we approached the Hit Song Prediction problem, which aims to predict which songs will become Billboard hits. We gathered a dataset of nearly 18500 hit and non-hit songs and extracted their audio features using the Spotify Web API. We tested four machine-learning models on our dataset. We were able to predict the Billboard success of a song with approximately 86% accuracy. The most successful algorithms were Random Forest and Support Vector Machine.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 10:13:52 GMT" } ]
2023-01-20T00:00:00
[ [ "Dimolitsas", "Ioannis", "" ], [ "Kantarelis", "Spyridon", "" ], [ "Fouka", "Afroditi", "" ] ]
new_dataset
0.971038
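A minimal sketch of the classification setup the abstract above describes, using scikit-learn's SVM and Random Forest; the feature values are synthetic stand-ins for the real Spotify audio features (danceability, energy, etc.), invented here so the demo is self-contained:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

def train_hit_classifier(features, labels, model="svm"):
    """Fit a hit/non-hit classifier on audio-feature vectors and
    return the fitted model together with its held-out accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.25, random_state=0)
    clf = SVC() if model == "svm" else RandomForestClassifier(random_state=0)
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)

# Synthetic, clearly separable stand-in for Spotify audio features.
rng = np.random.default_rng(0)
hits = rng.normal(0.8, 0.05, size=(200, 4))
non_hits = rng.normal(0.3, 0.05, size=(200, 4))
X = np.vstack([hits, non_hits])
y = np.array([1] * 200 + [0] * 200)
_, acc = train_hit_classifier(X, y)
```

On real, overlapping feature distributions the reported ~86% accuracy is the realistic regime; the synthetic data here is separable, so both models score far higher.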
2301.07996
Warley F. R. Ribeiro
Warley F. R. Ribeiro, Kentaro Uno, Masazumi Imai, Koki Murase, Kazuya Yoshida
RAMP: Reaction-Aware Motion Planning of Multi-Legged Robots for Locomotion in Microgravity
Submitted version of paper accepted for presentation at the 2023 IEEE International Conference on Robotics and Automation (ICRA)
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Robotic mobility in microgravity is necessary to expand human utilization and exploration of outer space. Bio-inspired multi-legged robots are a possible solution for safe and precise locomotion. However, a dynamic motion of a robot in microgravity can lead to failures due to gripper detachment caused by excessive motion reactions. We propose a novel Reaction-Aware Motion Planning (RAMP) to improve locomotion safety in microgravity, decreasing the risk of losing contact with the terrain surface by reducing the robot's momentum change. RAMP minimizes the swing momentum with a Low-Reaction Swing Trajectory (LRST) while distributing this momentum to the whole body, ensuring zero velocity for the supporting grippers and minimizing motion reactions. We verify the proposed approach with dynamic simulations indicating the capability of RAMP to generate a safe motion without detachment of the supporting grippers, resulting in the robot reaching its specified location. We further validate RAMP in experiments with an air-floating system, demonstrating a significant reduction in reaction forces and improved mobility in microgravity.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 10:54:49 GMT" } ]
2023-01-20T00:00:00
[ [ "Ribeiro", "Warley F. R.", "" ], [ "Uno", "Kentaro", "" ], [ "Imai", "Masazumi", "" ], [ "Murase", "Koki", "" ], [ "Yoshida", "Kazuya", "" ] ]
new_dataset
0.961831
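The momentum-distribution idea in the abstract above can be illustrated in drastically simplified form (our own toy model, not the paper's planner): compute the counter-velocity the robot's main body would need so that the swing links' momentum is absorbed by whole-body motion instead of being reacted through the supporting grippers.

```python
import numpy as np

def body_counter_velocity(link_masses, link_velocities, body_mass):
    """Velocity the robot's main body must take so that the total
    linear momentum of body plus swing links is zero, i.e. the swing
    momentum is absorbed by whole-body motion rather than reacted
    through the supporting grippers."""
    swing_momentum = sum(m * np.asarray(v, dtype=float)
                         for m, v in zip(link_masses, link_velocities))
    return -swing_momentum / body_mass
```

For example, two swing links of masses 1 and 2 kg moving at 1 and 0.5 m/s along x carry 2 kg·m/s of momentum, so a 4 kg body must move at -0.5 m/s along x to cancel it.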
2301.08107
Ryosei Kojima
Ryosei Kojima, Shitara Akihisa, Tatsuki Fushimi, Ryogo Niwa, Atushi Shinoda, Ryo Iijima, Kengo Tanaka, Sayan Sarcar, and Yoichi Ochiai
SHITARA: Sending Haptic Induced Touchable Alarm by Ring-shaped Air vortex
30 pages, 22 figures
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Social interaction begins with the other person's attention, but it is difficult for a d/Deaf or hard-of-hearing (DHH) person to notice the initial conversation cues. Wearable or visual devices have been proposed previously. However, these devices are cumbersome to wear or must stay within the DHH person's vision. In this study, we propose SHITARA, a novel accessibility method with air vortex rings that provides a non-contact haptic cue for a DHH person. We have developed a proof-of-concept device and determined the air vortex ring's accuracy, noticeability and comfort when it hits a DHH person's hair. Though the strength, accuracy, and noticeability of air vortex rings decrease as the distance between the air vortex ring generator and the user increases, we have demonstrated that the air vortex ring is noticeable up to 2.5 meters away. Moreover, the optimum strength was found for each distance from a DHH person.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 14:54:55 GMT" } ]
2023-01-20T00:00:00
[ [ "Kojima", "Ryosei", "" ], [ "Akihisa", "Shitara", "" ], [ "Fushimi", "Tatsuki", "" ], [ "Niwa", "Ryogo", "" ], [ "Shinoda", "Atushi", "" ], [ "Iijima", "Ryo", "" ], [ "Tanaka", "Kengo", "" ], [ "Sarcar", "Sayan", "" ], [ "Ochiai", "Yoichi", "" ] ]
new_dataset
0.999732
2301.08193
Kimiaki Shirahama
Zihao Chen, Hisashi Handa, Kimiaki Shirahama
JCSE: Contrastive Learning of Japanese Sentence Embeddings and Its Applications
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contrastive learning is widely used for sentence representation learning. Despite this prevalence, most studies have focused exclusively on English and few concern domain adaptation for domain-specific downstream tasks, especially for low-resource languages like Japanese, which are characterized by insufficient target domain data and the lack of a proper training strategy. To overcome this, we propose a novel Japanese sentence representation framework, JCSE (derived from ``Contrastive learning of Sentence Embeddings for Japanese''), that creates training data by generating sentences and synthesizing them with sentences available in a target domain. Specifically, a pre-trained data generator is finetuned to a target domain using our collected corpus. It is then used to generate contradictory sentence pairs that are used in contrastive learning for adapting a Japanese language model to a specific task in the target domain. Another problem of Japanese sentence representation learning is the difficulty of evaluating existing embedding methods due to the lack of benchmark datasets. Thus, we establish a comprehensive Japanese Semantic Textual Similarity (STS) benchmark on which various embedding models are evaluated. Based on this benchmark result, multiple embedding methods are chosen and compared with JCSE on two domain-specific tasks, STS in a clinical domain and information retrieval in an educational domain. The results show that JCSE achieves significant performance improvement surpassing direct transfer and other training strategies. This empirically demonstrates JCSE's effectiveness and practicability for downstream tasks of a low-resource language.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 17:41:46 GMT" } ]
2023-01-20T00:00:00
[ [ "Chen", "Zihao", "" ], [ "Handa", "Hisashi", "" ], [ "Shirahama", "Kimiaki", "" ] ]
new_dataset
0.990398
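The abstract above builds on contrastive learning of sentence embeddings. A compact NumPy sketch of the SimCSE-style in-batch contrastive objective such frameworks use follows; the temperature value is an assumption, and the paper's full training setup (data generation, fine-tuning) is not reproduced here:

```python
import numpy as np

def info_nce(anchors, positives, temperature=0.05):
    """SimCSE-style in-batch contrastive loss: row i of `positives` is
    the positive for row i of `anchors`; every other row in the batch
    serves as a negative.  Inputs are (N, D) embedding matrices."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # cosine similarities
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))            # cross-entropy on diagonal
```

The loss is small when each anchor is most similar to its own positive and large when the pairing is scrambled, which is exactly the signal used to adapt an embedding model to a target domain.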
2301.08237
Xizi Wang
Xizi Wang, Feng Cheng, Gedas Bertasius, David Crandall
LoCoNet: Long-Short Context Network for Active Speaker Detection
tech report
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Active Speaker Detection (ASD) aims to identify who is speaking in each frame of a video. ASD reasons from audio and visual information from two contexts: long-term intra-speaker context and short-term inter-speaker context. Long-term intra-speaker context models the temporal dependencies of the same speaker, while short-term inter-speaker context models the interactions of speakers in the same scene. These two contexts are complementary to each other and can help infer the active speaker. Motivated by these observations, we propose LoCoNet, a simple yet effective Long-Short Context Network that models the long-term intra-speaker context and short-term inter-speaker context. We use self-attention to model long-term intra-speaker context due to its effectiveness in modeling long-range dependencies, and convolutional blocks that capture local patterns to model short-term inter-speaker context. Extensive experiments show that LoCoNet achieves state-of-the-art performance on multiple datasets, achieving an mAP of 95.2% (+1.1%) on AVA-ActiveSpeaker, 68.1% (+22%) on the Columbia dataset, 97.2% (+2.8%) on the Talkies dataset and 59.7% (+8.0%) on the Ego4D dataset. Moreover, in challenging cases where multiple speakers are present, or the face of the active speaker is much smaller than the other faces in the same scene, LoCoNet outperforms previous state-of-the-art methods by 3.4% on the AVA-ActiveSpeaker dataset. The code will be released at https://github.com/SJTUwxz/LoCoNet_ASD.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 18:54:43 GMT" } ]
2023-01-20T00:00:00
[ [ "Wang", "Xizi", "" ], [ "Cheng", "Feng", "" ], [ "Bertasius", "Gedas", "" ], [ "Crandall", "David", "" ] ]
new_dataset
0.996997
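The two context mechanisms contrasted in the abstract above can be caricatured in a few lines of NumPy: global self-attention, where every frame sees every other frame (long-term intra-speaker context), versus a local temporal convolution, where each frame only sees its neighbours (short-term inter-speaker context). These are toy stand-ins, not LoCoNet's actual layers:

```python
import numpy as np

def self_attention(x):
    """Scaled dot-product self-attention over a (T, D) sequence: every
    frame attends to every other frame, capturing long-range temporal
    dependencies."""
    d = x.shape[1]
    scores = x @ x.T / np.sqrt(d)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)   # softmax over frames
    return w @ x

def temporal_conv(x, kernel_size=3):
    """Local temporal smoothing (a moving average, the simplest
    convolution): each output frame depends only on a small window of
    neighbouring input frames."""
    pad = kernel_size // 2
    padded = np.pad(x, ((pad, pad), (0, 0)), mode="edge")
    return np.stack([padded[t:t + kernel_size].mean(axis=0)
                     for t in range(x.shape[0])])
```

Both maps preserve the (T, D) shape, so in an architecture like the one described they can be stacked or run in parallel over the same frame features.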
2301.08245
Pierluigi Zama Ramirez
Pierluigi Zama Ramirez, Alex Costanzino, Fabio Tosi, Matteo Poggi, Samuele Salti, Stefano Mattoccia, Luigi Di Stefano
Booster: a Benchmark for Depth from Images of Specular and Transparent Surfaces
Extension of the paper "Open Challenges in Deep Stereo: the Booster Dataset" that was presented at CVPR 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating depth from images nowadays yields outstanding results, both in terms of in-domain accuracy and generalization. However, we identify two main challenges that remain open in this field: dealing with non-Lambertian materials and effectively processing high-resolution images. To this end, we propose a novel dataset that includes accurate and dense ground-truth labels at high resolution, featuring scenes containing several specular and transparent surfaces. Our acquisition pipeline leverages a novel deep space-time stereo framework, enabling easy and accurate labeling with sub-pixel precision. The dataset is composed of 606 samples collected in 85 different scenes; each sample includes both a high-resolution pair (12 Mpx) and an unbalanced stereo pair (Left: 12 Mpx, Right: 1.1 Mpx). Additionally, we provide manually annotated material segmentation masks and 15K unlabeled samples. We divide the dataset into a training set and two testing sets, the latter devoted to the evaluation of stereo and monocular depth estimation networks, respectively, to highlight the open challenges and future research directions in this field.
[ { "version": "v1", "created": "Thu, 19 Jan 2023 18:59:28 GMT" } ]
2023-01-20T00:00:00
[ [ "Ramirez", "Pierluigi Zama", "" ], [ "Costanzino", "Alex", "" ], [ "Tosi", "Fabio", "" ], [ "Poggi", "Matteo", "" ], [ "Salti", "Samuele", "" ], [ "Mattoccia", "Stefano", "" ], [ "Di Stefano", "Luigi", "" ] ]
new_dataset
0.999548