Dataset schema (column: type, observed range):
- id: string, length 9-10
- submitter: string, length 2-52
- authors: string, length 4-6.51k
- title: string, length 4-246
- comments: string, length 1-523
- journal-ref: string, length 4-345
- doi: string, length 11-120
- report-no: string, length 2-243
- categories: string, length 5-98
- license: string, 9 classes
- abstract: string, length 33-3.33k
- versions: list
- update_date: timestamp[s]
- authors_parsed: list
- prediction: string, 1 class
- probability: float64, range 0.95-1
2201.07188
Philip Lazos
Aggelos Kiayias, Philip Lazos
SoK: Blockchain Governance
null
null
null
null
cs.CR cs.CY cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Blockchain systems come with a promise of decentralization that often stumbles on a roadblock when key decisions about modifying the software codebase need to be made. This is attested by the fact that both of the two major cryptocurrencies, Bitcoin and Ethereum, have undergone hard forks that resulted in the creation of alternative systems, creating confusion and opportunities for fraudulent activities. These events, and numerous others, underscore the importance of blockchain governance, namely the set of processes that blockchain platforms utilize in order to perform decision-making and converge to a widely accepted direction for the system to evolve. While a rich topic of study in other areas, governance of blockchain platforms lacks a well-established set of methods and practices that are adopted industry-wide. This makes the topic of blockchain governance a fertile domain for the thorough systematization that we undertake in this work. We start by distilling a comprehensive array of properties for sound governance systems drawn from academic sources as well as grey literature of election systems and blockchain white papers. These are divided into seven categories: confidentiality, verifiability, accountability, sustainability, Pareto efficiency, suffrage, and liveness, which capture the whole spectrum of desiderata of governance systems. We proceed to classify ten well-documented blockchain systems. While all properties are satisfied, at least partially, by at least one system, no system satisfies most of them. Our work lays out a foundation for assessing blockchain governance processes. While it highlights shortcomings and deficiencies in currently deployed systems, it can also be a catalyst for improving these processes to the highest possible standard with appropriate trade-offs, something direly needed for blockchain platforms to operate effectively in the long term.
[ { "version": "v1", "created": "Tue, 18 Jan 2022 18:38:26 GMT" }, { "version": "v2", "created": "Wed, 19 Jan 2022 18:51:20 GMT" }, { "version": "v3", "created": "Tue, 17 May 2022 17:33:36 GMT" }, { "version": "v4", "created": "Wed, 18 Jan 2023 18:46:18 GMT" } ]
2023-01-19T00:00:00
[ [ "Kiayias", "Aggelos", "" ], [ "Lazos", "Philip", "" ] ]
new_dataset
0.953007
2211.09800
Aleksander Holynski
Tim Brooks, Aleksander Holynski, Alexei A. Efros
InstructPix2Pix: Learning to Follow Image Editing Instructions
Project page with code: https://www.timothybrooks.com/instruct-pix2pix
null
null
null
cs.CV cs.AI cs.CL cs.GR cs.LG
http://creativecommons.org/licenses/by/4.0/
We propose a method for editing images from human instructions: given an input image and a written instruction that tells the model what to do, our model follows these instructions to edit the image. To obtain training data for this problem, we combine the knowledge of two large pretrained models -- a language model (GPT-3) and a text-to-image model (Stable Diffusion) -- to generate a large dataset of image editing examples. Our conditional diffusion model, InstructPix2Pix, is trained on our generated data, and generalizes to real images and user-written instructions at inference time. Since it performs edits in the forward pass and does not require per-example fine-tuning or inversion, our model edits images quickly, in a matter of seconds. We show compelling editing results for a diverse collection of input images and written instructions.
[ { "version": "v1", "created": "Thu, 17 Nov 2022 18:58:43 GMT" }, { "version": "v2", "created": "Wed, 18 Jan 2023 17:31:52 GMT" } ]
2023-01-19T00:00:00
[ [ "Brooks", "Tim", "" ], [ "Holynski", "Aleksander", "" ], [ "Efros", "Alexei A.", "" ] ]
new_dataset
0.999647
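The InstructPix2Pix record above describes editing in a single forward pass at inference time. As a hedged illustration, here is a minimal sketch of running such a pipeline through the publicly released weights via the Hugging Face diffusers library; the instruction text, file names, and guidance values are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch, assuming the released "timbrooks/instruct-pix2pix" weights
# and the diffusers pipeline API; input/output file names are hypothetical.
import torch
from diffusers import StableDiffusionInstructPix2PixPipeline
from PIL import Image

pipe = StableDiffusionInstructPix2PixPipeline.from_pretrained(
    "timbrooks/instruct-pix2pix", torch_dtype=torch.float16
).to("cuda")

image = Image.open("input.jpg").convert("RGB")  # hypothetical input image
edited = pipe(
    "make it look like a watercolor painting",  # the written instruction
    image=image,
    num_inference_steps=20,
    image_guidance_scale=1.5,  # fidelity to the input image vs. edit strength
).images[0]
edited.save("edited.jpg")
```

The image_guidance_scale knob trades faithfulness to the input image against the strength of the edit; the project page linked in the record's comments field documents the released model and code.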
2212.07601
Chase Mathews
Evangelos Chatziandreou, Chase W. Mathews, David J. Braun
Design of a Parallel Elastic Actuator with a Continuously-Adjustable Equilibrium Position
6 pages, 5 figures
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we present an adjustable-equilibrium parallel elastic actuator (AE-PEA). The actuator consists of a motor, an equilibrium adjusting mechanism, and a spring arranged into a cylindrical geometry, similar to a motor-gearbox assembly. The novel component of the actuator is the equilibrium adjusting mechanism which (i) does not require external energy to maintain the equilibrium position of the actuator even if the spring is deformed and (ii) enables equilibrium position control with low energy cost by rotating the spring while keeping it undeformed. Adjustable equilibrium parallel elastic actuators resolve the main limitation of parallel elastic actuators (PEAs) by enabling energy-efficient operation at different equilibrium positions, instead of being limited to energy-efficient operation at a single equilibrium position. We foresee the use of AE-PEAs in industrial robots, mobile robots, exoskeletons, and prostheses, where efficient oscillatory motion and gravity compensation at different positions are required.
[ { "version": "v1", "created": "Thu, 15 Dec 2022 03:06:43 GMT" }, { "version": "v2", "created": "Tue, 17 Jan 2023 21:05:03 GMT" } ]
2023-01-19T00:00:00
[ [ "Chatziandreou", "Evangelos", "" ], [ "Mathews", "Chase W.", "" ], [ "Braun", "David J.", "" ] ]
new_dataset
0.998255
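The actuator abstract above argues that adjusting the equilibrium position lets the parallel spring, rather than the motor, supply the load torque at a chosen posture. Below is a toy numerical sketch of that argument, assuming a simple linear spring law and illustrative values, not the authors' mechanism model.

```python
# Toy model of why an adjustable equilibrium helps: a parallel spring
# contributes torque k*(q_eq - q), so moving q_eq toward the task posture
# lets the spring, not the motor, supply most of the load. The spring law
# and all numbers are illustrative assumptions, not the paper's model.
def motor_torque(load: float, q: float, q_eq: float, k: float = 2.0) -> float:
    """Torque the motor must add once the parallel spring carries its share."""
    spring = k * (q_eq - q)  # spring torque pulling the joint toward q_eq
    return load - spring

# Holding a 2.0 Nm load at q = 1.0 rad with a fixed equilibrium at 0 rad:
print(motor_torque(load=2.0, q=1.0, q_eq=0.0))  # 4.0 -> motor fights the spring
# Same load after rotating the (undeformed) spring so q_eq = 2.0 rad:
print(motor_torque(load=2.0, q=1.0, q_eq=2.0))  # 0.0 -> motor nearly idle
```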
2301.06249
Xiaowei Chen
Xiaowei Chen, Xiao Jiang, Jiawei Fang, Shihui Guo, Juncong Lin, Minghong Liao, Guoliang Luo, Hongbo Fu
DisPad: Flexible On-Body Displacement of Fabric Sensors for Robust Joint-Motion Tracking
25 pages, 14 figures
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The last few decades have witnessed an emerging trend of wearable soft sensors; however, there are important signal-processing challenges for soft sensors that still limit their practical deployment. They are error-prone when displaced, resulting in significant deviations from their ideal sensor output. In this work, we propose a novel prototype that integrates an elbow pad with a sparse network of soft sensors. Our prototype is fully bio-compatible, stretchable, and wearable. We develop a learning-based method to predict the elbow orientation angle and achieve an average tracking error of 9.82 degrees for single-user multi-motion experiments. With transfer learning, our method achieves average tracking errors of 10.98 degrees and 11.81 degrees across different motion types and users, respectively. Our core contribution lies in a solution that realizes robust and stable human joint motion tracking across different device displacements.
[ { "version": "v1", "created": "Mon, 16 Jan 2023 03:54:32 GMT" } ]
2023-01-19T00:00:00
[ [ "Chen", "Xiaowei", "" ], [ "Jiang", "Xiao", "" ], [ "Fang", "Jiawei", "" ], [ "Guo", "Shihui", "" ], [ "Lin", "Juncong", "" ], [ "Liao", "Minghong", "" ], [ "Luo", "Guoliang", "" ], [ "Fu", "Hongbo", "" ] ]
new_dataset
0.999297
2301.07098
Andreas Ostermaier
Sebastian Kr\"ugel, Andreas Ostermaier, Matthias Uhl
The moral authority of ChatGPT
null
null
null
null
cs.CY cs.AI cs.HC cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
ChatGPT is not only fun to chat with, but it also searches information, answers questions, and gives advice. With consistent moral advice, it might improve the moral judgment and decisions of users, who often hold contradictory moral beliefs. Unfortunately, ChatGPT turns out to be a highly inconsistent moral advisor. Nonetheless, we find in an experiment that it influences users' moral judgment, even when they know they are advised by a chatting bot, and that they underestimate how much they are influenced. Thus, ChatGPT threatens to corrupt rather than improve users' judgment. These findings raise the question of how to ensure the responsible use of ChatGPT and similar AI. Transparency is often touted but seems ineffective. We propose training to improve digital literacy.
[ { "version": "v1", "created": "Fri, 13 Jan 2023 20:24:38 GMT" } ]
2023-01-19T00:00:00
[ [ "Krügel", "Sebastian", "" ], [ "Ostermaier", "Andreas", "" ], [ "Uhl", "Matthias", "" ] ]
new_dataset
0.996839
2301.07163
Shubham Atreja
Shubham Atreja, Jane Im, Paul Resnick, Libby Hemphill
AppealMod: Shifting Effort from Moderators to Users Making Appeals
under review
null
null
null
cs.CY cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As content moderation becomes a central aspect of all social media platforms and online communities, interest has grown in how to make moderation decisions contestable. On social media platforms where individual communities moderate their own activities, the responsibility to address user appeals falls on volunteers from within the community. While there is a growing body of work devoted to understanding and supporting the volunteer moderators' workload, little is known about their practice of handling user appeals. Through a collaborative and iterative design process with Reddit moderators, we found that moderators spend considerable effort investigating user ban appeals and desire to directly engage with users and retain their agency over each decision. To fulfill their needs, we designed and built AppealMod, a system that asks users to put more effort into their appeals, by providing additional information, before their appeals are reviewed by human moderators. In addition to giving moderators more information, we expected the friction in the appeal process would lead to a selection effect among users, with many insincere and toxic appeals being abandoned before getting any attention from human moderators. To evaluate our system, we conducted a field experiment in a Reddit community of over 29 million users that lasted for four months. As a result of the selection effect, moderators viewed only 30\% of initial appeals and less than 10\% of the toxically worded appeals; yet they granted roughly the same number of appeals. Overall, our system is effective at reducing moderator workload and minimizing their exposure to toxic content while honoring their preference for direct engagement and agency in appeals.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 20:15:20 GMT" } ]
2023-01-19T00:00:00
[ [ "Atreja", "Shubham", "" ], [ "Im", "Jane", "" ], [ "Resnick", "Paul", "" ], [ "Hemphill", "Libby", "" ] ]
new_dataset
0.982226
2301.07183
Runcong Zhao
Runcong Zhao and Lin Gui and Hanqi Yan and Yulan He
Tracking Brand-Associated Polarity-Bearing Topics in User Reviews
null
null
null
null
cs.IR cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Monitoring online customer reviews is important for business organisations to measure customer satisfaction and better manage their reputations. In this paper, we propose a novel dynamic Brand-Topic Model (dBTM) which is able to automatically detect and track brand-associated sentiment scores and polarity-bearing topics from product reviews organised in temporally-ordered time intervals. dBTM models the evolution of the latent brand polarity scores and the topic-word distributions over time by Gaussian state space models. It also incorporates a meta learning strategy to control the update of the topic-word distribution in each time interval in order to ensure smooth topic transitions and better brand score predictions. It has been evaluated on a dataset constructed from MakeupAlley reviews and a hotel review dataset. Experimental results show that dBTM outperforms a number of competitive baselines in brand ranking, achieving a good balance of topic coherence and uniqueness, and extracting well-separated polarity-bearing topics across time intervals.
[ { "version": "v1", "created": "Tue, 3 Jan 2023 18:30:34 GMT" } ]
2023-01-19T00:00:00
[ [ "Zhao", "Runcong", "" ], [ "Gui", "Lin", "" ], [ "Yan", "Hanqi", "" ], [ "He", "Yulan", "" ] ]
new_dataset
0.985802
2301.07189
Gopika Ajaykumar
Gopika Ajaykumar and Chien-Ming Huang
Multimodal Robot Programming by Demonstration: A Preliminary Exploration
6 pages, 6 figures, 2021 RSS Workshop on Accessibility of Robot Programming and the Work of the Future
null
null
null
cs.RO cs.HC
http://creativecommons.org/licenses/by/4.0/
Recent years have seen a growth in the number of industrial robots working closely with end-users such as factory workers. This growing use of collaborative robots has been enabled in part by the availability of end-user robot programming methods that allow users who are not robot programmers to teach robots task actions. Programming by Demonstration (PbD) is one such end-user programming method that enables users to bypass the complexities of specifying robot motions using programming languages by instead demonstrating the desired robot behavior. Demonstrations are often provided by physically guiding the robot through the motions required for a task action in a process known as kinesthetic teaching. Kinesthetic teaching enables users to directly demonstrate task behaviors in the robot's configuration space, making it a popular end-user robot programming method for collaborative robots known for its low cognitive burden. However, because kinesthetic teaching restricts the programmer's teaching to motion demonstrations, it fails to leverage information from other modalities that humans naturally use when providing physical task demonstrations to one another, such as gaze and speech. Incorporating multimodal information into the traditional kinesthetic programming workflow has the potential to enhance robot learning by highlighting critical aspects of a program, reducing ambiguity, and improving situational awareness for the robot learner, and can provide insight into the human programmer's intent and difficulties. In this extended abstract, we describe a preliminary study on multimodal kinesthetic demonstrations and future directions for using multimodal demonstrations to enhance robot learning and user programming experiences.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 21:05:13 GMT" } ]
2023-01-19T00:00:00
[ [ "Ajaykumar", "Gopika", "" ], [ "Huang", "Chien-Ming", "" ] ]
new_dataset
0.995889
2301.07197
Mehmet Efe Tiryaki
Mehmet Efe Tiryaki, Fatih Dogangun, Cem Balda Dayan, Paul Wrede, Metin Sitti
MRI-powered Magnetic Miniature Capsule Robot with HIFU-controlled On-demand Drug Delivery
6 pages, 6 figures, accepted to ICRA2023
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Magnetic resonance imaging (MRI)-guided robotic systems offer great potential for new minimally invasive medical tools, including MRI-powered miniature robots. By re-purposing the imaging hardware of an MRI scanner, a magnetic miniature robot can be navigated into remote parts of the patient's body without needing tethered endoscopic tools. However, state-of-the-art MRI-powered magnetic miniature robots have limited functionality besides navigation. Here, we propose an MRI-powered magnetic miniature capsule robot benefiting from acoustic streaming forces generated by MRI-guided high-intensity focused ultrasound (HIFU) for controlled drug release. Our design comprises a polymer capsule shell with a submillimeter-diameter drug-release hole that captures an air bubble functioning as a stopper. We use the HIFU pulse to initiate drug release by removing the air bubble once the capsule robot reaches the target location. By controlling acoustic pressure, we also regulate the drug release rate for multiple location targeting during navigation. We demonstrated that the proposed magnetic capsule robot can travel at speeds of up to 1.13 cm/s in ex vivo porcine small intestine and release drug to multiple target sites in a single operation, using a combination of MRI-powered actuation and HIFU-controlled release. The proposed MRI-guided microrobotic drug release system will greatly impact minimally invasive medical procedures by allowing on-demand targeted drug delivery.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 21:23:30 GMT" } ]
2023-01-19T00:00:00
[ [ "Tiryaki", "Mehmet Efe", "" ], [ "Dogangun", "Fatih", "" ], [ "Dayan", "Cem Balda", "" ], [ "Wrede", "Paul", "" ], [ "Sitti", "Metin", "" ] ]
new_dataset
0.997814
2301.07202
Ali Abedi
Christopher Vattheuer, Charlie Liu, Ali Abedi, Omid Abari
Are Home Security Systems Reliable?
null
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Home security systems have become increasingly popular since they provide an additional layer of protection and peace of mind. These systems typically include battery-powered motion sensors, contact sensors, and smart locks. Z-Wave is a very popular wireless communication technology for these low-power systems. In this paper, we demonstrate two new attacks targeting Z-Wave devices. First, we show how an attacker can remotely attack Z-Wave security devices to increase their power consumption by three orders of magnitude, reducing their battery life from a few years to just a few hours. Second, we show multiple Denial of Service (DoS) attacks which enable an attacker to interrupt the operation of security systems in just a few seconds. Our experiments show that these attacks are effective even when the attacker device is in a car 100 meters away from the targeted house.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 21:27:01 GMT" } ]
2023-01-19T00:00:00
[ [ "Vattheuer", "Christopher", "" ], [ "Liu", "Charlie", "" ], [ "Abedi", "Ali", "" ], [ "Abari", "Omid", "" ] ]
new_dataset
0.998517
2301.07271
Tanj Bennett
Tanj Bennett
Chip Guard ECC: An Efficient, Low Latency Method
6 pages, 1 figure
null
null
null
cs.AR
http://creativecommons.org/licenses/by/4.0/
Chip Guard is a new approach to symbol-correcting error correction codes. It can be scaled to various data burst sizes and reliability levels. A specific version for DDR5 is described. It uses the usual DDR5 configuration of 8 data chips, plus 2 chips for ECC and metadata, with 64-bit bursts per chip, to support whole-chip correction reliably and with high probity (reporting of uncorrectable faults). Various numbers of metadata bits may be supported with defined tradeoffs for reliability and probity. The method should correct all bounded faults of a single chip, with less than 1 in 10^12 chance of failing to correct unbounded faults in one chip, or less than 1 in 10^12 chance of failure to detect an uncorrected fault which affects multiple chips.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 02:27:25 GMT" } ]
2023-01-19T00:00:00
[ [ "Bennett", "Tanj", "" ] ]
new_dataset
0.998536
2301.07301
Rui Wan
Rui Wan, Tianyun Zhao, Wei Zhao
PTA-Det: Point Transformer Associating Point cloud and Image for 3D Object Detection
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In autonomous driving, 3D object detection based on multi-modal data has become an indispensable approach when facing complex environments around the vehicle. During multi-modal detection, LiDAR and camera are applied simultaneously for capturing and modeling. However, due to the intrinsic discrepancies between the LiDAR point and camera image, the fusion of the data for object detection encounters a series of problems, and most multi-modal detection methods perform even worse than LiDAR-only methods. In this investigation, we propose a method named PTA-Det to improve the performance of multi-modal detection. Accompanying PTA-Det, a Pseudo Point Cloud Generation Network is proposed, which converts image information, including texture and semantic features, into pseudo points. Thereafter, through a transformer-based Point Fusion Transition (PFT) module, the features of LiDAR points and pseudo points from the image can be deeply fused under a unified point-based representation. The combination of these modules can conquer the major obstacle in feature fusion across modalities and realizes a complementary and discriminative representation for proposal generation. Extensive experiments on the KITTI dataset show that PTA-Det achieves competitive results and support its effectiveness.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 04:35:49 GMT" } ]
2023-01-19T00:00:00
[ [ "Wan", "Rui", "" ], [ "Zhao", "Tianyun", "" ], [ "Zhao", "Wei", "" ] ]
new_dataset
0.999031
2301.07315
Shrey Jain
Aaditya Bhat, Shrey Jain
Face Recognition in the age of CLIP & Billion image datasets
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
CLIP (Contrastive Language-Image Pre-training) models developed by OpenAI have achieved outstanding results on various image recognition and retrieval tasks, displaying strong zero-shot performance. This means that they are able to perform effectively on tasks for which they have not been explicitly trained. Inspired by the success of OpenAI CLIP, a new publicly available dataset called LAION-5B was collected, which resulted in the development of the open ViT-H/14 and ViT-G/14 models that outperform the OpenAI L/14 model. The LAION-5B release also includes an approximate nearest neighbor index, with a web interface for search & subset creation. In this paper, we evaluate the performance of various CLIP models as zero-shot face recognizers. Our findings show that CLIP models perform well on face recognition tasks, but increasing the size of the CLIP model does not necessarily lead to improved accuracy. Additionally, we investigate the robustness of CLIP models against data poisoning attacks by testing their performance on poisoned data. Through this analysis, we aim to understand the potential consequences and misuse of search engines built using CLIP models, which could potentially function as unintentional face recognition engines.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 05:34:57 GMT" } ]
2023-01-19T00:00:00
[ [ "Bhat", "Aaditya", "" ], [ "Jain", "Shrey", "" ] ]
new_dataset
0.993824
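Since the abstract above evaluates CLIP models as zero-shot face recognizers, here is a small sketch of that evaluation idea using OpenAI's reference CLIP package; the gallery file names, the backbone choice, and the match threshold are illustrative assumptions rather than the authors' protocol.

```python
# Hedged sketch: CLIP image embeddings as a zero-shot face matcher.
import clip  # pip install git+https://github.com/openai/CLIP.git
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-L/14", device=device)

def embed(path: str) -> torch.Tensor:
    """Return a unit-norm CLIP image embedding for one image file."""
    image = preprocess(Image.open(path)).unsqueeze(0).to(device)
    with torch.no_grad():
        feat = model.encode_image(image)
    return feat / feat.norm(dim=-1, keepdim=True)

gallery = {name: embed(f"{name}.jpg") for name in ["alice", "bob"]}  # hypothetical
probe = embed("unknown.jpg")                                          # hypothetical
scores = {name: (probe @ feat.T).item() for name, feat in gallery.items()}
best = max(scores, key=scores.get)
print(best if scores[best] > 0.8 else "no match")  # threshold is an assumption
```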
2301.07322
Youbao Tang
Xiaoye Qian, Youbao Tang, Ning Zhang, Mei Han, Jing Xiao, Ming-Chun Huang, Ruei-Sung Lin
HSTFormer: Hierarchical Spatial-Temporal Transformers for 3D Human Pose Estimation
The first two authors contributed equally
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transformer-based approaches have been successfully proposed for 3D human pose estimation (HPE) from 2D pose sequences and have achieved state-of-the-art (SOTA) performance. However, current SOTAs have difficulty modeling spatial-temporal correlations of joints at different levels simultaneously. This is due to the spatial-temporal complexity of poses: temporally, poses move at various speeds, while spatially, various joints and body parts move differently. Hence, a cookie-cutter transformer is non-adaptable and can hardly meet the "in-the-wild" requirement. To mitigate this issue, we propose Hierarchical Spatial-Temporal transFormers (HSTFormer) to capture multi-level spatial-temporal correlations of joints, gradually from local to global, for accurate 3D HPE. HSTFormer consists of four transformer encoders (TEs) and a fusion module. To the best of our knowledge, HSTFormer is the first to study hierarchical TEs with multi-level fusion. Extensive experiments on three datasets (i.e., Human3.6M, MPI-INF-3DHP, and HumanEva) demonstrate that HSTFormer achieves competitive and consistent performance on benchmarks with various scales and difficulties. Specifically, it surpasses recent SOTAs on the challenging MPI-INF-3DHP dataset and the small-scale HumanEva dataset, with a highly generalized systematic approach. The code is available at: https://github.com/qianxiaoye825/HSTFormer.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 05:54:02 GMT" } ]
2023-01-19T00:00:00
[ [ "Qian", "Xiaoye", "" ], [ "Tang", "Youbao", "" ], [ "Zhang", "Ning", "" ], [ "Han", "Mei", "" ], [ "Xiao", "Jing", "" ], [ "Huang", "Ming-Chun", "" ], [ "Lin", "Ruei-Sung", "" ] ]
new_dataset
0.999221
2301.07368
Marco Ruffini
M. Ruffini, C. Xie, L. Shi and J. S. Wey
Connected OFCity Challenge: an updated perspective on technology for connected cities
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
This paper gives an update on technologies discussed during three OFC events, called 'The Connected OFCity Challenge', from 2016 to 2018. It focuses on research development and field deployment of Passive Optical Networks and Cloud-Based technologies.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 08:33:23 GMT" } ]
2023-01-19T00:00:00
[ [ "Ruffini", "M.", "" ], [ "Xie", "C.", "" ], [ "shi", "L.", "" ], [ "Wey", "J. S.", "" ] ]
new_dataset
0.990774
2301.07378
Pardeep Singh
Pardeep Singh, Rabindra Lamsal, Monika, Satish Chand, Bhawna Shishodia
GeoCovaxTweets: COVID-19 Vaccines and Vaccination-specific Global Geotagged Twitter Conversations
null
null
null
null
cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Social media platforms provide actionable information during crises and pandemic outbreaks. The COVID-19 pandemic has imposed a chronic public health crisis worldwide, with experts considering vaccines as the ultimate prevention to achieve herd immunity against the virus. A proportion of people may turn to social media platforms to oppose vaccines and vaccination, hindering government efforts to eradicate the virus. This paper presents the COVID-19 vaccines and vaccination-specific global geotagged tweets dataset, GeoCovaxTweets, that contains more than 1.8 million tweets, with location information and longer temporal coverage, originating from 233 countries and territories between January 2020 and November 2022. The paper discusses the dataset's curation method and how it can be re-created locally, and later explores the dataset through multiple tweets distributions and briefly discusses its potential use cases. We anticipate that the dataset will assist the researchers in the crisis computing domain to explore the conversational dynamics of COVID-19 vaccines and vaccination Twitter discourse through numerous spatial and temporal dimensions concerning trends, shifts in opinions, misinformation, and anti-vaccination campaigns.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 09:12:21 GMT" } ]
2023-01-19T00:00:00
[ [ "Singh", "Pardeep", "" ], [ "Lamsal", "Rabindra", "" ], [ "Monika", "", "" ], [ "Chand", "Satish", "" ], [ "Shishodia", "Bhawna", "" ] ]
new_dataset
0.999685
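As a hedged sketch of the kind of spatio-temporal exploration the GeoCovaxTweets paper describes, the snippet below assumes the hydrated dataset has been exported to a CSV with hypothetical columns named created_at and country; the actual field names in the released dataset may differ.

```python
# Minimal exploration sketch over a hydrated export of the dataset.
import pandas as pd

df = pd.read_csv("geocovaxtweets.csv", parse_dates=["created_at"])  # hypothetical file

# Monthly tweet volume across January 2020 - November 2022.
monthly = df.set_index("created_at").resample("M").size()
print(monthly.tail())

# Top tweet-producing countries among the 233 countries and territories.
print(df["country"].value_counts().head(10))
```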
2301.07424
Sajad Tavakoli
Shafagh A. Pashaki, Ali Nahvi, Ahmad Ahmadi, Sajad Tavakoli, Shahin Naeemi, Salar H. Shamchi
Autonomous Slalom Maneuver Based on Expert Drivers' Behavior Using Convolutional Neural Network
null
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Lane changing and obstacle avoidance are among the most important tasks in automated cars. To date, many algorithms have been suggested that are generally based on path trajectory or reinforcement learning approaches. Although these methods have been efficient, they are not able to accurately imitate a smooth path traveled by an expert driver. In this paper, a method is presented to mimic drivers' behavior using a convolutional neural network (CNN). First, seven features are extracted from a dataset gathered from four expert drivers in a driving simulator. Then, these features are converted from 1D arrays to 2D arrays and injected into a CNN. The CNN model computes the desired steering wheel angle and sends it to an adaptive PD controller. Finally, the control unit applies the proper torque to the steering wheel. Results show that the CNN model can mimic the drivers' behavior with an R-squared of 0.83. Also, the performance of the presented method was evaluated in the driving simulator over 17 trials, in all of which the traffic cones were avoided successfully. In some trials, the presented method performed a smoother maneuver than the expert drivers.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 10:47:43 GMT" } ]
2023-01-19T00:00:00
[ [ "Pashaki", "Shafagh A.", "" ], [ "Nahvi", "Ali", "" ], [ "Ahmadi", "Ahmad", "" ], [ "Tavakoli", "Sajad", "" ], [ "Naeemi", "Shahin", "" ], [ "Shamchi", "Salar H.", "" ] ]
new_dataset
0.977417
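The slalom-maneuver abstract above reshapes seven 1-D feature arrays into a 2-D CNN input and tracks the predicted steering angle with an adaptive PD controller. A minimal sketch of those two steps follows; the gains, the channel layout, and the fixed (non-adaptive) PD law are simplifying assumptions, not the paper's implementation.

```python
# Sketch of the data layout and control law described above. The CNN itself is
# omitted; "features" stands for the seven extracted 1-D signals over T steps.
import numpy as np

def to_cnn_input(features: np.ndarray) -> np.ndarray:
    """Stack seven 1-D feature arrays of shape (7, T) into a 2-D CNN input."""
    assert features.shape[0] == 7
    return features[np.newaxis, ...]  # add a channel axis: (1, 7, T)

def pd_torque(desired: float, actual: float, prev_error: float, dt: float,
              kp: float = 2.0, kd: float = 0.1):
    """PD law: steering torque from angle error and its finite-difference rate."""
    error = desired - actual
    torque = kp * error + kd * (error - prev_error) / dt
    return torque, error

# One control step: CNN-predicted angle 0.3 rad, measured angle 0.1 rad.
torque, err = pd_torque(desired=0.3, actual=0.1, prev_error=0.15, dt=0.01)
```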
2301.07431
Ge Zhu
Ge Zhu, Jinbao Li and Yahong Guo
Sharp Eyes: A Salient Object Detector Working The Same Way as Human Visual Characteristics
null
null
null
null
cs.CV cs.AI cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Current methods aggregate multi-level features or introduce edge and skeleton information to get more refined saliency maps. However, little attention is paid to how to obtain the complete salient object in a cluttered background, where the targets are usually similar in color and texture to the background. To handle this complex scene, we propose a sharp eyes network (SENet) that first separates the object from the scene and then finely segments it, which is in line with human visual characteristics, i.e., to look first and then focus. Different from previous methods which directly integrate edge or skeleton information to supplement the defects of objects, the proposed method aims to utilize the expanded objects to guide the network to obtain complete predictions. Specifically, SENet mainly consists of a target separation (TS) branch and an object segmentation (OS) branch trained by minimizing a new hierarchical difference aware (HDA) loss. In the TS branch, we construct a fractal structure to produce saliency features with expanded boundaries via the supervision of expanded ground truth, which can enlarge the detail difference between foreground and background. In the OS branch, we first aggregate multi-level features to adaptively select complementary components, and then feed the saliency features with expanded boundaries into the aggregated features to guide the network to obtain complete predictions. Moreover, we propose the HDA loss to further improve the structural integrity and local details of the salient objects, which assigns a weight to each pixel according to its distance from the boundary hierarchically. Hard pixels with similar appearance in the border region are given more attention hierarchically to emphasize their importance in completeness prediction. Comprehensive experimental results on five datasets demonstrate that the proposed approach outperforms the state-of-the-art methods both quantitatively and qualitatively.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 11:00:45 GMT" } ]
2023-01-19T00:00:00
[ [ "Zhu", "Ge", "" ], [ "Li", "Jinbao", "" ], [ "Guo", "Yahong", "" ] ]
new_dataset
0.999524
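The HDA loss above weights each pixel by its distance from the object boundary. Here is a hedged sketch of one such boundary-distance weighting paired with a plain weighted binary cross-entropy; the exponential decay and the exact weighting scheme are assumptions, not the authors' formulation.

```python
# Sketch of a boundary-distance pixel weighting in the spirit of the HDA loss:
# pixels near the foreground/background boundary get larger weights.
import numpy as np
import torch
import torch.nn.functional as F
from scipy.ndimage import distance_transform_edt

def boundary_weights(mask: np.ndarray, decay: float = 5.0) -> np.ndarray:
    """Weight map that peaks at the boundary of a binary mask (assumed form)."""
    dist_in = distance_transform_edt(mask)        # distance inside the object
    dist_out = distance_transform_edt(1 - mask)   # distance outside the object
    dist = dist_in + dist_out                     # distance to the boundary
    return 1.0 + np.exp(-dist / decay)            # border pixels weigh ~2x

def weighted_bce(pred: torch.Tensor, target: torch.Tensor, w: torch.Tensor):
    """Per-pixel weighted binary cross-entropy."""
    return F.binary_cross_entropy(pred, target, weight=w)

# Usage: w = torch.from_numpy(boundary_weights(gt_mask)).float()
```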
2301.07566
Grigorii Trofimiuk
Grigorii Trofimiuk, Evgeny Belyaev, Peter Trifonov
Distributed Video Coding Based on Polar Codes
This is a slightly modified version of a paper accepted for publication in IEEE Communications Letters
null
10.1109/LCOMM.2023.3237285
null
cs.IT eess.IV math.IT
http://creativecommons.org/licenses/by/4.0/
In this letter we present an improved distributed video coding (DVC) scheme based on polar coding techniques. Firstly, we adapt log-likelihood ratios (LLRs) for DVC with integer implementation of a discrete cosine transform (DCT). We propose a computationally efficient and numerically stable modification of these LLRs based on the simplified methods of polar codes decoding. We show that on average this approach provides 0.3 dB PSNR gain for DVC with LDPC accumulated (LDPCA) codes. Secondly, we introduce the nested shortened polar codes construction algorithm. We demonstrate that replacement of LDPCA by polar codes improves PSNR by 0.1 dB on average, whereas, for videos with relatively high motion level, the gain reaches up to 0.23, 0.39 and 0.55 dB for Group of Pictures (GOP) lengths 2, 4 and 8 frames, respectively. Finally, experimental results demonstrate that DVC with polar codes and Tal-Vardy list decoder operates up to two times faster than DVC with LDPCA code and belief propagation (BP) decoder.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 14:36:50 GMT" } ]
2023-01-19T00:00:00
[ [ "Trofimiuk", "Grigorii", "" ], [ "Belyaev", "Evgeny", "" ], [ "Trifonov", "Peter", "" ] ]
new_dataset
0.998408
2301.07613
Muhammad Ali Farooq
Muhammad Ali Farooq, Waseem Shariff, Faisal Khan, Peter Corcoran
Development, Optimization, and Deployment of Thermal Forward Vision Systems for Advance Vehicular Applications on Edge Devices
The paper has been accepted and is in the publication phase at the ICMV 2022 conference. Link: http://icmv.org/
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this research work, we propose a thermal tiny-YOLO multi-class object detection (TTYMOD) system as a smart forward sensing system that should remain effective in all weather and harsh environmental conditions, using an end-to-end YOLO deep learning framework. It provides enhanced safety and improved awareness features for driver assistance. The system is trained on large-scale thermal public datasets as well as a newly gathered, novel open-sourced dataset comprising more than 35,000 distinct thermal frames. For optimal training and convergence of the YOLO-v5 tiny network variant on thermal data, we have employed different optimizers, which include stochastic gradient descent (SGD), Adam, and its variant AdamW, which has an improved implementation of weight decay. The performance of the thermally tuned tiny architecture is further evaluated on public as well as locally gathered test data in diversified and challenging weather and environmental conditions. The efficacy of the thermally tuned nano network is quantified using various quantitative metrics, which include mean average precision, frames-per-second rate, and average inference time. Experimental outcomes show that the network achieved the best mAP of 56.4% with an average inference time per frame of 4 milliseconds. The study further incorporates optimization of the tiny network variant using the TensorFlow Lite quantization tool, which is beneficial for the deployment of deep learning architectures on edge and mobile devices. For this study, we have used a Raspberry Pi 4 computing board for evaluating the real-time feasibility performance of an optimized version of the thermal object detection network for the automotive sensor suite. The source code, trained and optimized models, and complete validation/testing results are publicly available at https://github.com/MAli-Farooq/Thermal-YOLO-And-Model-Optimization-Using-TensorFlowLite.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 15:45:33 GMT" } ]
2023-01-19T00:00:00
[ [ "Farooq", "Muhammad Ali", "" ], [ "Shariff", "Waseem", "" ], [ "Khan", "Faisal", "" ], [ "Corcoran", "Peter", "" ] ]
new_dataset
0.99648
2301.07627
Huadeng Wang
Huadeng Wang, Hao Xu, Bingbing Li, Xipeng Pan, Lingqi Zeng, Rushi Lan, Xiaonan Luo
A novel dataset and a two-stage mitosis nuclei detection method based on hybrid anchor branch
22 pages,10 figures, 8 tables
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Mitosis detection is one of the challenging problems in computational pathology, and the mitotic count is an important index of cancer grading for pathologists. However, current counts of mitotic nuclei rely on pathologists microscopically counting the mitotic nuclei in hot spots, which is subjective and time-consuming. In this paper, we propose a two-stage cascaded network, named FoCasNet, for mitosis detection. In the first stage, a detection network named M_det is proposed to detect as many mitoses as possible. In the second stage, a classification network M_class is proposed to refine the results of the first stage. In addition, an attention mechanism, a normalization method, and a hybrid anchor branch classification subnet are introduced to improve the overall detection performance. Our method achieves the current highest F1-score of 0.888 on the public dataset ICPR 2012. We also evaluated our method on the GZMH dataset, released by our research team for the first time, and reached the highest F1-score of 0.563, which is also better than that of multiple classic detection networks widely used at present. This confirms the effectiveness and generalization of our method. The code will be available at: https://github.com/antifen/mitosis-nuclei-detection.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 16:11:09 GMT" } ]
2023-01-19T00:00:00
[ [ "Wang", "Huadeng", "" ], [ "Xu", "Hao", "" ], [ "Li", "Bingbing", "" ], [ "Pan", "Xipeng", "" ], [ "Zeng", "Lingqi", "" ], [ "Lan", "Rushi", "" ], [ "Luo", "Xiaonan", "" ] ]
new_dataset
0.999746
2301.07652
Wei Xie
Wei Xie, Zhipeng Yu, Zimeng Zhao, Binghui Zuo, Yangang Wang
HMDO: Markerless Multi-view Hand Manipulation Capture with Deformable Objects
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We construct the first markerless deformable interaction dataset recording interactive motions of hands and deformable objects, called HMDO (Hand Manipulation with Deformable Objects). With our multi-view capture system, it captures deformable interactions from multiple perspectives, with various object shapes and diverse interactive forms. Our motivation is the current lack of hand and deformable object interaction datasets, as 3D hand and deformable object reconstruction is challenging, mainly due to mutual occlusion: the interaction area is difficult to observe, the visual features of the hand and the object are entangled, and the reconstruction of the interaction area's deformation is difficult. To tackle this challenge, we propose a method to annotate our captured data. Our key idea is to use estimated hand features to guide the object's global pose estimation, and then to optimize the deformation process of the object by analyzing the relationship between the hand and the object. Through comprehensive evaluation, the proposed method can reconstruct interactive motions of hands and deformable objects with high quality. HMDO currently consists of 21600 frames over 12 sequences. In the future, this dataset could boost research on learning-based reconstruction of deformable interaction scenes.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 16:55:15 GMT" } ]
2023-01-19T00:00:00
[ [ "Xie", "Wei", "" ], [ "Yu", "Zhipeng", "" ], [ "Zhao", "Zimeng", "" ], [ "Zuo", "Binghui", "" ], [ "Wang", "Yangang", "" ] ]
new_dataset
0.99462
2301.07669
Regis Kopper
Zekun Cao and Regis Kopper
Real-Time Viewport-Aware Optical Flow Estimation in 360-degree Videos for Visually-Induced Motion Sickness Mitigation
null
null
null
null
cs.HC cs.GR
http://creativecommons.org/licenses/by-sa/4.0/
Visually-induced motion sickness (VIMS), a side effect of illusory motion caused by visual stimulation, is one of the major obstacles to the widespread use of Virtual Reality (VR). Along with scene object information, the visual stimulation can be primarily characterized by the optical flow, which captures the motion pattern, such as the intensity and direction of the moving image. We estimate real-time optical flow in 360-degree videos targeted at immersive user-interactive visualization based on the user's current viewport. The proposed method allows the estimation of a customized visual flow for each experience of dynamic 360-degree videos and is an improvement over previous methods, which compute a single optical flow value for the entire equirectangular frame. We applied our method to modulate the opacity of Granulated Rest Frames (GRF), a novel technique consisting of noise-like, randomly distributed visual references that remain stable relative to the user's body during the experience of immersive prerecorded 360-degree videos. We report the results of a preliminary one-day between-subjects study with 18 participants in which users watched a 2-minute high-intensity 360-degree video. Results show that GRF combined with real-time optical flow estimation may help users be more comfortable when they watch 360-degree videos, although the improvement is not significant.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 17:24:30 GMT" } ]
2023-01-19T00:00:00
[ [ "Cao", "Zekun", "" ], [ "Kopper", "Regis", "" ] ]
new_dataset
0.985984
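A minimal sketch of the viewport-aware idea above follows: compute dense optical flow only on the user's current viewport crop of each equirectangular frame, then reduce it to one intensity value that could drive the rest-frame opacity. The crop geometry, the Farneback parameters, and the opacity mapping are illustrative assumptions, not the paper's implementation.

```python
# Viewport-restricted dense optical flow with OpenCV's Farneback method.
import cv2
import numpy as np

def viewport_flow_intensity(prev_frame, next_frame, x, y, w, h) -> float:
    """Mean optical-flow magnitude inside the (x, y, w, h) viewport crop."""
    prev_crop = cv2.cvtColor(prev_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    next_crop = cv2.cvtColor(next_frame[y:y+h, x:x+w], cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev_crop, next_crop, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, _ = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    return float(mag.mean())

def grf_opacity(intensity: float, scale: float = 0.1) -> float:
    """Map flow intensity to a [0, 1] opacity for the granulated rest frames."""
    return min(1.0, intensity * scale)
```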
2301.07673
Xingyi He
Xingyi He, Jiaming Sun, Yuang Wang, Di Huang, Hujun Bao, Xiaowei Zhou
OnePose++: Keypoint-Free One-Shot Object Pose Estimation without CAD Models
Accepted to NeurIPS 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We propose a new method for object pose estimation without CAD models. The previous feature-matching-based method OnePose has shown promising results under a one-shot setting which eliminates the need for CAD models or object-specific training. However, OnePose relies on detecting repeatable image keypoints and is thus prone to failure on low-textured objects. We propose a keypoint-free pose estimation pipeline to remove the need for repeatable keypoint detection. Built upon the detector-free feature matching method LoFTR, we devise a new keypoint-free SfM method to reconstruct a semi-dense point-cloud model for the object. Given a query image for object pose estimation, a 2D-3D matching network directly establishes 2D-3D correspondences between the query image and the reconstructed point-cloud model without first detecting keypoints in the image. Experiments show that the proposed pipeline outperforms existing one-shot CAD-model-free methods by a large margin and is comparable to CAD-model-based methods on LINEMOD even for low-textured objects. We also collect a new dataset composed of 80 sequences of 40 low-textured objects to facilitate future research on one-shot object pose estimation. The supplementary material, code and dataset are available on the project page: https://zju3dv.github.io/onepose_plus_plus/.
[ { "version": "v1", "created": "Wed, 18 Jan 2023 17:47:13 GMT" } ]
2023-01-19T00:00:00
[ [ "He", "Xingyi", "" ], [ "Sun", "Jiaming", "" ], [ "Wang", "Yuang", "" ], [ "Huang", "Di", "" ], [ "Bao", "Hujun", "" ], [ "Zhou", "Xiaowei", "" ] ]
new_dataset
0.970387
1804.09914
Hassan Habibi Gharakheili
Hassan Habibi Gharakheili, Minzhao Lyu, Yu Wang, Himal Kumar, Vijay Sivaraman
iTeleScope: Intelligent Video Telemetry and Classification in Real-Time using Software Defined Networking
12 pages, 16 figures
null
10.1109/TNSM.2019.2929511
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video continues to dominate network traffic, yet operators today have poor visibility into the number, duration, and resolutions of the video streams traversing their domain. Current approaches are inaccurate, expensive, or unscalable, as they rely on statistical sampling, middle-box hardware, or packet inspection software. We present {\em iTelescope}, the first intelligent, inexpensive, and scalable SDN-based solution for identifying and classifying video flows in real-time. Our solution is novel in combining dynamic flow rules with telemetry and machine learning, and is built on commodity OpenFlow switches and open-source software. We develop a fully functional system, train it in the lab using multiple machine learning algorithms, and validate its performance to show over 95\% accuracy in identifying and classifying video streams from many providers including Youtube and Netflix. Lastly, we conduct tests to demonstrate its scalability to tens of thousands of concurrent streams, and deploy it live on a campus network serving several hundred real users. Our system gives unprecedented fine-grained real-time visibility of video streaming performance to operators of enterprise and carrier networks at very low cost.
[ { "version": "v1", "created": "Thu, 26 Apr 2018 07:02:07 GMT" } ]
2023-01-18T00:00:00
[ [ "Gharakheili", "Hassan Habibi", "" ], [ "Lyu", "Minzhao", "" ], [ "Wang", "Yu", "" ], [ "Kumar", "Himal", "" ], [ "Sivaraman", "Vijay", "" ] ]
new_dataset
0.995057
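The iTelescope abstract combines per-flow telemetry with machine learning to label video flows. Below is a hedged sketch of that learning step, with synthetic placeholder features and labels standing in for switch telemetry; the feature count, the classifier choice, and the data are illustrative assumptions, not the paper's trained system.

```python
# Sketch: train a classifier on per-flow telemetry features to label flows
# as video vs. non-video. Real features would come from OpenFlow counters.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.random((1000, 16))        # 16 telemetry features per flow (assumed)
y = rng.integers(0, 2, 1000)      # 1 = video stream, 0 = other (placeholder)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")  # near 0.5 on noise
```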
1911.03089
Om Prakash
Om Prakash, Habibul Islam and Ram Krishna Verma
Constacyclic codes of length $4p^s$ over the Galois ring $GR(p^a,m)$
There are mistakes in a few initial results that affect the whole paper
null
null
null
cs.IT math.IT
http://creativecommons.org/publicdomain/zero/1.0/
For prime $p$, $GR(p^a,m)$ represents the Galois ring of order $p^{am}$ and characteristic $p$, where $a$ is any positive integer. In this article, we study the Type (1) $\lambda$-constacyclic codes of length $4p^s$ over the ring $GR(p^a,m)$, where $\lambda=\xi_0+p\xi_1+p^2z$, $\xi_0,\xi_1\in T(p,m)$ are nonzero elements and $z\in GR(p^a,m)$. In the first case, when $\lambda$ is a square, we show that any ideal of $\mathcal{R}_p(a,m,\lambda)=\frac{GR(p^a,m)[x]}{\langle x^{4p^s}-\lambda\rangle}$ is the direct sum of the ideals of $\frac{GR(p^a,m)[x]}{\langle x^{2p^s}-\delta\rangle}$ and $\frac{GR(p^a,m)[x]}{\langle x^{2p^s}+\delta\rangle}$. In the second case, when $\lambda$ is not a square, we show that $\mathcal{R}_p(a,m,\lambda)$ is a chain ring whose ideals are $\langle (x^4-\alpha)^i\rangle\subseteq \mathcal{R}_p(a,m,\lambda)$, for $0\leq i\leq ap^s$, where $\alpha^{p^s}=\xi_0$. Also, we prove that the dual of the above code is $\langle (x^4-\alpha^{-1})^{ap^s-i}\rangle\subseteq \mathcal{R}_p(a,m,\lambda^{-1})$ and present the necessary and sufficient conditions for these codes to be self-orthogonal and self-dual, respectively. Moreover, the Rosenbloom-Tsfasman (RT) distance, Hamming distance and weight distribution of Type (1) $\lambda$-constacyclic codes of length $4p^s$ are obtained when $\lambda$ is not a square.
[ { "version": "v1", "created": "Fri, 8 Nov 2019 07:04:34 GMT" }, { "version": "v2", "created": "Sun, 15 Jan 2023 14:31:33 GMT" } ]
2023-01-18T00:00:00
[ [ "Prakash", "Om", "" ], [ "Islam", "Habibul", "" ], [ "Verma", "Ram Krishna", "" ] ]
new_dataset
0.999687
2002.04979
Jingjin Yu
Mario Szegedy and Jingjin Yu
Rubik Tables and Object Rearrangement
Pre-print of extended manuscript accepted by IJRR in 2022
null
null
null
cs.RO cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A great number of robotics applications demand the rearrangement of many mobile objects, e.g., organizing products on shelves, shuffling containers at shipping ports, reconfiguring fleets of mobile robots, and so on. To boost the throughput in systems designed for solving these rearrangement problems, it is essential to minimize the number of atomic operations, e.g., the pick-n-places of individual objects. However, this optimization task poses a rather difficult challenge due to complex inter-dependency between objects, especially in high-density settings. In tackling the aforementioned challenges, we develop a novel algorithmic tool, Rubik Tables, that provides a clean abstraction of object rearrangement problems as the proxy problem of shuffling items stored in a table or lattice. In its basic form, a Rubik Table is an $n\times n$ table containing $n^2$ items. We show that the reconfiguration of items in such a Rubik Table can be achieved using at most $2n$ column/row shuffles in the partially labeled setting, where each column (resp., row) shuffle may arbitrarily permute the items stored in a column (resp., row) of the table. When items are fully distinguishable, additional $n$ shuffles are needed. Rubik Tables allow many generalizations, e.g., to higher dimensions. Using Rubik Table, we have designed a first constant-factor optimal algorithm for stack rearrangement problems. We show that, for $nd$ items stored in $n$ stacks of depth $d$ each, using one empty stack as the swap space, $O(nd)$ stack pop-push operations are sufficient for an arbitrary reconfiguration of the stacks where $d \le n^{\frac{m}{2}}$ for arbitrary fixed $m >0$. Rubik Table results also allow the development of constant-factor optimal solutions for solving multi-robot motion planning problems under extreme robot density. These algorithms based on Rubik Table results run in low-polynomial time.
[ { "version": "v1", "created": "Wed, 12 Feb 2020 13:37:23 GMT" }, { "version": "v2", "created": "Thu, 7 May 2020 02:42:20 GMT" }, { "version": "v3", "created": "Tue, 17 Jan 2023 15:48:58 GMT" } ]
2023-01-18T00:00:00
[ [ "Szegedy", "Mario", "" ], [ "Yu", "Jingjin", "" ] ]
new_dataset
0.968416
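To make the Rubik Table primitive above concrete, here is an illustrative model of an n x n table supporting arbitrary row and column shuffles, with a shuffle counter; it models only the operations the theorem counts, not the paper's algorithm for choosing which roughly 2n shuffles realize a given reconfiguration.

```python
# Toy abstraction of the Rubik Table: an n x n grid of items whose rows and
# columns can each be permuted arbitrarily in one atomic "shuffle".
import numpy as np

class RubikTable:
    def __init__(self, n: int):
        self.grid = np.arange(n * n).reshape(n, n)  # n^2 distinct items
        self.shuffles = 0  # number of atomic row/column shuffles used

    def row_shuffle(self, i: int, perm):
        """Arbitrarily permute the items stored in row i."""
        self.grid[i] = self.grid[i][list(perm)]
        self.shuffles += 1

    def col_shuffle(self, j: int, perm):
        """Arbitrarily permute the items stored in column j."""
        self.grid[:, j] = self.grid[list(perm), j]
        self.shuffles += 1

t = RubikTable(3)
t.row_shuffle(0, [2, 0, 1])   # rotate the top row
t.col_shuffle(1, [1, 2, 0])   # rotate the middle column
print(t.grid, t.shuffles)     # the paper shows ~2n shuffles suffice overall
```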
2003.00839
Bin Fang
Bin Fang, Xingming Long, Yifan Zhang, GuoYi Luo, Fuchun Sun, Huaping Liu
Fabric Defect Detection Using Vision-Based Tactile Sensor
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a new type of system for fabric defect detection with a tactile inspection system. Different from existing visual inspection systems, the proposed system implements a vision-based tactile sensor. The tactile sensor, which mainly consists of a camera, four LEDs, and an elastic sensing layer, captures detailed information about the fabric surface structure and ignores color and pattern. Thus, the ambiguity between a defect and the image background related to fabric color and pattern is avoided. To utilize the tactile sensor for fabric inspection, we employ intensity adjustment for image preprocessing, a Residual Network with ensemble learning for detecting defects, and uniformity measurement for selecting an ideal dataset for model training. An experiment is conducted to verify the performance of the proposed tactile system. The experimental results demonstrate the feasibility of the proposed system, which performs well in detecting structural defects for various types of fabrics. In addition, the system does not require external light sources, which skips the process of setting up and tuning a lighting environment.
[ { "version": "v1", "created": "Mon, 2 Mar 2020 12:57:45 GMT" }, { "version": "v2", "created": "Tue, 17 Jan 2023 11:09:14 GMT" } ]
2023-01-18T00:00:00
[ [ "Fang", "Bin", "" ], [ "Long", "Xingming", "" ], [ "Zhang", "Yifan", "" ], [ "Luo", "GuoYi", "" ], [ "Sun", "Fuchun", "" ], [ "Liu", "Huaping", "" ] ]
new_dataset
0.999403
2011.04230
Afshin Alipour
Afshin Alipour, Mohammad J. Mahjoob, Zahra Fakhari, and Ara Nazarian
A New 4-DOF Robot for Rehabilitation of Knee and Ankle-Foot Complex: Simulation and Experiment
23 pages, 14 figures
Journal of Robotics and Control (JRC) [Online], 3.4 (2022): 483-495
10.18196/jrc.v3i4.14759
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Stationary robotic trainers are lower limb rehab robots which often incorporate an exoskeleton attached to a stationary base. The issue observed in stationary trainers for the simultaneous rehabilitation of the knee and ankle-foot complex is that they restrict the natural motion of the ankle-foot complex in rehab trainings due to the insufficient Degrees of Freedom (DOFs) of these trainers. A new stationary knee-ankle-foot rehab robot with all necessary DOFs is developed here. A typical rehab training is first implemented in simulation and then tested on a healthy subject. Results show that the proposed system functions naturally and meets the requirements of the desired rehab training.
[ { "version": "v1", "created": "Mon, 9 Nov 2020 07:39:40 GMT" } ]
2023-01-18T00:00:00
[ [ "Alipour", "Afshin", "" ], [ "Mahjoob", "Mohammad J.", "" ], [ "Fakhari", "Zahra", "" ], [ "Nazarian", "Ara", "" ] ]
new_dataset
0.996573
2109.00908
Adam Michael Roberts
Joe Gildea, Adrian Korban, Adam Michael Roberts, Alexander Tylyshchak
Binary self-dual codes of various lengths with new weight enumerators from a modified bordered construction and neighbours
arXiv admin note: substantial text overlap with arXiv:2108.09184, arXiv:2106.12355, arXiv:2102.10354
null
10.3934/amc.2022021
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we define a modification of a bordered construction for self-dual codes which utilises $\lambda$-circulant matrices. We provide the necessary conditions for the construction to produce self-dual codes over finite commutative Frobenius rings of characteristic 2. Using the modified construction together with the neighbour construction, we construct many binary self-dual codes of lengths 54, 68, 82 and 94 with weight enumerators that have previously not been known to exist.
[ { "version": "v1", "created": "Thu, 2 Sep 2021 13:05:53 GMT" } ]
2023-01-18T00:00:00
[ [ "Gildea", "Joe", "" ], [ "Korban", "Adrian", "" ], [ "Roberts", "Adam Michael", "" ], [ "Tylyshchak", "Alexander", "" ] ]
new_dataset
0.999489
2109.12881
Ra'fat AL-Msie'deen
Ra'Fat Al-Msie'deen
SoftCloud: A Tool for Visualizing Software Artifacts as Tag Clouds
14 pages, 10 figures, 10 tables
Mu'tah Lil-Buhuth wad-Dirasat, Natural and Applied Sciences Series Vol. 37. No.2, pp. 93-115, 2022
null
null
cs.SE
http://creativecommons.org/publicdomain/zero/1.0/
Software artifact visualization helps software developers manage the size and complexity of a software system. The tag cloud technique visualizes tags within the cloud according to their frequencies in software artifacts: the font size of a tag indicates its frequency within a software artifact, while the color of a tag is used just for aesthetic purposes. This paper suggests a new approach (SoftCloud) to visualize software artifacts as a tag cloud. The originality of SoftCloud is visualizing all the artifacts available to the software program as a tag cloud. Experiments have been conducted on different software artifacts to validate SoftCloud and demonstrate its strengths. The results showed the ability of SoftCloud to correctly retrieve all tags and their frequencies from the available software artifacts.
[ { "version": "v1", "created": "Mon, 27 Sep 2021 09:04:19 GMT" } ]
2023-01-18T00:00:00
[ [ "Al-Msie'deen", "Ra'Fat", "" ] ]
new_dataset
0.972297
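Since SoftCloud maps tag frequency to font size, here is a small sketch of that frequency-to-size computation; the artifact paths, the token pattern, and the linear size scale are illustrative assumptions rather than SoftCloud's implementation.

```python
# Sketch: count identifier-like tokens across artifact files and map counts
# to font sizes, the core computation behind a tag cloud.
import re
from collections import Counter
from pathlib import Path

def tag_frequencies(paths):
    """Count word tokens over all given artifact files (code, docs, etc.)."""
    counts = Counter()
    for p in paths:
        tokens = re.findall(r"[A-Za-z_]\w+", Path(p).read_text(errors="ignore"))
        counts.update(t.lower() for t in tokens)
    return counts

def font_size(freq: int, max_freq: int, lo: int = 10, hi: int = 48) -> int:
    """Linearly scale a tag's frequency to a font size in points."""
    return lo + round((hi - lo) * freq / max_freq)

freqs = tag_frequencies(["src/main.py", "README.md"])  # hypothetical artifacts
top = freqs.most_common(30)
sizes = {tag: font_size(c, top[0][1]) for tag, c in top}
```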
2110.07689
Xinyu Wang
Xinyu Wang
First-Order Modal $\xi$-Calculus: On the Aspects of Application and Bisimulation
null
null
null
null
cs.LO math.LO
http://creativecommons.org/licenses/by/4.0/
This paper proposes first-order modal $\xi$-calculus as well as genealogical Kripke models. Inspired by modal $\mu$-calculus, first-order modal $\xi$-calculus takes a quite similar form and extends its inductive expressivity onto a different dimension. We elaborate on several vivid examples that demonstrate this logic's profound utility, especially for depicting genealogy of concurrent computer processes. Bisimulation notion for the logic has also been thoroughly examined.
[ { "version": "v1", "created": "Thu, 14 Oct 2021 19:54:57 GMT" }, { "version": "v2", "created": "Thu, 24 Feb 2022 03:16:29 GMT" }, { "version": "v3", "created": "Mon, 21 Mar 2022 14:06:13 GMT" }, { "version": "v4", "created": "Tue, 17 Jan 2023 02:52:02 GMT" } ]
2023-01-18T00:00:00
[ [ "Wang", "Xinyu", "" ] ]
new_dataset
0.980183
2111.01946
Lijun Sun
Jiawei Wang, Lijun Sun
Robust Dynamic Bus Control: A Distributional Multi-agent Reinforcement Learning Approach
null
IEEE Transactions on Intelligent Transportation Systems (2022)
10.1109/TITS.2022.3229527
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The bus system is a critical component of sustainable urban transportation. However, the operation of a bus fleet is unstable in nature, and bus bunching has become a common phenomenon that undermines the efficiency and reliability of bus systems. Recent research has demonstrated the promising application of multi-agent reinforcement learning (MARL) to achieve efficient vehicle holding control to avoid bus bunching. However, existing studies essentially overlook the robustness issue resulting from various events, perturbations, and anomalies in a transit system, which is of utmost importance when transferring the models for real-world deployment/application. In this study, we integrate implicit quantile networks and meta-learning to develop a distributional MARL framework -- IQNC-M -- to learn continuous control. The proposed IQNC-M framework achieves efficient and reliable control decisions through better handling of various uncertainties/events in real-time transit operations. Specifically, we introduce an interpretable meta-learning module to incorporate global information into the distributional MARL framework, which is an effective solution to circumvent the credit assignment issue in the transit system. In addition, we design a specific learning procedure to train each agent within the framework to pursue a robust control policy. We develop simulation environments based on real-world bus services and passenger demand data and evaluate the proposed framework against both traditional holding control models and state-of-the-art MARL models. Our results show that the proposed IQNC-M framework can effectively handle various extreme events, such as traffic state perturbations, service interruptions, and demand surges, thus improving both the efficiency and reliability of the system.
[ { "version": "v1", "created": "Tue, 2 Nov 2021 23:41:09 GMT" } ]
2023-01-18T00:00:00
[ [ "Wang", "Jiawei", "" ], [ "Sun", "Lijun", "" ] ]
new_dataset
0.992728
2111.05481
Noah Kaufmann
Noah Kaufmann
A Diamond Structure in the Transducer Hierarchy
null
null
null
null
cs.FL math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We answer an open question in the theory of transducer degrees initially posed in [1] on the existence of a diamond structure in the transducer hierarchy. Transducer degrees are the equivalence classes formed by word transformations which can be realized by a finite state transducer, which form an order based on which words can be transformed into other words. We provide a construction which proves the existence of a diamond structure, while also introducing a new function on streams which may be useful for proving more results about the transducer hierarchy.
[ { "version": "v1", "created": "Wed, 10 Nov 2021 01:48:10 GMT" }, { "version": "v2", "created": "Tue, 2 Aug 2022 21:49:43 GMT" }, { "version": "v3", "created": "Sun, 15 Jan 2023 20:19:11 GMT" } ]
2023-01-18T00:00:00
[ [ "Kaufmann", "Noah", "" ] ]
new_dataset
0.987681
2111.12263
Bin-Bin Gao
Jiacheng Chen, Bin-Bin Gao, Zongqing Lu, Jing-Hao Xue, Chengjie Wang and Qingmin Liao
APANet: Adaptive Prototypes Alignment Network for Few-Shot Semantic Segmentation
12 pages, 7 figures, Accepted to IEEE Trans. on Multimedia. arXiv admin note: substantial text overlap with arXiv:2104.09216
null
10.1109/TMM.2022.3174405
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Few-shot semantic segmentation aims to segment novel-class objects in a given query image with only a few labeled support images. Most advanced solutions exploit a metric learning framework that performs segmentation through matching each query feature to a learned class-specific prototype. However, this framework suffers from biased classification due to incomplete feature comparisons. To address this issue, we present an adaptive prototype representation by introducing class-specific and class-agnostic prototypes, and thus construct complete sample pairs for learning semantic alignment with query features. This complementary feature learning manner effectively enriches feature comparison and helps yield an unbiased segmentation model in the few-shot setting. It is implemented with a two-branch end-to-end network (i.e., a class-specific branch and a class-agnostic branch), which generates prototypes and then combines query features to perform comparisons. In addition, the proposed class-agnostic branch is simple yet effective. In practice, it can adaptively generate multiple class-agnostic prototypes for query images and learn feature alignment in a self-contrastive manner. Extensive experiments on PASCAL-5$^i$ and COCO-20$^i$ demonstrate the superiority of our method. At no expense to inference efficiency, our model achieves state-of-the-art results in both 1-shot and 5-shot settings for semantic segmentation.
[ { "version": "v1", "created": "Wed, 24 Nov 2021 04:38:37 GMT" }, { "version": "v2", "created": "Tue, 17 Jan 2023 09:24:37 GMT" } ]
2023-01-18T00:00:00
[ [ "Chen", "Jiacheng", "" ], [ "Gao", "Bin-Bin", "" ], [ "Lu", "Zongqing", "" ], [ "Xue", "Jing-Hao", "" ], [ "Wang", "Chengjie", "" ], [ "Liao", "Qingmin", "" ] ]
new_dataset
0.993113
2202.05917
Delaram Kahrobaei
Delaram Kahrobaei, Ram\'on Flores, Marialaura Noce
Group-based Cryptography in the Quantum Era
To appear in the Notices of the American Mathematical Society
null
null
null
cs.CR math.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this expository article we present an overview of the current state of the art in post-quantum group-based cryptography. We describe several families of groups that have been proposed as platforms, with special emphasis on polycyclic groups and graph groups, dealing in particular with their algorithmic properties and cryptographic applications. We then describe some applications of combinatorial algebra in fully homomorphic encryption. In the end we discuss several open problems in this direction.
[ { "version": "v1", "created": "Fri, 11 Feb 2022 22:01:45 GMT" }, { "version": "v2", "created": "Sat, 19 Feb 2022 17:22:40 GMT" }, { "version": "v3", "created": "Thu, 24 Feb 2022 15:01:28 GMT" }, { "version": "v4", "created": "Tue, 17 Jan 2023 11:52:12 GMT" } ]
2023-01-18T00:00:00
[ [ "Kahrobaei", "Delaram", "" ], [ "Flores", "Ramón", "" ], [ "Noce", "Marialaura", "" ] ]
new_dataset
0.986647
2204.01968
Soumik Mohian
Soumik Mohian, Christoph Csallner
PSDoodle: Searching for App Screens via Interactive Sketching
arXiv admin note: text overlap with arXiv:2204.01956
null
10.1145/3524613.3527807
null
cs.CV cs.SE
http://creativecommons.org/licenses/by/4.0/
Keyword-based mobile screen search does not account for screen content and fails to operate as a universal tool for all levels of users. Visual searching (e.g., by image or sketch) is structured and easy to adopt. Current visual search approaches rely on a complete screen drawing and are therefore slow and tedious. PSDoodle employs a deep neural network to recognize partial drawings of screen elements instantly on a digital drawing interface and shows results in real time. PSDoodle is the first tool that utilizes partial sketches and searches for screens in an interactive, iterative way. PSDoodle supports different drawing styles and retrieves search results that are relevant to the user's sketch query. A short video demonstration is available online at: https://youtu.be/3cVLHFm5pY4
[ { "version": "v1", "created": "Tue, 5 Apr 2022 03:46:48 GMT" }, { "version": "v2", "created": "Wed, 6 Apr 2022 18:53:24 GMT" } ]
2023-01-18T00:00:00
[ [ "Mohian", "Soumik", "" ], [ "Csallner", "Christoph", "" ] ]
new_dataset
0.956047
2204.12384
Finn Voichick
Finn Voichick, Liyi Li, Robert Rand, Michael Hicks
Qunity: A Unified Language for Quantum and Classical Computing (Extended Version)
76 pages, 37 figures. To appear at POPL 2023, previous version presented at QPL 2022. Expanded with additional background information and a characterization of the classical sublanguage
null
10.1145/3571225
null
cs.PL cs.LO quant-ph
http://creativecommons.org/licenses/by/4.0/
We introduce Qunity, a new quantum programming language designed to treat quantum computing as a natural generalization of classical computing. Qunity presents a unified syntax where familiar programming constructs can have both quantum and classical effects. For example, one can use sum types to implement the direct sum of linear operators, exception-handling syntax to implement projective measurements, and aliasing to induce entanglement. Further, Qunity takes advantage of the overlooked BQP subroutine theorem, allowing one to construct reversible subroutines from irreversible quantum algorithms through the uncomputation of "garbage" outputs. Unlike existing languages that enable quantum aspects with separate add-ons (like a classical language with quantum gates bolted on), Qunity provides a unified syntax and a novel denotational semantics that guarantees that programs are quantum mechanically valid. We present Qunity's syntax, type system, and denotational semantics, showing how it can cleanly express several quantum algorithms. We also detail how Qunity can be compiled into a low-level qubit circuit language like OpenQASM, proving the realizability of our design.
[ { "version": "v1", "created": "Tue, 26 Apr 2022 15:34:22 GMT" }, { "version": "v2", "created": "Wed, 20 Jul 2022 12:31:06 GMT" }, { "version": "v3", "created": "Tue, 15 Nov 2022 02:44:37 GMT" } ]
2023-01-18T00:00:00
[ [ "Voichick", "Finn", "" ], [ "Li", "Liyi", "" ], [ "Rand", "Robert", "" ], [ "Hicks", "Michael", "" ] ]
new_dataset
0.99939
2206.02153
Amashi Niwarthana
Arulmolivarman Thieshanthan, Amashi Niwarthana, Pamuditha Somarathne, Tharindu Wickremasinghe, Ranga Rodrigo
HPGNN: Using Hierarchical Graph Neural Networks for Outdoor Point Cloud Processing
Accepted for ICPR 2022
null
10.1109/ICPR56361.2022.9956238
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inspired by recent improvements in point cloud processing for autonomous navigation, we focus on using hierarchical graph neural networks for processing and feature learning over large-scale outdoor LiDAR point clouds. We observe that existing GNN-based methods fail to overcome the challenges of scale and irregularity of points in outdoor datasets. Addressing the need to preserve structural details while learning efficiently over a larger volume, we propose the Hierarchical Point Graph Neural Network (HPGNN). It learns node features at various levels of graph coarseness to extract information. This enables learning over a large point cloud while retaining fine details that existing point-level graph networks struggle to achieve. Connections between multiple levels enable a point to learn features at multiple scales in a few iterations. We design HPGNN as a purely GNN-based approach, so that it offers modular expandability as seen with other point-based and graph network baselines. To illustrate the improved processing capability, we compare previous point-based and GNN models for semantic segmentation with our HPGNN, achieving a significant improvement for GNNs (+36.7 mIoU) on the SemanticKITTI dataset.
[ { "version": "v1", "created": "Sun, 5 Jun 2022 11:18:09 GMT" } ]
2023-01-18T00:00:00
[ [ "Thieshanthan", "Arulmolivarman", "" ], [ "Niwarthana", "Amashi", "" ], [ "Somarathne", "Pamuditha", "" ], [ "Wickremasinghe", "Tharindu", "" ], [ "Rodrigo", "Ranga", "" ] ]
new_dataset
0.995965
2206.07038
Yanze Wu
Yanze Wu, Xintao Wang, Gen Li, Ying Shan
AnimeSR: Learning Real-World Super-Resolution Models for Animation Videos
NeurIPS 2022. Codes and models are available at https://github.com/TencentARC/AnimeSR
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the problem of real-world video super-resolution (VSR) for animation videos, and reveals three key improvements for practical animation VSR. First, recent real-world super-resolution approaches typically rely on degradation simulation using basic operators without any learning capability, such as blur, noise, and compression. In this work, we propose to learn such basic operators from real low-quality animation videos, and incorporate the learned ones into the degradation generation pipeline. Such neural-network-based basic operators could help to better capture the distribution of real degradations. Second, a large-scale high-quality animation video dataset, AVC, is built to facilitate comprehensive training and evaluations for animation VSR. Third, we further investigate an efficient multi-scale network structure. It takes advantage of the efficiency of unidirectional recurrent networks and the effectiveness of sliding-window-based methods. Thanks to the above delicate designs, our method, AnimeSR, is capable of restoring real-world low-quality animation videos effectively and efficiently, achieving superior performance to previous state-of-the-art methods. Codes and models are available at https://github.com/TencentARC/AnimeSR.
[ { "version": "v1", "created": "Tue, 14 Jun 2022 17:57:11 GMT" }, { "version": "v2", "created": "Tue, 21 Jun 2022 11:20:53 GMT" }, { "version": "v3", "created": "Tue, 17 Jan 2023 11:08:41 GMT" } ]
2023-01-18T00:00:00
[ [ "Wu", "Yanze", "" ], [ "Wang", "Xintao", "" ], [ "Li", "Gen", "" ], [ "Shan", "Ying", "" ] ]
new_dataset
0.991098
2206.10881
Yuan Li
Jinjie Gao, Haibin Kan, Yuan Li, Qichun Wang
The Covering Radius of the Third-Order Reed-Muller Code RM(3,7) is 20
null
null
null
null
cs.IT cs.DM math.IT
http://creativecommons.org/licenses/by/4.0/
We prove that the covering radius of the third-order Reed-Muller code RM(3,7) is 20, which was previously known to be between 20 and 23 (inclusive). The covering radius of RM(3,7) is the maximum third-order nonlinearity among all 7-variable Boolean functions. It was known that there exist 7-variable Boolean functions with third-order nonlinearity 20. We prove that the third-order nonlinearity cannot achieve 21. According to the classification of the quotient space RM(6,6)/RM(3,6), we classify all 7-variable Boolean functions into 66 types. Firstly, we prove that 62 types (among 66) cannot have third-order nonlinearity 21; secondly, we prove that any function of the remaining 4 types can be transformed into a type (6, 10) function if its third-order nonlinearity is 21; finally, we transform type (6, 10) functions into a specific form, and prove that the functions in that form cannot achieve third-order nonlinearity 21 (with the assistance of computers). Along the way, we prove that the affine transformation group over any finite field can be generated by two elements.
[ { "version": "v1", "created": "Wed, 22 Jun 2022 07:10:37 GMT" }, { "version": "v2", "created": "Sat, 27 Aug 2022 01:28:16 GMT" }, { "version": "v3", "created": "Sun, 15 Jan 2023 01:44:01 GMT" } ]
2023-01-18T00:00:00
[ [ "Gao", "Jinjie", "" ], [ "Kan", "Haibin", "" ], [ "Li", "Yuan", "" ], [ "Wang", "Qichun", "" ] ]
new_dataset
0.968456
2206.15398
Yanqin Jiang
Yanqin Jiang, Li Zhang, Zhenwei Miao, Xiatian Zhu, Jin Gao, Weiming Hu, Yu-Gang Jiang
PolarFormer: Multi-camera 3D Object Detection with Polar Transformer
Accepted to AAAI2023
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
3D object detection in autonomous driving aims to reason "what" and "where" the objects of interest are present in a 3D world. Following the conventional wisdom of previous 2D object detection, existing methods often adopt the canonical Cartesian coordinate system with perpendicular axes. However, we conjecture that this does not fit the nature of the ego car's perspective, as each onboard camera perceives the world in the shape of a wedge intrinsic to the imaging geometry, with a radial (non-perpendicular) axis. Hence, in this paper we advocate the exploitation of the Polar coordinate system and propose a new Polar Transformer (PolarFormer) for more accurate 3D object detection in the bird's-eye-view (BEV), taking as input only multi-camera 2D images. Specifically, we design a cross-attention based Polar detection head without restriction on the shape of the input structure to deal with irregular Polar grids. For tackling the unconstrained object scale variations along Polar's distance dimension, we further introduce a multi-scale Polar representation learning strategy. As a result, our model can make the best use of the Polar representation rasterized via attending to the corresponding image observation in a sequence-to-sequence fashion, subject to the geometric constraints. Thorough experiments on the nuScenes dataset demonstrate that our PolarFormer significantly outperforms state-of-the-art 3D object detection alternatives.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 16:32:48 GMT" }, { "version": "v2", "created": "Fri, 1 Jul 2022 09:27:56 GMT" }, { "version": "v3", "created": "Sun, 10 Jul 2022 11:49:53 GMT" }, { "version": "v4", "created": "Tue, 12 Jul 2022 08:18:01 GMT" }, { "version": "v5", "created": "Fri, 23 Dec 2022 08:45:37 GMT" }, { "version": "v6", "created": "Mon, 16 Jan 2023 02:24:33 GMT" } ]
2023-01-18T00:00:00
[ [ "Jiang", "Yanqin", "" ], [ "Zhang", "Li", "" ], [ "Miao", "Zhenwei", "" ], [ "Zhu", "Xiatian", "" ], [ "Gao", "Jin", "" ], [ "Hu", "Weiming", "" ], [ "Jiang", "Yu-Gang", "" ] ]
new_dataset
0.996193
2207.08051
Samar Khanna
Yezhen Cong, Samar Khanna, Chenlin Meng, Patrick Liu, Erik Rozi, Yutong He, Marshall Burke, David B. Lobell, Stefano Ermon
SatMAE: Pre-training Transformers for Temporal and Multi-Spectral Satellite Imagery
Published at NeurIPS 2022. The first two listed names contributed equally to this project
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Unsupervised pre-training methods for large vision models have been shown to enhance performance on downstream supervised tasks. Developing similar techniques for satellite imagery presents significant opportunities, as unlabelled data is plentiful and the inherent temporal and multi-spectral structure provides avenues to further improve existing pre-training strategies. In this paper, we present SatMAE, a pre-training framework for temporal or multi-spectral satellite imagery based on Masked Autoencoder (MAE). To leverage temporal information, we include a temporal embedding along with independently masking image patches across time. In addition, we demonstrate that encoding multi-spectral data as groups of bands with distinct spectral positional encodings is beneficial. Our approach yields strong improvements over previous state-of-the-art techniques, both in terms of supervised learning performance on benchmark datasets (up to $\uparrow$ 7%), and transfer learning performance on downstream remote sensing tasks, including land cover classification (up to $\uparrow$ 14%) and semantic segmentation. Code and data are available on the project website: https://sustainlab-group.github.io/SatMAE/
[ { "version": "v1", "created": "Sun, 17 Jul 2022 01:35:29 GMT" }, { "version": "v2", "created": "Thu, 20 Oct 2022 01:04:57 GMT" }, { "version": "v3", "created": "Sun, 15 Jan 2023 19:27:57 GMT" } ]
2023-01-18T00:00:00
[ [ "Cong", "Yezhen", "" ], [ "Khanna", "Samar", "" ], [ "Meng", "Chenlin", "" ], [ "Liu", "Patrick", "" ], [ "Rozi", "Erik", "" ], [ "He", "Yutong", "" ], [ "Burke", "Marshall", "" ], [ "Lobell", "David B.", "" ], [ "Ermon", "Stefano", "" ] ]
new_dataset
0.997857
2208.00554
Noah Kaufmann
Noah Kaufmann
A Diamond Structure in the Transducer Hierarchy
Incorrectly uploaded
null
null
null
cs.FL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We answer an open question in the theory of transducer degrees initially posed in [1] on the existence of a diamond structure in the transducer hierarchy. Transducer degrees are the equivalence classes formed by word transformations which can be realized by a finite state transducer, which form an order based on which words can be transformed into other words. We provide a construction which proves the existence of a diamond structure, while also introducing a new function on streams which may be useful for proving more results about the transducer hierarchy.
[ { "version": "v1", "created": "Mon, 1 Aug 2022 01:05:01 GMT" }, { "version": "v2", "created": "Wed, 3 Aug 2022 15:23:55 GMT" }, { "version": "v3", "created": "Thu, 4 Aug 2022 14:38:05 GMT" }, { "version": "v4", "created": "Sat, 10 Sep 2022 02:25:24 GMT" }, { "version": "v5", "created": "Sun, 15 Jan 2023 20:14:53 GMT" } ]
2023-01-18T00:00:00
[ [ "Kaufmann", "Noah", "" ] ]
new_dataset
0.987681
2208.00751
Zhe Zhu
Zhe Zhu, Liangliang Nan, Haoran Xie, Honghua Chen, Mingqiang Wei, Jun Wang, Jing Qin
CSDN: Cross-modal Shape-transfer Dual-refinement Network for Point Cloud Completion
null
null
10.1109/TVCG.2023.3236061
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
How would you repair a physical object with some missing parts? You may imagine its original shape from previously captured images, recover its overall (global) but coarse shape first, and then refine its local details. We are motivated to imitate this physical repair procedure to address point cloud completion. To this end, we propose a cross-modal shape-transfer dual-refinement network (termed CSDN), a coarse-to-fine paradigm with images of full-cycle participation, for quality point cloud completion. CSDN mainly consists of "shape fusion" and "dual-refinement" modules to tackle the cross-modal challenge. The first module transfers the intrinsic shape characteristics from single images to guide the geometry generation of the missing regions of point clouds, in which we propose IPAdaIN to embed the global features of both the image and the partial point cloud into completion. The second module refines the coarse output by adjusting the positions of the generated points, where the local refinement unit exploits the geometric relation between the novel and the input points by graph convolution, and the global constraint unit utilizes the input image to fine-tune the generated offset. Different from most existing approaches, CSDN not only explores the complementary information from images but also effectively exploits cross-modal data in the whole coarse-to-fine completion procedure. Experimental results indicate that CSDN performs favorably against ten competitors on the cross-modal benchmark.
[ { "version": "v1", "created": "Mon, 1 Aug 2022 11:20:56 GMT" }, { "version": "v2", "created": "Mon, 12 Dec 2022 03:13:40 GMT" } ]
2023-01-18T00:00:00
[ [ "Zhu", "Zhe", "" ], [ "Nan", "Liangliang", "" ], [ "Xie", "Haoran", "" ], [ "Chen", "Honghua", "" ], [ "Wei", "Mingqiang", "" ], [ "Wang", "Jun", "" ], [ "Qin", "Jing", "" ] ]
new_dataset
0.998674
2208.05119
Tongzhou Shen
Atia Hamidizadeh, Tony Shen, Martin Ester
Semi-Supervised Junction Tree Variational Autoencoder for Molecular Property Prediction
null
null
null
null
cs.LG physics.chem-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Molecular representation learning is essential to solving many drug discovery and computational chemistry problems. It is a challenging problem due to the complex structure of molecules and the vast chemical space. Graph representations of molecules are more expressive than traditional representations, such as molecular fingerprints, and can therefore improve the performance of machine learning models. We propose SeMole, a method that augments the Junction Tree Variational Autoencoder, a state-of-the-art generative model for molecular graphs, with semi-supervised learning. SeMole aims to improve the accuracy of molecular property prediction when labeled data is limited by exploiting unlabeled data. We enforce that the model generates molecular graphs conditioned on target properties by incorporating the property into the latent representation. We propose an additional pre-training phase to improve the training process for our semi-supervised generative model. We perform an experimental evaluation on the ZINC dataset using three different molecular properties and demonstrate the benefits of semi-supervision.
[ { "version": "v1", "created": "Wed, 10 Aug 2022 03:06:58 GMT" }, { "version": "v2", "created": "Mon, 15 Aug 2022 19:13:45 GMT" }, { "version": "v3", "created": "Tue, 23 Aug 2022 21:20:54 GMT" }, { "version": "v4", "created": "Thu, 1 Sep 2022 16:16:06 GMT" }, { "version": "v5", "created": "Sun, 15 Jan 2023 02:07:56 GMT" } ]
2023-01-18T00:00:00
[ [ "Hamidizadeh", "Atia", "" ], [ "Shen", "Tony", "" ], [ "Ester", "Martin", "" ] ]
new_dataset
0.984459
2209.06909
Sebastian Wild
William Cawley Gelling and Markus E. Nebel and Benjamin Smith and Sebastian Wild
Multiway Powersort
17 pages; accompanying source code at https://github.com/sebawild/powersort; v2 adds new figure and text changes. v2 is identical to the ALENEX 2023 version
ALENEX 2023
10.1137/1.9781611977561.ch16
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a stable mergesort variant, Multiway Powersort, that exploits existing runs and finds nearly-optimal merging orders for k-way merges with negligible overhead. This builds on Powersort (Munro & Wild, ESA2018), which has recently replaced Timsort's suboptimal merge policy in the CPython reference implementation of Python, as well as in PyPy and further libraries. Multiway Powersort reduces the number of memory transfers, which increasingly determine the cost of internal sorting (as observed with Multiway Quicksort (Kushagra et al., ALENEX 2014; Aum\"uller & Dietzfelbinger, TALG 2016; Wild, PhD thesis 2016) and the inclusion of Dual-Pivot Quicksort in the Java runtime library). We demonstrate that our 4-way Powersort implementation can achieve substantial speedups over standard (2-way) Powersort and other stable sorting methods without compromising the optimally run-adaptive performance of Powersort.
[ { "version": "v1", "created": "Wed, 14 Sep 2022 20:06:30 GMT" }, { "version": "v2", "created": "Tue, 17 Jan 2023 01:26:12 GMT" } ]
2023-01-18T00:00:00
[ [ "Gelling", "William Cawley", "" ], [ "Nebel", "Markus E.", "" ], [ "Smith", "Benjamin", "" ], [ "Wild", "Sebastian", "" ] ]
new_dataset
0.995292
2209.07989
Erkang Cheng
Yifeng Bai, Zhirong Chen, Zhangjie Fu, Lang Peng, Pengpeng Liang, Erkang Cheng
CurveFormer: 3D Lane Detection by Curve Propagation with Curve Queries and Attention
Accepted at the IEEE Conference on Robotics and Automation, ICRA 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D lane detection is an integral part of autonomous driving systems. Previous CNN and Transformer-based methods usually first generate a bird's-eye-view (BEV) feature map from the front view image, and then use a sub-network with BEV feature map as input to predict 3D lanes. Such approaches require an explicit view transformation between BEV and front view, which itself is still a challenging problem. In this paper, we propose CurveFormer, a single-stage Transformer-based method that directly calculates 3D lane parameters and can circumvent the difficult view transformation step. Specifically, we formulate 3D lane detection as a curve propagation problem by using curve queries. A 3D lane query is represented by a dynamic and ordered anchor point set. In this way, queries with curve representation in Transformer decoder iteratively refine the 3D lane detection results. Moreover, a curve cross-attention module is introduced to compute the similarities between curve queries and image features. Additionally, a context sampling module that can capture more relative image features of a curve query is provided to further boost the 3D lane detection performance. We evaluate our method for 3D lane detection on both synthetic and real-world datasets, and the experimental results show that our method achieves promising performance compared with the state-of-the-art approaches. The effectiveness of each component is validated via ablation studies as well.
[ { "version": "v1", "created": "Fri, 16 Sep 2022 14:54:57 GMT" }, { "version": "v2", "created": "Tue, 17 Jan 2023 15:10:09 GMT" } ]
2023-01-18T00:00:00
[ [ "Bai", "Yifeng", "" ], [ "Chen", "Zhirong", "" ], [ "Fu", "Zhangjie", "" ], [ "Peng", "Lang", "" ], [ "Liang", "Pengpeng", "" ], [ "Cheng", "Erkang", "" ] ]
new_dataset
0.954269
2210.12375
Marten Lienen
Marten Lienen and Stephan G\"unnemann
torchode: A Parallel ODE Solver for PyTorch
Accepted at The Symbiosis of Deep Learning and Differential Equations Workshop, NeurIPS, 2022
null
null
null
cs.LG cs.NA math.NA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce an ODE solver for the PyTorch ecosystem that can solve multiple ODEs in parallel independently from each other while achieving significant performance gains. Our implementation tracks each ODE's progress separately and is carefully optimized for GPUs and compatibility with PyTorch's JIT compiler. Its design lets researchers easily augment any aspect of the solver and collect and analyze internal solver statistics. In our experiments, our implementation is up to 4.3 times faster per step than other ODE solvers and it is robust against within-batch interactions that lead other solvers to take up to 4 times as many steps. Code available at https://github.com/martenlienen/torchode
[ { "version": "v1", "created": "Sat, 22 Oct 2022 07:08:17 GMT" }, { "version": "v2", "created": "Tue, 17 Jan 2023 09:02:47 GMT" } ]
2023-01-18T00:00:00
[ [ "Lienen", "Marten", "" ], [ "Günnemann", "Stephan", "" ] ]
new_dataset
0.976232
2210.14791
Simar Kareer
Simar Kareer, Naoki Yokoyama, Dhruv Batra, Sehoon Ha, Joanne Truong
ViNL: Visual Navigation and Locomotion Over Obstacles
null
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present Visual Navigation and Locomotion over obstacles (ViNL), which enables a quadrupedal robot to navigate unseen apartments while stepping over small obstacles that lie in its path (e.g., shoes, toys, cables), similar to how humans and pets lift their feet over objects as they walk. ViNL consists of: (1) a visual navigation policy that outputs linear and angular velocity commands that guides the robot to a goal coordinate in unfamiliar indoor environments; and (2) a visual locomotion policy that controls the robot's joints to avoid stepping on obstacles while following provided velocity commands. Both the policies are entirely "model-free", i.e. sensors-to-actions neural networks trained end-to-end. The two are trained independently in two entirely different simulators and then seamlessly co-deployed by feeding the velocity commands from the navigator to the locomotor, entirely "zero-shot" (without any co-training). While prior works have developed learning methods for visual navigation or visual locomotion, to the best of our knowledge, this is the first fully learned approach that leverages vision to accomplish both (1) intelligent navigation in new environments, and (2) intelligent visual locomotion that aims to traverse cluttered environments without disrupting obstacles. On the task of navigation to distant goals in unknown environments, ViNL using just egocentric vision significantly outperforms prior work on robust locomotion using privileged terrain maps (+32.8% success and -4.42 collisions per meter). Additionally, we ablate our locomotion policy to show that each aspect of our approach helps reduce obstacle collisions. Videos and code at http://www.joannetruong.com/projects/vinl.html
[ { "version": "v1", "created": "Wed, 26 Oct 2022 15:38:28 GMT" }, { "version": "v2", "created": "Sat, 14 Jan 2023 20:19:59 GMT" } ]
2023-01-18T00:00:00
[ [ "Kareer", "Simar", "" ], [ "Yokoyama", "Naoki", "" ], [ "Batra", "Dhruv", "" ], [ "Ha", "Sehoon", "" ], [ "Truong", "Joanne", "" ] ]
new_dataset
0.999707
2211.00818
Yihong Dong
Yihong Dong, Xue Jiang, Yuchen Liu, Ge Li, Zhi Jin
CodePAD: Sequence-based Code Generation with Pushdown Automaton
Accepted to ISSTA 2023 (Technical Papers)
null
null
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
In the process of code generation, it is essential to guarantee that the generated code satisfies the grammar constraints of the programming language (PL). However, neglecting grammar constraints is a fatal drawback of commonly used sequence-based code generation. In this paper, we devise a pushdown automaton (PDA)-based methodology to address this problem, exploiting the principle that a PL is a subset of the language recognizable by a PDA and that code accepted by the PDA is grammatical. Specifically, we construct a PDA module and design an algorithm to constrain the generation of sequence-based models to ensure grammatical correctness. Guided by this methodology, we further propose CodePAD, a sequence-based code generation framework equipped with a PDA module, to integrate the deduction of the PDA into deep learning. Additionally, this framework can leverage states of the PDA deduction (including state representation, state prediction task, and joint prediction with state) to assist models in learning PDA deduction. To comprehensively evaluate CodePAD, we construct a PDA for Python and conduct extensive experiments on four public benchmark datasets. CodePAD can leverage existing sequence-based models, and we show that it achieves a 100\% grammatical correctness percentage on these benchmark datasets. It thus achieves relative improvements of 17\% CodeBLEU on CONALA, 8\% EM on DJANGO, and 15\% CodeBLEU on JUICE-10K compared to base models. In addition, our method significantly enhances pre-trained models, e.g., improving the CodeBLEU of CodeGen-350M from 3.21 to 21.54 on MBPP in the zero-shot setting.
[ { "version": "v1", "created": "Wed, 2 Nov 2022 01:40:18 GMT" }, { "version": "v2", "created": "Mon, 14 Nov 2022 16:53:15 GMT" }, { "version": "v3", "created": "Mon, 9 Jan 2023 06:14:56 GMT" }, { "version": "v4", "created": "Tue, 17 Jan 2023 03:14:35 GMT" } ]
2023-01-18T00:00:00
[ [ "Dong", "Yihong", "" ], [ "Jiang", "Xue", "" ], [ "Liu", "Yuchen", "" ], [ "Li", "Ge", "" ], [ "Jin", "Zhi", "" ] ]
new_dataset
0.998561
2211.14238
Huaxiu Yao
Huaxiu Yao, Caroline Choi, Bochuan Cao, Yoonho Lee, Pang Wei Koh, Chelsea Finn
Wild-Time: A Benchmark of in-the-Wild Distribution Shift over Time
Accepted by NeurIPS 2022 Track on Datasets and Benchmarks; v2: fixed some issues in FMoW and change the name from "FMoW" to "FMoW-Time"
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Distribution shift occurs when the test distribution differs from the training distribution, and it can considerably degrade performance of machine learning models deployed in the real world. Temporal shifts -- distribution shifts arising from the passage of time -- often occur gradually and have the additional structure of timestamp metadata. By leveraging timestamp metadata, models can potentially learn from trends in past distribution shifts and extrapolate into the future. While recent works have studied distribution shifts, temporal shifts remain underexplored. To address this gap, we curate Wild-Time, a benchmark of 5 datasets that reflect temporal distribution shifts arising in a variety of real-world applications, including patient prognosis and news classification. On these datasets, we systematically benchmark 13 prior approaches, including methods in domain generalization, continual learning, self-supervised learning, and ensemble learning. We use two evaluation strategies: evaluation with a fixed time split (Eval-Fix) and evaluation with a data stream (Eval-Stream). Eval-Fix, our primary evaluation strategy, aims to provide a simple evaluation protocol, while Eval-Stream is more realistic for certain real-world applications. Under both evaluation strategies, we observe an average performance drop of 20% from in-distribution to out-of-distribution data. Existing methods are unable to close this gap. Code is available at https://wild-time.github.io/.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 17:07:53 GMT" }, { "version": "v2", "created": "Mon, 16 Jan 2023 03:13:33 GMT" } ]
2023-01-18T00:00:00
[ [ "Yao", "Huaxiu", "" ], [ "Choi", "Caroline", "" ], [ "Cao", "Bochuan", "" ], [ "Lee", "Yoonho", "" ], [ "Koh", "Pang Wei", "" ], [ "Finn", "Chelsea", "" ] ]
new_dataset
0.994693
2301.05746
Jack Urbanek
Alexander Gurung, Mojtaba Komeili, Arthur Szlam, Jason Weston, and Jack Urbanek
Infusing Commonsense World Models with Graph Knowledge
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While language models have become more capable of producing compelling language, we find there are still gaps in maintaining consistency, especially when describing events in a dynamically changing world. We study the setting of generating narratives in an open world text adventure game, where a graph representation of the underlying game state can be used to train models that consume and output both grounded graph representations and natural language descriptions and actions. We build a large set of tasks by combining crowdsourced and simulated gameplays with a novel dataset of complex actions in order to construct such models. We find it is possible to improve the consistency of action narration models by training on graph contexts and targets, even if graphs are not present at test time. This is shown both in automatic metrics and human evaluations. We plan to release our code, the new set of tasks, and best performing models.
[ { "version": "v1", "created": "Fri, 13 Jan 2023 19:58:27 GMT" } ]
2023-01-18T00:00:00
[ [ "Gurung", "Alexander", "" ], [ "Komeili", "Mojtaba", "" ], [ "Szlam", "Arthur", "" ], [ "Weston", "Jason", "" ], [ "Urbanek", "Jack", "" ] ]
new_dataset
0.999085
2301.05768
Morteza Rezanejad
Maciej Sypetkowski, Morteza Rezanejad, Saber Saberian, Oren Kraus, John Urbanik, James Taylor, Ben Mabey, Mason Victors, Jason Yosinski, Alborz Rezazadeh Sereshkeh, Imran Haque, Berton Earnshaw
RxRx1: A Dataset for Evaluating Experimental Batch Correction Methods
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
High-throughput screening techniques are commonly used to obtain large quantities of data in many fields of biology. It is well known that artifacts arising from variability in the technical execution of different experimental batches within such screens confound these observations and can lead to invalid biological conclusions. It is therefore necessary to account for these batch effects when analyzing outcomes. In this paper we describe RxRx1, a biological dataset designed specifically for the systematic study of batch effect correction methods. The dataset consists of 125,510 high-resolution fluorescence microscopy images of human cells under 1,138 genetic perturbations in 51 experimental batches across 4 cell types. Visual inspection of the images alone clearly demonstrates significant batch effects. We propose a classification task designed to evaluate the effectiveness of experimental batch correction methods on these images and examine the performance of a number of correction methods on this task. Our goal in releasing RxRx1 is to encourage the development of effective experimental batch correction methods that generalize well to unseen experimental batches. The dataset can be downloaded at https://rxrx.ai.
[ { "version": "v1", "created": "Fri, 13 Jan 2023 21:49:12 GMT" } ]
2023-01-18T00:00:00
[ [ "Sypetkowski", "Maciej", "" ], [ "Rezanejad", "Morteza", "" ], [ "Saberian", "Saber", "" ], [ "Kraus", "Oren", "" ], [ "Urbanik", "John", "" ], [ "Taylor", "James", "" ], [ "Mabey", "Ben", "" ], [ "Victors", "Mason", "" ], [ "Yosinski", "Jason", "" ], [ "Sereshkeh", "Alborz Rezazadeh", "" ], [ "Haque", "Imran", "" ], [ "Earnshaw", "Berton", "" ] ]
new_dataset
0.999724
2301.05776
Iurii Medvedev
Iurii Medvedev and Farhad Shadmand and Nuno Gon\c{c}alves
Young Labeled Faces in the Wild (YLFW): A Dataset for Children Faces Recognition
11 pages, 3 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Face recognition has achieved outstanding performance in the last decade with the development of deep learning techniques. Nowadays, the challenges in face recognition are related to specific scenarios, for instance, the performance under diverse image quality, the robustness to aging and to edge cases of person age (children and elders), and the distinguishing of related identities. In this set of problems, recognizing children's faces is one of the most sensitive and important. One of the reasons for this problem is the bias towards adults in existing face datasets. In this work, we present a benchmark dataset for children's face recognition, which is compiled similarly to the famous face recognition benchmarks LFW, CALFW, CPLFW, XQLFW and AgeDB. We also present a development dataset (separated into train and test parts) for adapting face recognition models to face images of children. The proposed data is balanced for African, Asian, Caucasian, and Indian races. To the best of our knowledge, this is the first standardized data tool set for benchmarking and the largest collection for the development of children's face recognition. Several face recognition experiments are presented to demonstrate the performance of the proposed data tool set.
[ { "version": "v1", "created": "Fri, 13 Jan 2023 22:19:44 GMT" } ]
2023-01-18T00:00:00
[ [ "Medvedev", "Iurii", "" ], [ "Shadmand", "Farhad", "" ], [ "Gonçalves", "Nuno", "" ] ]
new_dataset
0.971624
2301.05856
Mingjie Xie
Jian Guan, Mingjie Xie, Youtian Lin, Guangjun He, Pengming Feng
EARL: An Elliptical Distribution aided Adaptive Rotation Label Assignment for Oriented Object Detection in Remote Sensing Images
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Label assignment is often employed in recent convolutional neural network (CNN) based detectors to determine positive or negative samples during the training process. However, we note that current label assignment strategies barely give thorough consideration to the characteristics of targets in remote sensing images, such as large variations in orientation, aspect ratio and scale, which leads to insufficient sampling. In this paper, an Elliptical Distribution aided Adaptive Rotation Label Assignment (EARL) is proposed to select positive samples of higher quality in oriented detectors, and it yields better performance. Concretely, to avoid inadequate sampling of targets with extreme scales, an adaptive scale sampling (ADS) strategy is proposed to dynamically select samples on different feature levels according to the scales of targets. To enhance ADS, positive samples are selected following a dynamic elliptical distribution (DED), which can further exploit the orientation and shape properties of targets. Moreover, a spatial distance weighting (SDW) module is introduced to mitigate the influence of low-quality samples on detection performance. Extensive experiments on popular remote sensing datasets, such as DOTA and HRSC2016, demonstrate the effectiveness and superiority of our proposed EARL, where, without bells and whistles, it achieves 72.87 mAP on the DOTA dataset when integrated with a simple structure, outperforming current state-of-the-art anchor-free detectors and providing performance comparable to anchor-based methods. The source code will be available at https://github.com/Justlovesmile/EARL
[ { "version": "v1", "created": "Sat, 14 Jan 2023 08:32:16 GMT" } ]
2023-01-18T00:00:00
[ [ "Guan", "Jian", "" ], [ "Xie", "Mingjie", "" ], [ "Lin", "Youtian", "" ], [ "He", "Guangjun", "" ], [ "Feng", "Pengming", "" ] ]
new_dataset
0.993991
2301.05945
Sarad Venugopalan
Sarad Venugopalan and Heiko Aydt
Dance of the DAOs: Building Data Assets as a Use Case
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-sa/4.0/
Decentralised Autonomous Organisations (DAOs) have recently piqued the interest of participants from diverse backgrounds, including business owners, engineers, and individual and institutional investors. In part, the promised autonomy (less rigid structure and more voice) in decision making, along with ease of market access, has resulted in its participants pouring in their time and economic resources. In a DAO, governance is typically enacted by posting proposals and collectively voting on them. The winning proposals are then implemented. However, governance alone may be insufficient when its participants' economic incentives are misaligned. Governance and tokenomics need to work in tandem to ensure business stability. We present a case study on an example building data asset from the construction industry and present its tokenomics. We show how it works, both as a caretaker and as a strategic DAO, to illustrate its effects on governance and DAO stability. The case study serves as an example for participants to decide whether their DAO tokenomics are aligned with participation incentives. Finally, we propose the DAO tension quadrilateral to study DAO stability and build a tool to measure agreement among its participants.
[ { "version": "v1", "created": "Sat, 14 Jan 2023 16:24:40 GMT" } ]
2023-01-18T00:00:00
[ [ "Venugopalan", "Sarad", "" ], [ "Aydt", "Heiko", "" ] ]
new_dataset
0.995735
2301.05965
George Chernishev
George Chernishev, Michael Polyntsov, Anton Chizhov, Kirill Stupakov, Ilya Shchuckin, Alexander Smirnov, Maxim Strutovsky, Alexey Shlyonskikh, Mikhail Firsov, Stepan Manannikov, Nikita Bobrov, Daniil Goncharov, Ilia Barutkin, Vladislav Shalnev, Kirill Muraviev, Anna Rakhmukova, Dmitriy Shcheka, Anton Chernikov, Dmitrii Mandelshtam, Mikhail Vyrodov, Arthur Saliou, Eduard Gaisin, Kirill Smirnov
Desbordante: from benchmarking suite to high-performance science-intensive data profiler (preprint)
null
null
null
null
cs.DB cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Pioneering data profiling systems such as Metanome and OpenClean brought public attention to science-intensive data profiling. This type of profiling aims to extract complex patterns (primitives) such as functional dependencies, data constraints, association rules, and others. However, these tools are research prototypes rather than production-ready systems. The following work presents Desbordante - a high-performance science-intensive data profiler with open source code. Unlike similar systems, it is built with emphasis on industrial application in a multi-user environment. It is efficient, resilient to crashes, and scalable. Its efficiency is ensured by implementing discovery algorithms in C++, resilience is achieved by extensive use of containerization, and scalability is based on replication of containers. Desbordante aims to open industrial-grade primitive discovery to a broader public, focusing on domain experts who are not IT professionals. Aside from the discovery of various primitives, Desbordante offers primitive validation, which not only reports whether a given instance of primitive holds or not, but also points out what prevents it from holding via the use of special screens. Next, Desbordante supports pipelines - ready-to-use functionality implemented using the discovered primitives, for example, typo detection. We provide built-in pipelines, and the users can construct their own via provided Python bindings. Unlike other profilers, Desbordante works not only with tabular data, but with graph and transactional data as well. In this paper, we present Desbordante, the vision behind it and its use-cases. To provide a more in-depth perspective, we discuss its current state, architecture, and design decisions it is built on. Additionally, we outline our future plans.
[ { "version": "v1", "created": "Sat, 14 Jan 2023 19:14:51 GMT" } ]
2023-01-18T00:00:00
[ [ "Chernishev", "George", "" ], [ "Polyntsov", "Michael", "" ], [ "Chizhov", "Anton", "" ], [ "Stupakov", "Kirill", "" ], [ "Shchuckin", "Ilya", "" ], [ "Smirnov", "Alexander", "" ], [ "Strutovsky", "Maxim", "" ], [ "Shlyonskikh", "Alexey", "" ], [ "Firsov", "Mikhail", "" ], [ "Manannikov", "Stepan", "" ], [ "Bobrov", "Nikita", "" ], [ "Goncharov", "Daniil", "" ], [ "Barutkin", "Ilia", "" ], [ "Shalnev", "Vladislav", "" ], [ "Muraviev", "Kirill", "" ], [ "Rakhmukova", "Anna", "" ], [ "Shcheka", "Dmitriy", "" ], [ "Chernikov", "Anton", "" ], [ "Mandelshtam", "Dmitrii", "" ], [ "Vyrodov", "Mikhail", "" ], [ "Saliou", "Arthur", "" ], [ "Gaisin", "Eduard", "" ], [ "Smirnov", "Kirill", "" ] ]
new_dataset
0.980131
2301.06018
Cheng-Ze Lu
Cheng-Ze Lu, Xiaojie Jin, Zhicheng Huang, Qibin Hou, Ming-Ming Cheng, Jiashi Feng
CMAE-V: Contrastive Masked Autoencoders for Video Action Recognition
Technical Report
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Contrastive Masked Autoencoder (CMAE), as a new self-supervised framework, has shown its potential for learning expressive feature representations in visual image recognition. This work shows that CMAE also generalizes well on video action recognition without modifying the architecture or the loss criterion. By directly replacing the original pixel shift with a temporal shift, our CMAE for visual action recognition, CMAE-V for short, can generate stronger feature representations than its counterpart based on pure masked autoencoders. Notably, CMAE-V, with a hybrid architecture, can achieve 82.2% and 71.6% top-1 accuracy on the Kinetics-400 and Something-something V2 datasets, respectively. We hope this report can provide some informative inspiration for future works.
[ { "version": "v1", "created": "Sun, 15 Jan 2023 05:07:41 GMT" } ]
2023-01-18T00:00:00
[ [ "Lu", "Cheng-Ze", "" ], [ "Jin", "Xiaojie", "" ], [ "Huang", "Zhicheng", "" ], [ "Hou", "Qibin", "" ], [ "Cheng", "Ming-Ming", "" ], [ "Feng", "Jiashi", "" ] ]
new_dataset
0.997444
2301.06160
Shu Zhong
Shu Zhong, Miriam Ribul, Youngjun Cho, Marianna Obrist
TextileNet: A Material Taxonomy-based Fashion Textile Dataset
10 pages, 4 figures, 2 tables
null
null
null
cs.DL cs.AI cs.CV
http://creativecommons.org/licenses/by/4.0/
The rise of Machine Learning (ML) is gradually digitalizing and reshaping the fashion industry. Recent years have witnessed a number of fashion AI applications, for example, virtual try-ons. Textile material identification and categorization play a crucial role in the fashion textile sector, including fashion design, retail, and recycling. At the same time, Net Zero is a global goal and the fashion industry is undergoing a significant change so that textile materials can be reused, repaired and recycled in a sustainable manner. There is still a challenge in identifying textile materials automatically for garments, as we lack a low-cost and effective technique for doing so. In light of this, we build the first fashion textile dataset, TextileNet, based on textile material taxonomies - a fibre taxonomy and a fabric taxonomy generated in collaboration with material scientists. TextileNet can be used to train and evaluate state-of-the-art Deep Learning models for textile materials. We hope to standardize textile related datasets through the use of taxonomies. TextileNet contains 33 fibre labels and 27 fabric labels, and has in total 760,949 images. We use standard Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) to establish baselines for this dataset. Future applications for this dataset range from textile classification to optimization of the textile supply chain and interactive design for consumers. We envision that this can contribute to the development of a new AI-based fashion platform.
[ { "version": "v1", "created": "Sun, 15 Jan 2023 19:02:18 GMT" } ]
2023-01-18T00:00:00
[ [ "Zhong", "Shu", "" ], [ "Ribul", "Miriam", "" ], [ "Cho", "Youngjun", "" ], [ "Obrist", "Marianna", "" ] ]
new_dataset
0.999842
2301.06176
Isa Inuwa-Dutse
Bello Shehu Bello, Muhammad Abubakar Alhassan, Isa Inuwa-Dutse
#EndSARS Protest: Discourse and Mobilisation on Twitter
17 pages, 11 figures, 2 tables
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Using the @NGRPresident Twitter handle, the Government of Nigeria issued a special directive banning the Special Anti-Robbery Squad (SARS) with immediate effect. SARS is a special police unit under the Nigeria Police Force tasked with the responsibility of fighting violent crimes. However, the unit has been accused of waves of human rights abuse across the nation. According to a report by Amnesty International, between January 2017 and May 2020, 82 cases of police brutality were committed. This led to one of the major protests demanding that more measures be taken. The #EndSARS hashtag was widely used by the protesters to amplify their messages and reach wider communities on Twitter. In this study, we present a critical analysis of how the online protest unfolded. Essentially, we examine how the protest evolved on Twitter, the nature of engagement with the protest themes, the factors influencing the protest, and public perceptions of the online movement. We found that the mobilisation strategies include direct and indirect engagements with influential users, and the sharing of direct stories and vicarious experiences. Also, there is evidence that suggests the deployment of automated accounts to promote the cause of the protest. In terms of participation, over 70% of the protest is confined within a few states in Nigeria, and the diaspora communities also lent their voices to the movement. The most active users are not those with high followership, and the majority of the protesters utilised mobile devices (accounting for 88%) to mobilise and report on the protest. We also examined how social media users interact with the movement and the response from the wider online communities. Needless to say, the themes in the online discourse are mostly about #EndSARS and vicarious experiences with the police; however, there are also topics around police reform and demands for regime change.
[ { "version": "v1", "created": "Sun, 15 Jan 2023 20:11:25 GMT" } ]
2023-01-18T00:00:00
[ [ "Bello", "Bello Shehu", "" ], [ "Alhassan", "Muhammad Abubakar", "" ], [ "Inuwa-Dutse", "Isa", "" ] ]
new_dataset
0.999834
2301.06178
Anthony Rios
Xingmeng Zhao, Xavier Walton, Suhana Shrestha and Anthony Rios
Bike Frames: Understanding the Implicit Portrayal of Cyclists in the News
null
null
null
null
cs.CY cs.CL
http://creativecommons.org/licenses/by/4.0/
Increasing the number of cyclists, whether for general transport or recreation, can provide health improvements and reduce the environmental impact of vehicular transportation. However, the public's perception of cycling may be driven by the ideologies and reporting standards of news agencies. For instance, people may identify cyclists on the road as "dangerous" if news agencies overly report cycling accidents, limiting the number of people that cycle for transportation. Moreover, if fewer people cycle, there may be less funding from the government to invest in safe infrastructure. In this paper, we explore the portrayal of cyclists within news headlines. To accomplish this, we introduce a new dataset, "Bike Frames", that can help provide insight into how headlines portray cyclists and help detect accident-related headlines. Next, we introduce a multi-task (MT) regularization approach that increases the detection accuracy of accident-related posts, demonstrating improvements over traditional MT frameworks. Finally, we compare and contrast the perceptions of cyclists with motorcyclist-related headlines to ground the findings in another related activity, for both male- and female-related posts. Our findings show that general news websites are more likely to report accidents involving cyclists than other events. Moreover, cyclist-specific websites are more likely to report accidents than motorcycling-specific websites, even though there is more potential danger for motorcyclists. Finally, we show substantial differences in the reporting about male- vs. female-related persons, e.g., more male-related cyclist headlines are related to accidents, but more female-related motorcycling headlines are about accidents. WARNING: This paper contains descriptions of accidents and death.
[ { "version": "v1", "created": "Sun, 15 Jan 2023 20:22:03 GMT" } ]
2023-01-18T00:00:00
[ [ "Zhao", "Xingmeng", "" ], [ "Walton", "Xavier", "" ], [ "Shrestha", "Suhana", "" ], [ "Rios", "Anthony", "" ] ]
new_dataset
0.998824
2301.06184
Yiqin Zhao
Yiqin Zhao, Chongyang Ma, Haibin Huang, Tian Guo
LitAR: Visually Coherent Lighting for Mobile Augmented Reality
null
null
10.1145/3550291
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An accurate understanding of omnidirectional environment lighting is crucial for high-quality virtual object rendering in mobile augmented reality (AR). In particular, to support reflective rendering, existing methods have leveraged deep learning models to estimate or have used physical light probes to capture physical lighting, typically represented in the form of an environment map. However, these methods often fail to provide visually coherent details or require additional setups. For example, the commercial framework ARKit uses a convolutional neural network that can generate realistic environment maps; however the corresponding reflective rendering might not match the physical environments. In this work, we present the design and implementation of a lighting reconstruction framework called LitAR that enables realistic and visually-coherent rendering. LitAR addresses several challenges of supporting lighting information for mobile AR. First, to address the spatial variance problem, LitAR uses two-field lighting reconstruction to divide the lighting reconstruction task into the spatial variance-aware near-field reconstruction and the directional-aware far-field reconstruction. The corresponding environment map allows reflective rendering with correct color tones. Second, LitAR uses two noise-tolerant data capturing policies to ensure data quality, namely guided bootstrapped movement and motion-based automatic capturing. Third, to handle the mismatch between the mobile computation capability and the high computation requirement of lighting reconstruction, LitAR employs two novel real-time environment map rendering techniques called multi-resolution projection and anchor extrapolation. These two techniques effectively remove the need of time-consuming mesh reconstruction while maintaining visual quality.
[ { "version": "v1", "created": "Sun, 15 Jan 2023 20:47:38 GMT" } ]
2023-01-18T00:00:00
[ [ "Zhao", "Yiqin", "" ], [ "Ma", "Chongyang", "" ], [ "Huang", "Haibin", "" ], [ "Guo", "Tian", "" ] ]
new_dataset
0.998432
2301.06246
Yiding Feng
Ozan Candogan, Yiding Feng
Mobility Data in Operations: The Facility Location Problem
null
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The recent large-scale availability of mobility data, which captures individual mobility patterns, poses novel operational problems that are exciting and challenging. Motivated by this, we introduce and study a variant of the (cost-minimization) facility location problem where each individual is endowed with two locations (hereafter, her home and work locations), and the connection cost is the minimum distance between any of her locations and its closest facility. We design a polynomial-time algorithm whose approximation ratio is at most 3.103. We complement this positive result by showing that the approximation ratio of the proposed algorithm is at least 3.073, and that no polynomial-time algorithm achieves an approximation ratio of $2-\epsilon$ under UG-hardness. We further extend our results and analysis to the model where each individual is endowed with K locations. Finally, we conduct numerical experiments over both synthetic data and US census data (for NYC, greater LA, greater DC, and the Research Triangle) and evaluate the performance of our algorithms.
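On tiny instances the variant is easy to state exactly: each client contributes the distance from the nearer of her two locations to the closest open facility. The brute-force oracle below (with a uniform facility opening cost assumed for illustration) is a correctness check for small inputs, not the paper's polynomial-time approximation algorithm.

# Brute-force reference for the two-location facility location variant.
from itertools import combinations
import math

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def cost(open_facs, clients, open_cost):
    # Each client connects the nearer of (home, work) to its closest facility.
    conn = sum(min(dist(loc, f) for loc in pair for f in open_facs)
               for pair in clients)
    return open_cost * len(open_facs) + conn

def brute_force(candidates, clients, open_cost):
    best = None
    for r in range(1, len(candidates) + 1):
        for subset in combinations(candidates, r):
            c = cost(subset, clients, open_cost)
            if best is None or c < best[0]:
                best = (c, subset)
    return best

clients = [((0, 0), (4, 0)), ((1, 1), (5, 1))]   # (home, work) pairs
print(brute_force([(0, 0), (4, 0), (2, 3)], clients, open_cost=1.0))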
[ { "version": "v1", "created": "Mon, 16 Jan 2023 03:35:35 GMT" } ]
2023-01-18T00:00:00
[ [ "Candogan", "Ozan", "" ], [ "Feng", "Yiding", "" ] ]
new_dataset
0.976367
2301.06269
Bo Zhang
Bo Zhang, Yuchen Guo, Runzhao Yang, Zhihong Zhang, Jiayi Xie, Jinli Suo and Qionghai Dai
DarkVision: A Benchmark for Low-light Image/Video Perception
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Imaging and perception in photon-limited scenarios is necessary for various applications, e.g., night surveillance or photography, high-speed photography, and autonomous driving. In these cases, cameras suffer from a low signal-to-noise ratio, which degrades image quality severely and poses challenges for downstream high-level vision tasks like object detection and recognition. Data-driven methods have achieved enormous success in both image restoration and high-level vision tasks. However, the lack of a high-quality benchmark dataset with task-specific accurate annotations for photon-limited images/videos heavily delays research progress. In this paper, we contribute the first multi-illuminance, multi-camera, and low-light dataset, named DarkVision, serving both image enhancement and object detection. We provide bright and dark pairs with pixel-wise registration, in which the bright counterpart provides a reliable reference for restoration and annotation. The dataset consists of bright-dark pairs of 900 static scenes with objects from 15 categories, and 32 dynamic scenes with 4-category objects. For each scene, images/videos were captured at 5 illuminance levels using three cameras of different grades, and average photons can be reliably estimated from the calibration data for quantitative studies. The static-scene images and dynamic videos respectively contain around 7,344 and 320,667 instances in total. With DarkVision, we established baselines for image/video enhancement and object detection using representative algorithms. To demonstrate an exemplary application of DarkVision, we propose two simple yet effective approaches for improving performance in video enhancement and object detection respectively. We believe DarkVision will advance the state of the art in both imaging and related computer vision tasks in low-light environments.
[ { "version": "v1", "created": "Mon, 16 Jan 2023 05:55:59 GMT" } ]
2023-01-18T00:00:00
[ [ "Zhang", "Bo", "" ], [ "Guo", "Yuchen", "" ], [ "Yang", "Runzhao", "" ], [ "Zhang", "Zhihong", "" ], [ "Xie", "Jiayi", "" ], [ "Suo", "Jinli", "" ], [ "Dai", "Qionghai", "" ] ]
new_dataset
0.999886
2301.06316
Ehsan Ul Haq
Ehsan-Ul Haq, Haris Bin Zia, Reza Hadi Mogavi, Gareth Tyson, Yang K. Lu, Tristan Braud, Pan Hui
A Twitter Dataset for Pakistani Political Discourse
null
null
null
null
cs.SI
http://creativecommons.org/licenses/by-nc-nd/4.0/
We share the largest dataset for the Pakistani Twittersphere, consisting of over 49 million tweets collected during one of the most politically active periods in the country. We collected the data after the deposition of the government by a no-confidence vote in April 2022. This large-scale dataset can be used for several downstream tasks concerning Pakistani Twitter users, such as political bias, bot detection, trolling behavior, misinformation and disinformation, and censorship. In addition, this dataset provides a large collection of tweets in Urdu and Roman Urdu that can be used for optimizing language processing tasks.
[ { "version": "v1", "created": "Mon, 16 Jan 2023 09:11:11 GMT" } ]
2023-01-18T00:00:00
[ [ "Haq", "Ehsan-Ul", "" ], [ "Zia", "Haris Bin", "" ], [ "Mogavi", "Reza Hadi", "" ], [ "Tyson", "Gareth", "" ], [ "Lu", "Yang K.", "" ], [ "Braud", "Tristan", "" ], [ "Hui", "Pan", "" ] ]
new_dataset
0.999883
2301.06375
Kwanghee Choi
Jeongkyun Park, Jung-Wook Hwang, Kwanghee Choi, Seung-Hyun Lee, Jun Hwan Ahn, Rae-Hong Park, Hyung-Min Park
OLKAVS: An Open Large-Scale Korean Audio-Visual Speech Dataset
null
null
null
null
cs.MM cs.AI cs.CL cs.CV cs.LG cs.SD
http://creativecommons.org/licenses/by/4.0/
Inspired by humans comprehending speech in a multi-modal manner, various audio-visual datasets have been constructed. However, most existing datasets focus on English, induce dependencies with various prediction models during dataset preparation, and have only a small number of multi-view videos. To mitigate the limitations, we recently developed the Open Large-scale Korean Audio-Visual Speech (OLKAVS) dataset, which is the largest among publicly available audio-visual speech datasets. The dataset contains 1,150 hours of transcribed audio from 1,107 Korean speakers in a studio setup with nine different viewpoints and various noise situations. We also provide the pre-trained baseline models for two tasks, audio-visual speech recognition and lip reading. We conducted experiments based on the models to verify the effectiveness of multi-modal and multi-view training over uni-modal and frontal-view-only training. We expect the OLKAVS dataset to facilitate multi-modal research in broader areas such as Korean speech recognition, speaker recognition, pronunciation level classification, and mouth motion analysis.
[ { "version": "v1", "created": "Mon, 16 Jan 2023 11:40:50 GMT" } ]
2023-01-18T00:00:00
[ [ "Park", "Jeongkyun", "" ], [ "Hwang", "Jung-Wook", "" ], [ "Choi", "Kwanghee", "" ], [ "Lee", "Seung-Hyun", "" ], [ "Ahn", "Jun Hwan", "" ], [ "Park", "Rae-Hong", "" ], [ "Park", "Hyung-Min", "" ] ]
new_dataset
0.999829
2301.06400
Youmna Farag
Youmna Farag, Charlotte O. Brand, Jacopo Amidei, Paul Piwek, Tom Stafford, Svetlana Stoyanchev, Andreas Vlachos
Opening up Minds with Argumentative Dialogues
null
Findings of EMNLP 2022
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
Recent research on argumentative dialogues has focused on persuading people to take some action, changing their stance on the topic of discussion, or winning debates. In this work, we focus on argumentative dialogues that aim to open up (rather than change) people's minds to help them become more understanding of views that are unfamiliar or in opposition to their own convictions. To this end, we present a dataset of 183 argumentative dialogues about 3 controversial topics: veganism, Brexit and COVID-19 vaccination. The dialogues were collected using the Wizard of Oz approach, where wizards leverage a knowledge base of arguments to converse with participants. Open-mindedness is measured before and after engaging in the dialogue using a questionnaire from the psychology literature, and success of the dialogue is measured as the change in the participant's stance towards those who hold opinions different from theirs. We evaluate two dialogue models: a Wikipedia-based and an argument-based model. We show that while both models perform closely in terms of opening up minds, the argument-based model is significantly better on other dialogue properties such as engagement and clarity.
[ { "version": "v1", "created": "Mon, 16 Jan 2023 12:47:16 GMT" } ]
2023-01-18T00:00:00
[ [ "Farag", "Youmna", "" ], [ "Brand", "Charlotte O.", "" ], [ "Amidei", "Jacopo", "" ], [ "Piwek", "Paul", "" ], [ "Stafford", "Tom", "" ], [ "Stoyanchev", "Svetlana", "" ], [ "Vlachos", "Andreas", "" ] ]
new_dataset
0.999722
2301.06422
Novel Certad
Novel Certad, Walter Morales-Alvarez, Georg Novotny, Cristina Olaverri-Monreal
JKU-ITS Automobile for Research on Autonomous Vehicles
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present our brand-new platform for Automated Driving research. The chosen vehicle is a RAV4 hybrid SUV from TOYOTA equipped with exteroceptive sensors such as a multilayer LiDAR, a monocular camera, radar, and GPS, and proprioceptive sensors such as encoders and a 9-DOF IMU. These sensors are integrated into the vehicle via a main computer running ROS1 under Ubuntu 20.04. Additionally, we installed an open-source ADAS device, the Comma Two, which runs Openpilot to control the vehicle. The platform is currently used for research on autonomous vehicles, interaction between humans and autonomous vehicles, human factors, and energy consumption.
[ { "version": "v1", "created": "Mon, 16 Jan 2023 13:21:15 GMT" } ]
2023-01-18T00:00:00
[ [ "Certad", "Novel", "" ], [ "Morales-Alvarez", "Walter", "" ], [ "Novotny", "Georg", "" ], [ "Olaverri-Monreal", "Cristina", "" ] ]
new_dataset
0.999759
2301.06433
Animesh Singhal
Animesh Singhal, Sahil Modi, Abhishek Gupta, Leena Vachhani
Wobble control of a pendulum actuated spherical robot
20 pages, 15 figures
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Spherical robots can conduct surveillance in hostile, cluttered environments without being damaged, as their protective shell can safely house sensors such as cameras. However, lateral oscillations, also known as wobble, occur when these sphere-shaped robots operate at low speeds, leading to shaky camera feedback. These oscillations in a pendulum-actuated spherical robot are caused by the coupling between the forward and steering motions due to nonholonomic constraints. Designing a controller to limit wobbling in these robots is challenging due to their underactuated nature. We propose a model-based controller to navigate a pendulum-actuated spherical robot using wobble-free turning maneuvers consisting of circular arcs and straight lines. The model is developed using Lagrange-D'Alembert equations and accounts for the coupled forward and steering motions. The model is further analyzed to derive expressions for radius of curvature, precession rate, wobble amplitude, and wobble frequency during circular motions. Finally, we design an input-output feedback linearization-based controller to control the robot's heading direction and wobble. Overall, the proposed controller enables a teleoperator to command a specific forward velocity and pendulum angle as per the desired turning radius while limiting the robot's lateral oscillations to enhance the quality of camera feedback.
[ { "version": "v1", "created": "Mon, 16 Jan 2023 13:48:49 GMT" } ]
2023-01-18T00:00:00
[ [ "Singhal", "Animesh", "" ], [ "Modi", "Sahil", "" ], [ "Gupta", "Abhishek", "" ], [ "Vachhani", "Leena", "" ] ]
new_dataset
0.999082
2301.06446
Chengju Li
Hai Liu, Chengju Li, Cunsheng Ding
Five infinite families of binary cyclic codes and their related codes with good parameters
33 pages
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cyclic codes are an interesting type of linear codes and have wide applications in communication and storage systems due to their efficient encoding and decoding algorithms. Inspired by the recent work on binary cyclic codes published in IEEE Trans. Inf. Theory, vol. 68, no. 12, pp. 7842-7849, 2022, the objectives of this paper are the construction and analyses of five infinite families of binary cyclic codes with parameters $[n, k]$ and $(n-6)/3 \leq k \leq 2(n+6)/3$. Three of the five families of binary cyclic codes and their duals have a very good lower bound on their minimum distances and contain distance-optimal codes. The other two families of binary cyclic codes are composed of binary duadic codes with a square-root-like lower bound on their minimum distances. As a by-product, two infinite families of self-dual binary codes with a square-root-like lower bound on their minimum distances are obtained.
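To make the object concrete: a binary cyclic code of length n is the set of multiples of a generator polynomial g(x) dividing x^n + 1 over GF(2). The toy sketch below enumerates such a code and brute-forces its minimum distance, which is only feasible for tiny n; the paper establishes its distance bounds analytically, not by search.

# Toy binary cyclic code: codewords are message polynomials times g(x),
# reduced modulo x^n + 1 over GF(2).
from itertools import product

def polymul_mod(a, b, n):
    # Multiply bit vectors a, b as GF(2)[x] polynomials modulo x^n + 1.
    out = [0] * n
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if bj:
                    out[(i + j) % n] ^= 1
    return tuple(out)

def cyclic_code(g, n):
    k = n - (len(g) - 1)                 # dimension = n - deg(g)
    gen = list(g) + [0] * (n - len(g))
    return {polymul_mod(list(m) + [0] * (len(g) - 1), gen, n)
            for m in product([0, 1], repeat=k)}

code = cyclic_code([1, 1, 0, 1], 7)      # g(x) = 1 + x + x^3: the [7,4,3] Hamming code
d = min(sum(c) for c in code if any(c))
print(len(code), d)                      # 16 codewords, minimum distance 3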
[ { "version": "v1", "created": "Mon, 16 Jan 2023 14:37:52 GMT" } ]
2023-01-18T00:00:00
[ [ "Liu", "Hai", "" ], [ "Li", "Chengju", "" ], [ "Ding", "Cunsheng", "" ] ]
new_dataset
0.998428
2301.06475
Julian Linke Dipl. Ing.
Julian Linke, Saskia Wepner, Gernot Kubin and Barbara Schuppler
Using Kaldi for Automatic Speech Recognition of Conversational Austrian German
10 pages, 2 figures, 4 tables
null
null
null
cs.CL cs.SD eess.AS
http://creativecommons.org/licenses/by/4.0/
As dialogue systems are becoming more and more interactional and social, the accurate automatic speech recognition (ASR) of conversational speech is of increasing importance. This shifts the focus from short, spontaneous, task-oriented dialogues to the much higher complexity of casual face-to-face conversations. However, the collection and annotation of such conversations is a time-consuming process and data is sparse for this specific speaking style. This paper presents ASR experiments with read and conversational Austrian German as target. In order to deal with having only limited resources available for conversational German and, at the same time, with a large variation among speakers with respect to pronunciation characteristics, we improve a Kaldi-based ASR system by incorporating a (large) knowledge-based pronunciation lexicon, while exploring different data-based methods to restrict the number of pronunciation variants for each lexical entry. We achieve a best WER of 0.4% on Austrian German read speech and a best average WER of 48.5% on conversational speech. We find that using our best pronunciation lexicon achieves a performance similar to increasing the size of the data used for the language model by approx. 360% to 760%. Our findings indicate that for low-resource scenarios, despite the general trend in speech technology towards using data-based methods only, knowledge-based approaches are a successful, efficient method.
[ { "version": "v1", "created": "Mon, 16 Jan 2023 15:28:28 GMT" } ]
2023-01-18T00:00:00
[ [ "Linke", "Julian", "" ], [ "Wepner", "Saskia", "" ], [ "Kubin", "Gernot", "" ], [ "Schuppler", "Barbara", "" ] ]
new_dataset
0.954971
2301.06528
Rodrigo Ramele
Franco Paviotti, Esteban Buniak, Rodrigo Ramele, Orestes Freixes and Juan Miguel Santos
Equilivest: A Robotic Vest to aid in Post-Stroke Dynamic Balance Rehabilitation
This extended abstract was presented at the "Workshop on Assistive Robotic Systems for Human Balancing and Walking: Emerging Trends and Perspectives" at IROS2022
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Stroke is a medical condition that can affect motor function, particularly dynamic balance. Biofeedback can aid in rehabilitation procedures that help patients regain lost motor activity and recover functionality. In this work, we present a robotic smart-vest device that can analyze Inertial Measurement Unit (IMU) data and assist in rehabilitation procedures by providing timed feedback in the form of vibrotactile stimulation. Information provided by principal caregivers and patients in the form of surveys and interviews is used to hypothesize potential clinical causes and to derive three alternative clinical modalities: Artificial Vestibular Feedback, Gait Pacemaker, and Risk-Predictor.
[ { "version": "v1", "created": "Mon, 16 Jan 2023 17:25:21 GMT" } ]
2023-01-18T00:00:00
[ [ "Paviotti", "Franco", "" ], [ "Buniak", "Esteban", "" ], [ "Ramele", "Rodrigo", "" ], [ "Freixes", "Orestes", "" ], [ "Santos", "Juan Miguel", "" ] ]
new_dataset
0.999088
2301.06652
Mamtaj Akter
Leena Alghamdi, Mamtaj Akter, Jess Kropczynski, Pamela Wisniewski and Heather Lipford
Co-designing Community-based Sharing of Smarthome Devices for the Purpose of Co-monitoring In-home Emergencies
21 pages
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
We conducted 26 co-design interviews with 50 smarthome device owners to understand the perceived benefits, drawbacks, and design considerations for developing a smarthome system that facilitates co-monitoring with emergency contacts who live outside of one's home. Participants felt that such a system would help ensure their personal safety, safeguard from material loss, and give them peace of mind by ensuring quick response and verifying potential threats. However, they also expressed concerns regarding privacy, overburdening others, and other potential threats, such as unauthorized access and security breaches. To alleviate these concerns, participants designed for flexible and granular access control and fail-safe back-up features. Our study reveals why peer-based co-monitoring of smarthomes for emergencies may be beneficial but also difficult to implement. Based on the insights gained from our study, we provide recommendations for designing technologies that facilitate such co-monitoring while mitigating its risks.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 01:22:30 GMT" } ]
2023-01-18T00:00:00
[ [ "Alghamdi", "Leena", "" ], [ "Akter", "Mamtaj", "" ], [ "Kropczynski", "Jess", "" ], [ "Wisniewski", "Pamela", "" ], [ "Lipford", "Heather", "" ] ]
new_dataset
0.991466
2301.06680
Prateek Chhikara
Prateek Chhikara, Harshul Kuhar, Anil Goyal, Chirag Sharma
DIGITOUR: Automatic Digital Tours for Real-Estate Properties
Published at CODS-COMAD '23
null
10.1145/3570991.3571060
null
cs.CV cs.GR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A virtual or digital tour is a form of virtual reality technology that allows a user to experience a specific location remotely. Currently, these virtual tours are created by following a 2-step strategy. First, a photographer captures a 360 degree equirectangular image; then, a team of annotators manually links these images for the "walkthrough" user experience. The major challenge in the mass adoption of virtual tours is the time and cost involved in the manual annotation/linking of images. Therefore, this paper presents an end-to-end pipeline to automate the generation of 3D virtual tours using equirectangular images for real-estate properties. We propose a novel HSV-based coloring scheme for paper tags that need to be placed at different locations before capturing the equirectangular images using 360 degree cameras. These tags have two characteristics: i) they are numbered to help the photographer place the tags in sequence; and ii) they are bi-colored, which allows better learning of the tag detection (using the YOLOv5 architecture) and digit recognition (using a custom MobileNet architecture) tasks. Finally, we link/connect all the equirectangular images based on the detected tags. We show the efficiency of the proposed pipeline on a real-world equirectangular image dataset collected from the Housing.com database.
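A minimal sketch of HSV-based tag localization with OpenCV: threshold a hue band and keep large blobs as tag candidates. The paper's pipeline instead detects tags with YOLOv5 and reads digits with a custom MobileNet; the hue range, area threshold, and file path below are assumptions for illustration.

# HSV color thresholding as a simple stand-in for learned tag detection.
import cv2
import numpy as np

def find_tag_candidates(bgr_image, lo=(90, 80, 80), hi=(130, 255, 255)):
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(lo), np.array(hi))   # keep the tag's hue band
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

img = cv2.imread("pano.jpg")             # an equirectangular frame (path is illustrative)
if img is not None:
    print(find_tag_candidates(img))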
[ { "version": "v1", "created": "Tue, 17 Jan 2023 03:43:34 GMT" } ]
2023-01-18T00:00:00
[ [ "Chhikara", "Prateek", "" ], [ "Kuhar", "Harshul", "" ], [ "Goyal", "Anil", "" ], [ "Sharma", "Chirag", "" ] ]
new_dataset
0.984168
2301.06702
Casey Clifton
Ike Smith, Casey Clifton
Shackled: a 3D Rendering Engine Programmed Entirely in Ethereum Smart Contracts
Published in proceedings of 5th International Conference on Blockchain, Honolulu, HI, USA, December 10-14, 2022, Proceedings: https://link.springer.com/chapter/10.1007/978-3-031-23495-8_9
Chen, S., Shyamasundar, R. K., & Zhang, L. J. (Eds.). Blockchain-ICBC 2022: 5th International Conference, Held as part of the Services Conference Federation, SCF 2022, Honolulu, HI, USA, December 10-14, 2022, Proceedings
10.1007/978-3-031-23495-8_9
null
cs.CR cs.GR
http://creativecommons.org/licenses/by/4.0/
The Ethereum blockchain permits the development and deployment of smart contracts which can store and execute code 'on-chain' - that is, entirely on nodes in the blockchain's network. Smart contracts have traditionally been used for financial purposes, but since smart contracts are Turing-complete, their algorithmic scope is broader than any single domain. To that end, we design, develop, and deploy a comprehensive 3D rendering engine programmed entirely in Ethereum smart contracts, called Shackled. Shackled computes a 2D image from a 3D scene, executing every single computation on-chain, on Ethereum. To our knowledge, Shackled is the first and only fully on-chain 3D rendering engine for Ethereum. In this work, we 1) provide three unique datasets for the purpose of using and benchmarking Shackled, 2) execute said benchmarks and provide results, 3) demonstrate a potential use case of Shackled in the domain of tokenised generative art, 4) provide a no-code user interface to Shackled, 5) enumerate the challenges associated with programming complex algorithms in Solidity smart contracts, and 6) outline potential directions for improving the Shackled platform. It is our hope that this work increases the Ethereum blockchain's native graphics processing capabilities, and that it enables increased use of smart contracts for more complex algorithms, thus increasing the overall richness of the Ethereum ecosystem.
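For intuition about the core computation such a renderer performs, the sketch below projects a camera-space vertex onto the image plane with a pinhole model, in plain Python. Shackled itself implements the full transform-rasterize-shade pipeline in Solidity, necessarily with integer arithmetic since the EVM has no floating point; this floating-point sketch is conceptual only, and all intrinsics are made-up values.

# Pinhole perspective projection: the heart of any rasterizing renderer.
def project(vertex, focal=500.0, cx=320.0, cy=240.0):
    x, y, z = vertex
    if z <= 0:
        return None                      # behind the camera; a real engine clips
    return (focal * x / z + cx, focal * y / z + cy)

print(project((0.5, -0.2, 2.0)))         # -> (445.0, 190.0)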
[ { "version": "v1", "created": "Tue, 17 Jan 2023 05:00:13 GMT" } ]
2023-01-18T00:00:00
[ [ "Smith", "Ike", "" ], [ "Clifton", "Casey", "" ] ]
new_dataset
0.999603
2301.06715
Dongseok Shim
Dongseok Shim, H. Jin Kim
SwinDepth: Unsupervised Depth Estimation using Monocular Sequences via Swin Transformer and Densely Cascaded Network
ICRA 2023
null
null
null
cs.CV cs.LG cs.RO
http://creativecommons.org/licenses/by-sa/4.0/
Monocular depth estimation plays a critical role in various computer vision and robotics applications such as localization, mapping, and 3D object detection. Recently, learning-based algorithms have achieved great success in depth estimation by training models with large amounts of data in a supervised manner. However, it is challenging to acquire dense ground-truth depth labels for supervised training, and unsupervised depth estimation using monocular sequences emerges as a promising alternative. Unfortunately, most studies on unsupervised depth estimation explore loss functions or occlusion masks, while the model architecture has changed little: the ConvNet-based encoder-decoder structure remains the de facto standard for depth estimation. In this paper, we employ a convolution-free Swin Transformer as the image feature extractor so that the network can capture both local geometric features and global semantic features for depth estimation. Also, we propose a Densely Cascaded Multi-scale Network (DCMNet) that connects every feature map directly with another from different scales via a top-down cascade pathway. This densely cascaded connectivity reinforces the interconnection between decoding layers and produces high-quality multi-scale depth outputs. Experiments on two different datasets, KITTI and Make3D, demonstrate that our proposed method outperforms existing state-of-the-art unsupervised algorithms.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 06:01:46 GMT" } ]
2023-01-18T00:00:00
[ [ "Shim", "Dongseok", "" ], [ "Kim", "H. Jin", "" ] ]
new_dataset
0.996419
2301.06721
Hai Lin
Hai Lin and Jinhong Yuan
On Delay-Doppler Plane Orthogonal Pulse
This paper was presented at the IEEE GLOBECOM 2022
null
10.1109/GLOBECOM48099.2022.10001406
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
In this paper, we analyze the recently discovered delay-Doppler plane orthogonal pulse (DDOP), which is essential for delay-Doppler plane multi-carrier modulation waveforms. In particular, we introduce a local orthogonality property of pulses corresponding to a Weyl-Heisenberg (WH) subset and justify the DDOP's existence, in contrast to the global orthogonality corresponding to a WH set governed by WH frame theory. Then, sufficient conditions for locally-orthogonal pulses are presented and discussed. Based on the analysis, we propose a general DDOP design. We also derive the frequency domain representation of the DDOP, and compare the DDOP-based orthogonal delay-Doppler division multiplexing (ODDM) modulation with other modulation schemes in terms of TF signal localization. Interestingly, we show the perfect local orthogonality property of the DDOP with respect to delay-Doppler resolutions using its ambiguity function.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 06:43:10 GMT" } ]
2023-01-18T00:00:00
[ [ "Lin", "Hai", "" ], [ "Yuan", "Jinhong", "" ] ]
new_dataset
0.999563
2301.06727
Giovanni Finocchio
Giovanni Finocchio, Supriyo Bandyopadhyay, Peng Lin, Gang Pan, J. Joshua Yang, Riccardo Tomasello, Christos Panagopoulos, Mario Carpentieri, Vito Puliafito, Johan {\AA}kerman, Hiroki Takesue, Amit Ranjan Trivedi, Saibal Mukhopadhyay, Kaushik Roy, Vinod K. Sangwan, Mark C. Hersam, Anna Giordano, Huynsoo Yang, Julie Grollier, Kerem Camsari, Peter Mcmahon, Supriyo Datta, Jean Anne Incorvia, Joseph Friedman, Sorin Cotofana, Florin Ciubotaru, Andrii Chumak, Azad J. Naeemi, Brajesh Kumar Kaushik, Yao Zhu, Kang Wang, Belita Koiller, Gabriel Aguilar, Guilherme Tempor\~ao, Kremena Makasheva, Aida Tordi-Sanial, Jennifer Hasler, William Levy, Vwani Roychowdhury, Samiran Ganguly, Avik Ghosh, Davi Rodriquez, Satoshi Sunada, Karin Evershor-Sitte, Amit Lal, Shubham Jadhav, Massimiliano Di Ventra, Yuriy Pershin, Kosuke Tatsumura, Hayato Goto
Roadmap for Unconventional Computing with Nanotechnology
88 pages currently under peer review with Nano
null
null
null
cs.ET physics.app-ph
http://creativecommons.org/licenses/by-nc-sa/4.0/
In the beyond-Moore's-law era, with increasing edge intelligence, domain-specific computing embracing unconventional approaches will become increasingly prevalent. At the same time, the adoption of a wide variety of nanotechnologies will offer benefits in energy cost, computational speed, reduced footprint, cyber-resilience and processing prowess. The time is ripe to lay out a roadmap for unconventional computing with nanotechnologies to guide future research, and this collection aims to fulfill that need. The authors provide a comprehensive roadmap for neuromorphic computing with electron spins, memristive devices, two-dimensional nanomaterials, nanomagnets and assorted dynamical systems. They also address other paradigms such as Ising machines, Bayesian inference engines, probabilistic computing with p-bits, processing in memory, quantum memories and algorithms, computing with skyrmions and spin waves, and brain-inspired computing for incremental learning and solving problems in severely resource-constrained environments. All of these approaches have advantages over conventional Boolean computing predicated on the von Neumann architecture. With the computational need for artificial intelligence growing at a rate 50x faster than Moore's law for electronics, more unconventional approaches to computing and signal processing will appear on the horizon, and this roadmap will aid in identifying future needs and challenges.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 07:00:28 GMT" } ]
2023-01-18T00:00:00
[ [ "Finocchio", "Giovanni", "" ], [ "Bandyopadhyay", "Supriyo", "" ], [ "Lin", "Peng", "" ], [ "Pan", "Gang", "" ], [ "Yang", "J. Joshua", "" ], [ "Tomasello", "Riccardo", "" ], [ "Panagopoulos", "Christos", "" ], [ "Carpentieri", "Mario", "" ], [ "Puliafito", "Vito", "" ], [ "Åkerman", "Johan", "" ], [ "Takesue", "Hiroki", "" ], [ "Trivedi", "Amit Ranjan", "" ], [ "Mukhopadhyay", "Saibal", "" ], [ "Roy", "Kaushik", "" ], [ "Sangwan", "Vinod K.", "" ], [ "Hersam", "Mark C.", "" ], [ "Giordano", "Anna", "" ], [ "Yang", "Huynsoo", "" ], [ "Grollier", "Julie", "" ], [ "Camsari", "Kerem", "" ], [ "Mcmahon", "Peter", "" ], [ "Datta", "Supriyo", "" ], [ "Incorvia", "Jean Anne", "" ], [ "Friedman", "Joseph", "" ], [ "Cotofana", "Sorin", "" ], [ "Ciubotaru", "Florin", "" ], [ "Chumak", "Andrii", "" ], [ "Naeemi", "Azad J.", "" ], [ "Kaushik", "Brajesh Kumar", "" ], [ "Zhu", "Yao", "" ], [ "Wang", "Kang", "" ], [ "Koiller", "Belita", "" ], [ "Aguilar", "Gabriel", "" ], [ "Temporão", "Guilherme", "" ], [ "Makasheva", "Kremena", "" ], [ "Sanial", "Aida Tordi-", "" ], [ "Hasler", "Jennifer", "" ], [ "Levy", "William", "" ], [ "Roychowdhury", "Vwani", "" ], [ "Ganguly", "Samiran", "" ], [ "Ghosh", "Avik", "" ], [ "Rodriquez", "Davi", "" ], [ "Sunada", "Satoshi", "" ], [ "Evershor-Sitte", "Karin", "" ], [ "Lal", "Amit", "" ], [ "Jadhav", "Shubham", "" ], [ "Di Ventra", "Massimiliano", "" ], [ "Pershin", "Yuriy", "" ], [ "Tatsumura", "Kosuke", "" ], [ "Goto", "Hayato", "" ] ]
new_dataset
0.967926
2301.06736
Kavya Manohar
Kavya Manohar, A. R. Jayan, Rajeev Rajan
Syllable Subword Tokens for Open Vocabulary Speech Recognition in Malayalam
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In a hybrid automatic speech recognition (ASR) system, a pronunciation lexicon (PL) and a language model (LM) are essential to correctly retrieve spoken word sequences. As Malayalam is a morphologically complex language, its vocabulary is so large that it is impossible to build a PL and an LM that cover all of its diverse word forms. Using subword tokens to build the PL and LM, and combining them to form words after decoding, enables the recovery of many out-of-vocabulary words. In this work we investigate the impact of using syllables as subword tokens instead of words in Malayalam ASR, and evaluate the relative improvement in lexicon size, model memory requirement and word error rate.
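The step of recombining subword tokens into words after decoding can be illustrated with a common marker convention; the paper's actual marker scheme is not stated in the abstract, so the "+" suffix below is an assumption.

# Join decoded subword tokens back into words, assuming non-final
# subwords carry a "+" continuation marker (illustrative convention).
def join_subwords(tokens, marker="+"):
    words, current = [], ""
    for tok in tokens:
        if tok.endswith(marker):
            current += tok[:-len(marker)]
        else:
            words.append(current + tok)
            current = ""
    return " ".join(words)

print(join_subwords(["ma+", "la+", "ya+", "lam", "a+", "sr"]))  # -> "malayalam asr"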
[ { "version": "v1", "created": "Tue, 17 Jan 2023 07:29:47 GMT" } ]
2023-01-18T00:00:00
[ [ "Manohar", "Kavya", "" ], [ "Jayan", "A. R.", "" ], [ "Rajan", "Rajeev", "" ] ]
new_dataset
0.956764
2301.06741
Tianyi Zhou
Zhao Song, Tianyi Zhou
Faster Sinkhorn's Algorithm with Small Treewidth
null
null
null
null
cs.DS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Computing optimal transport (OT) distances such as the earth mover's distance is a fundamental problem in machine learning, statistics, and computer vision. In this paper, we study the problem of approximating the general OT distance between two discrete distributions of size $n$. Given the cost matrix $C=AA^\top$ where $A \in \mathbb{R}^{n \times d}$, we propose a faster Sinkhorn algorithm to approximate the OT distance when the matrix $A$ has treewidth $\tau$. To approximate the OT distance, our algorithm improves the state-of-the-art results [Dvurechensky, Gasnikov, and Kroshnin ICML 2018] from $\widetilde{O}(\epsilon^{-2} n^2)$ time to $\widetilde{O}(\epsilon^{-2} n \tau)$ time.
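For reference, the textbook Sinkhorn iteration that this line of work accelerates looks as follows; the dense kernel matrix-vector products dominate the baseline's $\widetilde{O}(\epsilon^{-2} n^2)$ running time. The random cost matrix below is illustrative only and does not have the paper's $C = AA^\top$ low-treewidth structure.

# Entropy-regularized OT via Sinkhorn scaling (textbook baseline).
import numpy as np

def sinkhorn(C, r, c, eps=0.1, iters=500):
    # Alternately rescale rows and columns of the Gibbs kernel until the
    # transport plan's marginals match r and c.
    K = np.exp(-C / eps)
    u = np.ones_like(r)
    for _ in range(iters):
        u = r / (K @ (c / (K.T @ u)))
    v = c / (K.T @ u)
    P = u[:, None] * K * v[None, :]      # approximate transport plan
    return float((P * C).sum())          # approximate OT cost

n = 5
rng = np.random.default_rng(0)
pts = rng.normal(size=(n, 2))
C = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)   # squared distances
print(sinkhorn(C, np.full(n, 1 / n), np.full(n, 1 / n)))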
[ { "version": "v1", "created": "Tue, 17 Jan 2023 07:55:15 GMT" } ]
2023-01-18T00:00:00
[ [ "Song", "Zhao", "" ], [ "Zhou", "Tianyi", "" ] ]
new_dataset
0.995921
2301.06762
Pragma Kar
Pragma Kar, Shyamvanshikumar Singh, Avijit Mandal, Samiran Chattopadhyay, Sandip Chakraborty
ExpresSense: Exploring a Standalone Smartphone to Sense Engagement of Users from Facial Expressions Using Acoustic Sensing
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Facial expressions have been considered a metric reflecting a person's engagement with a task. While the evolution of expression detection methods is consequential, their foundation remains mostly image processing techniques that suffer from occlusion, ambient light, and privacy concerns. In this paper, we propose ExpresSense, a lightweight application for standalone smartphones that relies on near-ultrasound acoustic signals for detecting users' facial expressions. ExpresSense has been tested on different users in lab-scaled and large-scale studies for both posed and natural expressions. Achieving a classification accuracy of ~75% over various basic expressions, we demonstrate the potential of a standalone smartphone to sense expressions through acoustic sensing.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 08:55:59 GMT" } ]
2023-01-18T00:00:00
[ [ "Kar", "Pragma", "" ], [ "Singh", "Shyamvanshikumar", "" ], [ "Mandal", "Avijit", "" ], [ "Chattopadhyay", "Samiran", "" ], [ "Chakraborty", "Sandip", "" ] ]
new_dataset
0.995531
2301.06782
Chongshan Lu
Chongshan Lu, Fukun Yin, Xin Chen, Tao Chen, Gang YU, Jiayuan Fan
A Large-Scale Outdoor Multi-modal Dataset and Benchmark for Novel View Synthesis and Implicit Scene Reconstruction
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Neural Radiance Fields (NeRF) have achieved impressive results in single object scene reconstruction and novel view synthesis, which have been demonstrated on many single-modality and single-object-focused indoor scene datasets like DTU, BMVS, and NeRF Synthetic. However, the study of NeRF on large-scale outdoor scene reconstruction is still limited, as there is no unified outdoor scene dataset for large-scale NeRF evaluation due to expensive data acquisition and calibration costs. In this paper, we propose a large-scale outdoor multi-modal dataset, the OMMO dataset, containing complex land objects and scenes with calibrated images, point clouds and prompt annotations. Meanwhile, a new benchmark for several outdoor NeRF-based tasks is established, such as novel view synthesis, surface reconstruction, and multi-modal NeRF. To create the dataset, we capture and collect a large number of real fly-view videos and select high-quality and high-resolution clips from them. Then we design a quality review module to refine images, remove low-quality frames and fail-to-calibrate scenes through a learning-based automatic evaluation plus manual review. Finally, a number of volunteers are employed to add text descriptions for each scene and key frame to meet potential multi-modal requirements in the future. Compared with existing NeRF datasets, our dataset contains abundant real-world urban and natural scenes with various scales, camera trajectories, and lighting conditions. Experiments show that our dataset can benchmark most state-of-the-art NeRF methods on different tasks. We will release the dataset and model weights very soon.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 10:15:32 GMT" } ]
2023-01-18T00:00:00
[ [ "Lu", "Chongshan", "" ], [ "Yin", "Fukun", "" ], [ "Chen", "Xin", "" ], [ "Chen", "Tao", "" ], [ "YU", "Gang", "" ], [ "Fan", "Jiayuan", "" ] ]
new_dataset
0.999792
2301.06790
Michel Pl\"uss
Michel Pl\"uss, Yanick Schraner, Christian Scheller, Manfred Vogel
2nd Swiss German Speech to Standard German Text Shared Task at SwissText 2022
3 pages, 0 figures, to appear in proceedings of SwissText 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We present the results and findings of the 2nd Swiss German speech to Standard German text shared task at SwissText 2022. Participants were asked to build a sentence-level Swiss German speech to Standard German text system specialized on the Grisons dialect. The objective was to maximize the BLEU score on a test set of Grisons speech. 3 teams participated, with the best-performing system achieving a BLEU score of 70.1.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 10:31:11 GMT" } ]
2023-01-18T00:00:00
[ [ "Plüss", "Michel", "" ], [ "Schraner", "Yanick", "" ], [ "Scheller", "Christian", "" ], [ "Vogel", "Manfred", "" ] ]
new_dataset
0.994
2301.06826
Guanqun Cao
Guanqun Cao, Jiaqi Jiang, Ningtao Mao, Danushka Bollegala, Min Li, and Shan Luo
Vis2Hap: Vision-based Haptic Rendering by Cross-modal Generation
This paper is accepted at ICRA 2023
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To assist robots in teleoperation tasks, haptic rendering, which allows human operators to access a virtual sense of touch, has been developed in recent years. Most previous haptic rendering methods strongly rely on data collected by tactile sensors. However, tactile data is not widely available for robots due to their limited reachable space and the restrictions of tactile sensors. To eliminate the need for tactile data, in this paper we propose a novel method named Vis2Hap to generate haptic rendering from visual inputs that can be obtained from a distance without physical interaction. We take the surface texture of objects as key cues to be conveyed to the human operator. To this end, a generative model is designed to simulate the roughness and slipperiness of the object's surface. To embed haptic cues in Vis2Hap, we use height maps from tactile sensors and spectrograms from friction coefficients as the intermediate outputs of the generative model. Once Vis2Hap is trained, it can be used to generate height maps and spectrograms of new surface textures, from which a friction image can be obtained and displayed on a haptic display. Our user study demonstrates that the proposed Vis2Hap method enables users to access a realistic haptic feeling similar to that of physical objects. The proposed vision-based haptic rendering has the potential to enhance human operators' perception of the remote environment and facilitate robotic manipulation.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 12:07:40 GMT" } ]
2023-01-18T00:00:00
[ [ "Cao", "Guanqun", "" ], [ "Jiang", "Jiaqi", "" ], [ "Mao", "Ningtao", "" ], [ "Bollegala", "Danushka", "" ], [ "Li", "Min", "" ], [ "Luo", "Shan", "" ] ]
new_dataset
0.998466
2301.06868
Jarkko Kari
Jarkko Kari
Expansivity and periodicity in algebraic subshifts
DLT 2022 special issue
null
null
null
cs.DM math.CO math.DS
http://creativecommons.org/licenses/by/4.0/
A d-dimensional configuration c : Z^d -> A is a coloring of the d-dimensional infinite grid by elements of a finite alphabet A \subseteq Z. The configuration c has an annihilator if a non-trivial linear combination of finitely many translations of c is the zero configuration. Writing c as a d-variate formal power series, the annihilator is conveniently expressed as a d-variate Laurent polynomial f whose formal product with c is the zero power series. More generally, if the formal product is a strongly periodic configuration, we call the polynomial f a periodizer of c. A common annihilator (periodizer) of a set of configurations is called an annihilator (periodizer, respectively) of the set. In particular, we consider annihilators and periodizers of d-dimensional subshifts, that is, sets of configurations defined by disallowing some local patterns. We show that a (d-1)-dimensional linear subspace S \subseteq R^d is expansive for a subshift if the subshift has a periodizer whose support contains exactly one element of S. As a subshift is known to be finite if all (d-1)-dimensional subspaces are expansive, we obtain a simple necessary condition on the periodizers that guarantees finiteness of a subshift or, equivalently, strong periodicity of a configuration. We provide examples in terms of tilings of Z^d by translations of a single tile.
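The annihilator/periodizer notion can be checked numerically on small strongly periodic configurations: the formal product f * c is a weighted sum of translates of c, computed exactly with wrap-around shifts on one period. The sketch below fixes one translation convention (via negative rolls), which may differ in sign from the paper's.

# Apply a Laurent polynomial f to a strongly periodic configuration c.
import numpy as np

def apply_polynomial(coeffs, c):
    # coeffs: {(i, j): a} for the monomials a * x^i y^j of f.
    return sum(a * np.roll(c, shift=(-i, -j), axis=(0, 1))
               for (i, j), a in coeffs.items())

c = np.array([[1, -1], [-1, 1]])          # a 2-periodic checkerboard over {-1, 1}
f = {(0, 0): 1, (1, 0): 1}                # f(x, y) = 1 + x
print(apply_polynomial(f, c))             # all zeros: 1 + x annihilates c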
[ { "version": "v1", "created": "Tue, 17 Jan 2023 13:22:44 GMT" } ]
2023-01-18T00:00:00
[ [ "Kari", "Jarkko", "" ] ]
new_dataset
0.998546
2301.06876
Junjie Xu H.
Junjie H. Xu and Yu Nakano and Lingrong Kong and Kojiro Iizuka
CS-lol: a Dataset of Viewer Comment with Scene in E-sports Live-streaming
5 pages, 3 figures, In ACM SIGIR Conference on Human Information Interaction and Retrieval (CHIIR 23)
null
10.1145/3576840.3578334
null
cs.MM cs.LG
http://creativecommons.org/licenses/by/4.0/
Billions of live-streaming viewers share their opinions on the scenes they are watching in real time and interact with the event, commentators, and other viewers via text comments. Thus, it is necessary to explore viewers' comments in relation to scenes in e-sports live-streaming events. In this paper, we developed CS-lol, a new large-scale dataset containing comments from viewers paired with descriptions of game scenes in e-sports live-streaming. Moreover, we propose a task, namely viewer comment retrieval, to retrieve the viewer comments relevant to a given scene of a live-streaming event. Results for a series of baseline retrieval methods, evaluated with typical IR measures, show our task to be a challenging one. Finally, we release CS-lol and the baseline implementation to the research community as a resource.
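A typical IR baseline for the proposed viewer comment retrieval task can be sketched with BM25, indexing the comment pool and using a scene description as the query. The abstract does not name the exact baselines used, and the texts below are invented for illustration; the sketch uses the rank_bm25 package.

# BM25 ranking of viewer comments against a scene description.
from rank_bm25 import BM25Okapi

comments = ["nice baron steal", "gg ez", "that dragon fight was insane"]
tokenized = [c.split() for c in comments]
bm25 = BM25Okapi(tokenized)

scene = "blue team secures the dragon after a team fight"
scores = bm25.get_scores(scene.split())
ranked = sorted(zip(scores, comments), reverse=True)
print(ranked[0])                         # best-matching comment for the scene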
[ { "version": "v1", "created": "Tue, 17 Jan 2023 13:34:06 GMT" } ]
2023-01-18T00:00:00
[ [ "Xu", "Junjie H.", "" ], [ "Nakano", "Yu", "" ], [ "Kong", "Lingrong", "" ], [ "Iizuka", "Kojiro", "" ] ]
new_dataset
0.999874
2301.06944
Xi Xu
Yu Gao, Xi Xu, Tianji Jiang, Siyuan Chen, Yi Yang, Yufeng Yue, Mengyin Fu
DR-WLC: Dimensionality Reduction cognition for object detection and pose estimation by Watching, Learning and Checking
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object detection and pose estimation are difficult tasks in robotics and autonomous driving. Existing object detection and pose estimation methods mostly adopt same-dimensional data for training. For example, 2D object detection usually requires a large amount of 2D annotation data, which has a high cost. Using high-dimensional information to supervise lower-dimensional tasks is a feasible way to reduce dataset size. In this work, we propose DR-WLC, a dimensionality reduction cognitive model that can perform both object detection and pose estimation tasks at the same time. The model only requires 3D models of objects and unlabeled environment images (with or without objects) to complete the training. In addition, a bounding-box generation strategy is also proposed to build the relationship between the 3D model and the 2D object detection task. Experiments show that our method performs these tasks without any manual annotations and is easy to deploy for practical applications. Source code is at https://github.com/IN2-ViAUn/DR-WLC.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 15:08:32 GMT" } ]
2023-01-18T00:00:00
[ [ "Gao", "Yu", "" ], [ "Xu", "Xi", "" ], [ "Jiang", "Tianji", "" ], [ "Chen", "Siyuan", "" ], [ "Yang", "Yi", "" ], [ "Yue", "Yufeng", "" ], [ "Fu", "Mengyin", "" ] ]
new_dataset
0.996684
2301.06959
Sofia Reis M.D.
Sofia Reis, Corina Pasareanu, Rui Abreu, Hakan Erdogmus
SECOMlint: A linter for Security Commit Messages
null
null
null
null
cs.CR cs.SE
http://creativecommons.org/licenses/by/4.0/
Transparent and efficient vulnerability and patch disclosure are still a challenge in the security community, essentially because of the poor-quality documentation stemming from the lack of standards. SECOM is a recently-proposed standard convention for security commit messages that enables the writing of well-structured and complete commit messages for security patches. The convention prescribes different bits of security-related information essential for a better understanding of vulnerabilities by humans and tools. SECOMlint is an automated and configurable solution that helps security and maintenance teams check compliance with the SECOM standard when submitting patches for security vulnerabilities in their source version control systems. The tool leverages the natural language processing technique Named-Entity Recognition (NER) to extract security-related information from commit messages and uses it to check compliance with the standard. We demonstrate SECOMlint at https://youtu.be/-1hzpMN_uFI; documentation and source code are at https://tqrg.github.io/secomlint/.
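SECOMlint's extraction is NER-based; as a dependency-free stand-in, the sketch below flags commit messages missing the security identifiers a SECOM-style message might be expected to carry. The field names and patterns are illustrative assumptions, not the tool's actual rules.

# Minimal compliance check: flag SECOM-style fields absent from a message.
import re

REQUIRED = {
    "vuln-id": re.compile(r"CVE-\d{4}-\d+"),    # illustrative field set
    "weakness": re.compile(r"CWE-\d+"),
}

def lint(message):
    return [field for field, pat in REQUIRED.items() if not pat.search(message)]

msg = "fix: sanitize user input to patch CVE-2021-44228 (CWE-502)"
print(lint(msg) or "compliant")          # -> 'compliant'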
[ { "version": "v1", "created": "Tue, 17 Jan 2023 15:33:38 GMT" } ]
2023-01-18T00:00:00
[ [ "Reis", "Sofia", "" ], [ "Pasareanu", "Corina", "" ], [ "Abreu", "Rui", "" ], [ "Erdogmus", "Hakan", "" ] ]
new_dataset
0.999694
2301.06985
Carlos Pineda
Josu\'e Ely Molina, Jorge Flores, Carlos Gershenson and Carlos Pineda
Statistical analysis of word flow among five Indo-European languages
13 pages
null
null
null
cs.CL physics.soc-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A recent increase in data availability has made it possible to perform various statistical linguistic studies. Here we use the Google Books Ngram dataset to analyze word flow among English, French, German, Italian, and Spanish. We study what we define as ``migrant words'', a type of loanwords that do not change their spelling. We quantify migrant words from one language to another for different decades, and notice that most migrant words can be aggregated into semantic fields and associated with historic events. We also study the statistical properties of accumulated migrant words and their rank dynamics. We propose a measure of the use of migrant words that could serve as a proxy of cultural influence. Our methodology is not free of caveats, but our results are encouraging enough to promote further studies in this direction.
[ { "version": "v1", "created": "Tue, 17 Jan 2023 16:12:42 GMT" } ]
2023-01-18T00:00:00
[ [ "Molina", "Josué Ely", "" ], [ "Flores", "Jorge", "" ], [ "Gershenson", "Carlos", "" ], [ "Pineda", "Carlos", "" ] ]
new_dataset
0.950634
2301.06987
Bassel El Mabsout
Bassel El Mabsout, Shahin Roozkhosh, Siddharth Mysore, Kate Saenko, Renato Mancuso
The SwaNNFlight System: On-the-Fly Sim-to-Real Adaptation via Anchored Learning
null
null
null
null
cs.RO cs.LG
http://creativecommons.org/licenses/by/4.0/
Reinforcement Learning (RL) agents trained in simulated environments and then deployed in the real world are often sensitive to the differences in dynamics presented, commonly termed the sim-to-real gap. With the goal of minimizing this gap on resource-constrained embedded systems, we train and live-adapt agents on quadrotors built from off-the-shelf hardware. In achieving this we developed three novel contributions. (i) SwaNNFlight, an open-source firmware enabling wireless data capture and transfer of agents' observations, fine-tuning of agents with new data, and receiving and swapping onboard NN controllers, all while in flight. We also design the SwaNNFlight System (SwaNNFS), allowing new research in training and live-adapting learning agents on similar systems. (ii) Multiplicative value composition, a technique for preserving the importance of each policy optimization criterion, improving training performance and reducing variability in learnt behavior. And (iii) anchor critics to help stabilize the fine-tuning of agents during sim-to-real transfer, learning online from real data while retaining behavior optimized in simulation. We train consistently flight-worthy control policies in simulation and deploy them on real quadrotors. We then achieve live controller adaptation via over-the-air updates of the onboard control policy from a ground station. Our results indicate that live adaptation unlocks a near-50\% reduction in power consumption, attributed to closing the sim-to-real gap. Finally, we tackle the issues of catastrophic forgetting and controller instability, showing the effectiveness of our novel methods. Project Website: https://github.com/BU-Cyber-Physical-Systems-Lab/SwaNNFS
[ { "version": "v1", "created": "Tue, 17 Jan 2023 16:16:53 GMT" } ]
2023-01-18T00:00:00
[ [ "Mabsout", "Bassel El", "" ], [ "Roozkhosh", "Shahin", "" ], [ "Mysore", "Siddharth", "" ], [ "Saenko", "Kate", "" ], [ "Mancuso", "Renato", "" ] ]
new_dataset
0.992466
2301.06993
Lakmal Meegahapola
Emma Bouton--Bessac, Lakmal Meegahapola, Daniel Gatica-Perez
Your Day in Your Pocket: Complex Activity Recognition from Smartphone Accelerometers
16th EAI International Conference on Pervasive Computing Technologies for Healthcare (PervasiveHealth) 2022
null
null
null
cs.HC cs.AI cs.MM
http://creativecommons.org/licenses/by/4.0/
Human Activity Recognition (HAR) enables context-aware user experiences where mobile apps can alter content and interactions depending on user activities. Hence, smartphones have become valuable for HAR as they allow large-scale and diversified data collection. Although previous work in HAR managed to detect simple activities (i.e., sitting, walking, running) with good accuracy using inertial sensors (i.e., accelerometer), the recognition of complex daily activities remains an open problem, especially in remote work/study settings when people are more sedentary. Moreover, understanding the everyday activities of a person can support the creation of applications that aim to support their well-being. This paper investigates the recognition of complex activities exclusively using smartphone accelerometer data. We used a large smartphone sensing dataset collected from over 600 users in five countries during the pandemic and showed that deep learning-based, binary classification of eight complex activities (sleeping, eating, watching videos, online communication, attending a lecture, sports, shopping, studying) can be achieved with AUROC scores up to 0.76 with partially personalized models. This shows encouraging signs toward assessing complex activities only using phone accelerometer data in the post-pandemic world.
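The abstract does not specify the network architecture used; a minimal sketch of binary complex-activity classification from raw tri-axial accelerometer windows with a small 1D CNN, with all layer sizes chosen for illustration:

# One logit per window: "target activity" vs. everything else.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(3, 32, kernel_size=5, padding=2),   # 3 channels: x, y, z axes
    nn.ReLU(),
    nn.MaxPool1d(2),
    nn.Conv1d(32, 64, kernel_size=5, padding=2),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(64, 1),
)

windows = torch.randn(8, 3, 256)                   # batch of 256-sample windows
probs = torch.sigmoid(model(windows)).squeeze(1)
print(probs.shape)                                  # torch.Size([8])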
[ { "version": "v1", "created": "Tue, 17 Jan 2023 16:22:30 GMT" } ]
2023-01-18T00:00:00
[ [ "Bouton--Bessac", "Emma", "" ], [ "Meegahapola", "Lakmal", "" ], [ "Gatica-Perez", "Daniel", "" ] ]
new_dataset
0.986618
1806.00749
Yang Yang
Yang Yang, Lei Zheng, Jiawei Zhang, Qingcai Cui, Zhoujun Li, Philip S. Yu
TI-CNN: Convolutional Neural Networks for Fake News Detection
null
null
null
null
cs.CL cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the development of social networks, fake news for various commercial and political purposes has been appearing in large numbers and spreading widely in the online world. With their deceptive wording, fake news can mislead people very easily, and people may share it without any fact-checking. For instance, during the 2016 US presidential election, various kinds of fake news about the candidates spread widely through both official news media and online social networks. Such fake news is usually released either to smear the opponents or to support the candidate on the publisher's side. The erroneous information in fake news is usually written to appeal to voters' irrational emotions and enthusiasm. Fake news of this kind can sometimes have devastating effects, and an important goal in improving the credibility of online social networks is to identify fake news in a timely manner. In this paper, we propose to study the fake news detection problem. Automatic fake news identification is extremely hard, since purely model-based fact-checking for news is still an open problem, and few existing models can be applied to solve the problem. Through a thorough investigation of a fake news dataset, many useful explicit features are identified from both the text and the images used in fake news. Besides the explicit features, there also exist hidden patterns in the words and images used in fake news, which can be captured by a set of latent features extracted via the multiple convolutional layers in our model. A model named TI-CNN (Text and Image information based Convolutional Neural Network) is proposed in this paper. By projecting the explicit and latent features into a unified feature space, TI-CNN is trained with both the text and image information simultaneously. Extensive experiments carried out on real-world fake news datasets demonstrate the effectiveness of TI-CNN.
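A sketch of the TI-CNN idea as described: encode text and image with separate convolutional branches, project both into a unified feature space, and classify the concatenation. Layer sizes below are illustrative, not the paper's exact architecture.

# Two-branch text+image classifier fused in a shared feature space.
import torch
import torch.nn as nn

class TwoBranchFakeNewsNet(nn.Module):
    def __init__(self, vocab=20000, emb=100, feat=128):
        super().__init__()
        self.embed = nn.Embedding(vocab, emb)
        self.text_conv = nn.Sequential(
            nn.Conv1d(emb, feat, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1), nn.Flatten())
        self.img_conv = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(16, feat, 3, padding=1), nn.ReLU(),
            nn.AdaptiveMaxPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(2 * feat, 2)    # real vs. fake logits

    def forward(self, tokens, image):
        t = self.text_conv(self.embed(tokens).transpose(1, 2))
        v = self.img_conv(image)
        return self.classifier(torch.cat([t, v], dim=1))

net = TwoBranchFakeNewsNet()
logits = net(torch.randint(0, 20000, (2, 200)), torch.randn(2, 3, 64, 64))
print(logits.shape)                                 # torch.Size([2, 2])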
[ { "version": "v1", "created": "Sun, 3 Jun 2018 08:09:58 GMT" }, { "version": "v2", "created": "Tue, 26 Jul 2022 06:57:16 GMT" }, { "version": "v3", "created": "Fri, 13 Jan 2023 02:42:37 GMT" } ]
2023-01-16T00:00:00
[ [ "Yang", "Yang", "" ], [ "Zheng", "Lei", "" ], [ "Zhang", "Jiawei", "" ], [ "Cui", "Qingcai", "" ], [ "Li", "Zhoujun", "" ], [ "Yu", "Philip S.", "" ] ]
new_dataset
0.998359
2008.07292
Kees Middelburg
C. A. Middelburg
A classical-logic view of a paraconsistent logic
17 pages, error in the distinguishing laws of logical equivalence corrected
null
null
null
cs.LO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper is concerned with the first-order paraconsistent logic LPQ$^{\supset,\mathsf{F}}$. A sequent-style natural deduction proof system for this logic is presented and, for this proof system, both a model-theoretic justification and a logical justification by means of an embedding into first-order classical logic are given. No natural deduction proof system is currently available in the literature for a logic that is essentially the same as LPQ$^{\supset,\mathsf{F}}$. The given embedding provides both a classical-logic explanation of this logic and a logical justification of its proof system. The major properties of LPQ$^{\supset,\mathsf{F}}$ are also treated.
[ { "version": "v1", "created": "Mon, 17 Aug 2020 13:17:25 GMT" }, { "version": "v2", "created": "Fri, 8 Jan 2021 13:31:18 GMT" }, { "version": "v3", "created": "Wed, 6 Jul 2022 15:10:20 GMT" }, { "version": "v4", "created": "Sat, 27 Aug 2022 11:46:24 GMT" }, { "version": "v5", "created": "Sun, 23 Oct 2022 09:05:13 GMT" }, { "version": "v6", "created": "Thu, 12 Jan 2023 21:14:59 GMT" } ]
2023-01-16T00:00:00
[ [ "Middelburg", "C. A.", "" ] ]
new_dataset
0.994306
2205.15979
Johannes Betz Dr.
Johannes Betz, Tobias Betz, Felix Fent, Maximilian Geisslinger, Alexander Heilmeier, Leonhard Hermansdorfer, Thomas Herrmann, Sebastian Huch, Phillip Karle, Markus Lienkamp, Boris Lohmann, Felix Nobis, Levent \"Ogretmen, Matthias Rowold, Florian Sauerbeck, Tim Stahl, Rainer Trauth, Frederik Werner, Alexander Wischnewski
TUM autonomous motorsport: An autonomous racing software for the Indy Autonomous Challenge
37 pages, 18 figures, 2 tables
Journal of Field Robotics, 2023, 1-27
10.1002/rob.22153
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
For decades, motorsport has been an incubator for innovations in the automotive sector and brought forth systems like disk brakes or rearview mirrors. Autonomous racing series such as Roborace, F1Tenth, or the Indy Autonomous Challenge (IAC) are envisioned as playing a similar role within the autonomous vehicle sector, serving as a proving ground for new technology at the limits of the autonomous systems' capabilities. This paper outlines the software stack and approach of the TUM Autonomous Motorsport team for their participation in the Indy Autonomous Challenge, which holds two competitions: a single-vehicle competition on the Indianapolis Motor Speedway and a passing competition at the Las Vegas Motor Speedway. Nine university teams used an identical vehicle platform: a modified Indy Lights chassis equipped with sensors, a computing platform, and actuators. All the teams developed different algorithms for object detection, localization, planning, prediction, and control of the race cars. The team from TUM placed first in Indianapolis and secured second place in Las Vegas. During the final of the passing competition, the TUM team reached speeds and accelerations close to the limit of the vehicle, peaking at around 270 km/h and 28 m/s^2. This paper will present details of the vehicle hardware platform, the developed algorithms, and the workflow to test and enhance the software applied during the two-year project. We derive deep insights into the autonomous vehicle's behavior at high speed and high acceleration by providing a detailed competition analysis. Based on this, we deduce a list of lessons learned and provide insights on promising areas of future work based on the real-world evaluation of the displayed concepts.
[ { "version": "v1", "created": "Tue, 31 May 2022 17:35:52 GMT" }, { "version": "v2", "created": "Fri, 13 Jan 2023 08:33:31 GMT" } ]
2023-01-16T00:00:00
[ [ "Betz", "Johannes", "" ], [ "Betz", "Tobias", "" ], [ "Fent", "Felix", "" ], [ "Geisslinger", "Maximilian", "" ], [ "Heilmeier", "Alexander", "" ], [ "Hermansdorfer", "Leonhard", "" ], [ "Herrmann", "Thomas", "" ], [ "Huch", "Sebastian", "" ], [ "Karle", "Phillip", "" ], [ "Lienkamp", "Markus", "" ], [ "Lohmann", "Boris", "" ], [ "Nobis", "Felix", "" ], [ "Ögretmen", "Levent", "" ], [ "Rowold", "Matthias", "" ], [ "Sauerbeck", "Florian", "" ], [ "Stahl", "Tim", "" ], [ "Trauth", "Rainer", "" ], [ "Werner", "Frederik", "" ], [ "Wischnewski", "Alexander", "" ] ]
new_dataset
0.999594
2207.08794
Weicai Ye
Weicai Ye, Xingyuan Yu, Xinyue Lan, Yuhang Ming, Jinyu Li, Hujun Bao, Zhaopeng Cui and Guofeng Zhang
DeFlowSLAM: Self-Supervised Scene Motion Decomposition for Dynamic Dense SLAM
Homepage: https://zju3dv.github.io/deflowslam
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present a novel dual-flow representation of scene motion that decomposes the optical flow into a static flow field caused by the camera motion and another dynamic flow field caused by the objects' movements in the scene. Based on this representation, we present a dynamic SLAM, dubbed DeFlowSLAM, that exploits both static and dynamic pixels in the images to solve the camera poses, rather than simply using static background pixels as other dynamic SLAM systems do. We propose a dynamic update module to train our DeFlowSLAM in a self-supervised manner, where a dense bundle adjustment layer takes in estimated static flow fields and the weights controlled by the dynamic mask and outputs the residual of the optimized static flow fields, camera poses, and inverse depths. The static and dynamic flow fields are estimated by warping the current image to the neighboring images, and the optical flow can be obtained by summing the two fields. Extensive experiments demonstrate that DeFlowSLAM generalizes well to both static and dynamic scenes as it exhibits comparable performance to the state-of-the-art DROID-SLAM in static and less dynamic scenes while significantly outperforming DROID-SLAM in highly dynamic environments. The code and pre-trained model will be available on the project webpage: https://zju3dv.github.io/deflowslam/.
[ { "version": "v1", "created": "Mon, 18 Jul 2022 17:47:39 GMT" }, { "version": "v2", "created": "Fri, 13 Jan 2023 15:08:01 GMT" } ]
2023-01-16T00:00:00
[ [ "Ye", "Weicai", "" ], [ "Yu", "Xingyuan", "" ], [ "Lan", "Xinyue", "" ], [ "Ming", "Yuhang", "" ], [ "Li", "Jinyu", "" ], [ "Bao", "Hujun", "" ], [ "Cui", "Zhaopeng", "" ], [ "Zhang", "Guofeng", "" ] ]
new_dataset
0.995545
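The dual-flow decomposition can be made concrete with a few lines of projective geometry: given per-pixel depth and the relative camera pose, the camera-induced (static) flow follows from back-projection, rigid transformation, and re-projection, and the observed flow is this static field plus the object-induced (dynamic) field. The sketch below is a schematic rendering of that idea under a pinhole model, not the DeFlowSLAM code; K, R, and t denote the intrinsics and the relative rotation/translation.

```python
import numpy as np

def static_flow(depth, K, R, t):
    """Static flow induced purely by camera motion: back-project each pixel
    with its depth, transform by the relative pose (R, t), re-project, and
    subtract the original pixel coordinates. Schematic sketch only."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    pts = np.linalg.inv(K) @ pix * depth.reshape(1, -1)   # 3D points, 3 x N
    proj = K @ (R @ pts + t.reshape(3, 1))                # re-projection
    uv2 = proj[:2] / proj[2:]
    return (uv2 - pix[:2]).T.reshape(h, w, 2)

# The decomposition in the abstract then reads:
# flow_total = static_flow(depth, K, R, t) + flow_dynamic
```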
2208.07363
Nolan Wagener
Nolan Wagener, Andrey Kolobov, Felipe Vieira Frujeri, Ricky Loynd, Ching-An Cheng, Matthew Hausknecht
MoCapAct: A Multi-Task Dataset for Simulated Humanoid Control
Appearing in NeurIPS 2022 Datasets and Benchmarks Track
null
null
null
cs.RO cs.GR cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
Simulated humanoids are an appealing research domain due to their physical capabilities. Nonetheless, they are also challenging to control, as a policy must drive an unstable, discontinuous, and high-dimensional physical system. One widely studied approach is to utilize motion capture (MoCap) data to teach the humanoid agent low-level skills (e.g., standing, walking, and running) that can then be re-used to synthesize high-level behaviors. However, even with MoCap data, controlling simulated humanoids remains very hard, as MoCap data offers only kinematic information. Finding physical control inputs to realize the demonstrated motions requires computationally intensive methods like reinforcement learning. Thus, despite the publicly available MoCap data, its utility has been limited to institutions with large-scale compute. In this work, we dramatically lower the barrier for productive research on this topic by training and releasing high-quality agents that can track over three hours of MoCap data for a simulated humanoid in the dm_control physics-based environment. We release MoCapAct (Motion Capture with Actions), a dataset of these expert agents and their rollouts, which contain proprioceptive observations and actions. We demonstrate the utility of MoCapAct by using it to train a single hierarchical policy capable of tracking the entire MoCap dataset within dm_control and show the learned low-level component can be re-used to efficiently learn downstream high-level tasks. Finally, we use MoCapAct to train an autoregressive GPT model and show that it can control a simulated humanoid to perform natural motion completion given a motion prompt. Videos of the results and links to the code and dataset are available at https://microsoft.github.io/MoCapAct.
[ { "version": "v1", "created": "Mon, 15 Aug 2022 17:57:33 GMT" }, { "version": "v2", "created": "Thu, 13 Oct 2022 15:14:56 GMT" }, { "version": "v3", "created": "Fri, 13 Jan 2023 14:42:44 GMT" } ]
2023-01-16T00:00:00
[ [ "Wagener", "Nolan", "" ], [ "Kolobov", "Andrey", "" ], [ "Frujeri", "Felipe Vieira", "" ], [ "Loynd", "Ricky", "" ], [ "Cheng", "Ching-An", "" ], [ "Hausknecht", "Matthew", "" ] ]
new_dataset
0.999771
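To illustrate how a dataset of proprioceptive observations and expert actions like MoCapAct can be consumed, the sketch below runs plain behavioural cloning on synthetic stand-in arrays. The array names, shapes, and training recipe are hypothetical placeholders; the actual MoCapAct file layout and loading code are documented at the project page linked above.

```python
import numpy as np

# Hypothetical stand-ins for (observation, action) pairs from expert rollouts;
# the real dataset is loaded via the tooling on the MoCapAct project page.
rng = np.random.default_rng(0)
obs = rng.normal(size=(10_000, 64))   # proprioceptive observations
act = rng.normal(size=(10_000, 8))    # expert actions

W = np.zeros((64, 8))
for _ in range(200):                  # least-squares behavioural cloning
    grad = obs.T @ (obs @ W - act) / len(obs)
    W -= 0.1 * grad
print("BC mean-squared error:", np.mean((obs @ W - act) ** 2))
```

The paper's hierarchical and GPT-based policies are far richer than this linear probe; the sketch only shows the basic supervised-learning loop the dataset enables.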
2210.04975
Vijja Wichitwechkarn
Vijja Wichitwechkarn and Charles Fox
MACARONS: A Modular and Open-Sourced Automation System for Vertical Farming
null
null
10.5334/joh.53
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
The Modular Automated Crop Array Online System (MACARONS) is an extensible, scalable, open-hardware system for plant transport in automated horticulture systems such as vertical farms. It is specified to move trays of plants of up to 1060 mm x 630 mm and 12.5 kg at a rate of 100 mm/s along the guide rails and 41.7 mm/s up the lifts, for example between stations for monitoring and actuating plants. The construction cost of one grow unit of MACARONS is 144.96 USD, which equates to 128.85 USD/m^2 of grow area. The designs are released and meet the requirements of CERN-OSH-W; they include step-by-step graphical build instructions, and the system can be built by a typical technical person in one day at a cost of 1535.50 USD. Integrated tests included in the build instructions are used to validate the build against the specifications, and we report on a successful build. Through a simple analysis, we demonstrate that MACARONS can operate at a rate sufficient to automate tray loading/unloading and thus reduce labour costs in a vertical farm.
[ { "version": "v1", "created": "Mon, 10 Oct 2022 19:18:36 GMT" }, { "version": "v2", "created": "Tue, 29 Nov 2022 15:28:13 GMT" } ]
2023-01-16T00:00:00
[ [ "Wichitwechkarn", "Vijja", "" ], [ "Fox", "Charles", "" ] ]
new_dataset
0.997961
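The stated transport speeds make the throughput analysis easy to reproduce. The sketch below computes a per-tray transport time from the 100 mm/s rail speed and 41.7 mm/s lift speed given in the abstract; the travel distances are assumed for illustration and are not taken from the paper.

```python
# Back-of-the-envelope tray transport time from the abstract's speeds.
RAIL_SPEED = 100.0   # mm/s along the guide rails
LIFT_SPEED = 41.7    # mm/s up the lifts

def transport_time(rail_mm, lift_mm):
    """Time to move one tray over the given rail and lift distances,
    ignoring acceleration and handover delays."""
    return rail_mm / RAIL_SPEED + lift_mm / LIFT_SPEED

# e.g. an assumed 3 m rail run plus one assumed 0.5 m lift step:
print(f"{transport_time(3000, 500):.1f} s per tray")  # ~42.0 s
```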
2211.04993
Mauro Martini
Andrea Eirale, Mauro Martini, Marcello Chiaberge
RL-DWA Omnidirectional Motion Planning for Person Following in Domestic Assistance and Monitoring
null
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
Robot assistants are emerging as high-tech solutions to support people in everyday life. Following and assisting the user in the domestic environment requires flexible mobility to move safely in cluttered spaces. We introduce a new approach to person following for assistance and monitoring. Our methodology exploits an omnidirectional robotic platform to decouple the computation of linear and angular velocities and navigate within the domestic environment without losing track of the assisted person. While linear velocities are managed by a conventional Dynamic Window Approach (DWA) local planner, we trained a Deep Reinforcement Learning (DRL) agent to predict optimized angular velocity commands and maintain the orientation of the robot towards the user. We evaluate our navigation system on a real omnidirectional platform in various indoor scenarios, demonstrating the competitive advantage of our solution over a standard differential-steering following approach.
[ { "version": "v1", "created": "Wed, 9 Nov 2022 16:11:41 GMT" }, { "version": "v2", "created": "Fri, 13 Jan 2023 14:12:08 GMT" } ]
2023-01-16T00:00:00
[ [ "Eirale", "Andrea", "" ], [ "Martini", "Mauro", "" ], [ "Chiaberge", "Marcello", "" ] ]
new_dataset
0.997652
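The core idea, decoupling translation from rotation on an omnidirectional base, can be sketched in a few lines: the (vx, vy) command comes from a DWA-style local planner, while omega is produced independently to keep the person in view. In the sketch below a simple proportional law on the person's bearing stands in for the trained DRL agent; the gain and limits are illustrative, not values from the paper.

```python
import numpy as np

def decoupled_command(dwa_linear_vel, person_bearing, k=1.5, w_max=1.0):
    """Schematic decoupling for an omnidirectional base: linear velocity
    (vx, vy) comes from a conventional local planner such as DWA, while the
    angular velocity keeps the robot facing the person. A proportional law
    on the person's bearing (rad) stands in here for the trained DRL agent."""
    w = np.clip(k * person_bearing, -w_max, w_max)
    return (*dwa_linear_vel, w)  # (vx, vy, omega)

print(decoupled_command((0.4, 0.1), person_bearing=0.3))  # -> (0.4, 0.1, 0.45)
```

This decoupling is exactly what a differential-drive base cannot do, since its heading and travel direction are tied together, which is the comparison baseline in the abstract.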
2212.12117
Alexander Barg
Alexander Barg, Moshe Schwartz, and Lev Yohananov
Storage codes on coset graphs with asymptotically unit rate
14 pages. In v2 we expanded the introduction to account for some new references of which we were previously not aware
null
null
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A storage code on a graph $G$ is a set of assignments of symbols to the vertices such that every vertex can recover its value by looking at its neighbors. We consider the question of constructing large-size storage codes on triangle-free graphs constructed as coset graphs of binary linear codes. Previously it was shown that there are infinite families of binary storage codes on coset graphs with rate converging to 3/4. Here we show that codes on such graphs can attain rate asymptotically approaching 1. Equivalently, this question can be phrased as a version of hat-guessing games on graphs (e.g., P.J. Cameron et al., \emph{Electronic J. Comb.}, 2016). In this language, we construct triangle-free graphs with success probability of the players approaching one as the number of vertices tends to infinity. Equivalently again, there exist linear index codes on such graphs of rate approaching zero. Another family of storage codes on triangle-free graphs of rate approaching 1 was constructed earlier by A. Golovnev and I. Haviv (36th Computational Complexity Conf., 2021), relying on a different family of graphs.
[ { "version": "v1", "created": "Fri, 23 Dec 2022 03:01:10 GMT" }, { "version": "v2", "created": "Fri, 13 Jan 2023 03:25:05 GMT" } ]
2023-01-16T00:00:00
[ [ "Barg", "Alexander", "" ], [ "Schwartz", "Moshe", "" ], [ "Yohananov", "Lev", "" ] ]
new_dataset
0.99987
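The storage-code property is easy to check computationally on small graphs. The toy below tests, for XOR recovery functions, which binary assignments let every vertex recover its value from its neighbours; on the complete graph K_n this recovers the even-weight code of rate (n-1)/n, the easy but triangle-rich route to rate approaching 1, whereas the paper achieves such rates on triangle-free coset graphs. The graph choice and recovery rule here are purely illustrative, not the paper's construction.

```python
from itertools import product

# Bit assignment x is a codeword (for XOR recovery) iff every vertex equals
# the XOR of its neighbours, i.e. x lies in the kernel of A + I over GF(2).
n = 4
adj = {v: [u for u in range(n) if u != v] for v in range(n)}  # complete graph K_4

def is_codeword(x):
    return all(x[v] == sum(x[u] for u in adj[v]) % 2 for v in adj)

codewords = [x for x in product([0, 1], repeat=n) if is_codeword(x)]
print(len(codewords), "codewords; rate =", (len(codewords).bit_length() - 1) / n)
# -> 8 codewords; rate = 0.75  (the even-weight code on 4 vertices)
```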