Dataset schema (column: type, observed length/value range):

id: string, length 9 to 10
submitter: string, length 2 to 52
authors: string, length 4 to 6.51k
title: string, length 4 to 246
comments: string, length 1 to 523
journal-ref: string, length 4 to 345
doi: string, length 11 to 120
report-no: string, length 2 to 243
categories: string, length 5 to 98
license: string, 9 distinct values
abstract: string, length 33 to 3.33k
versions: list
update_date: timestamp[s]
authors_parsed: list
prediction: string, 1 distinct value
probability: float64, range 0.95 to 1
2207.14140
Pravin Game
Jerin Paul Selvan, Pravin S. Game
Playing a 2D Game Indefinitely using NEAT and Reinforcement Learning
5 pages, 7 figures, 3 tables
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
For over a decade now, robotics and the use of artificial agents have become commonplace. Testing the performance of new path-finding or search-space optimization algorithms has also become a challenge, as they require a simulation or an environment in which to test them. Creating artificial environments with artificial agents is one method employed to test such algorithms, and games have also become such test environments. The performance of the algorithms can be compared using artificial agents that behave according to the algorithm in the environment they are placed in. One performance parameter is how quickly the agent is able to differentiate between rewarding actions and hostile actions. This can be tested by placing the agent in an environment with different types of hurdles, where the goal of the agent is to reach the farthest point by deciding on actions that avoid all the obstacles. The environment chosen is the game "Flappy Bird". The goal of the game is to make the bird fly through a set of pipes of random heights; the bird must pass between these pipes and must not hit the top, the bottom, or the pipes themselves. The actions the bird can take are either to flap its wings or to drop with gravity. The algorithms enforced on the artificial agents are NeuroEvolution of Augmenting Topologies (NEAT) and reinforcement learning. The NEAT algorithm takes an initial population of "N" artificial agents, which follow a genetic algorithm with an objective function, crossover, mutation, and augmenting topologies. Reinforcement learning, on the other hand, remembers the state, the action taken at that state, and the reward received for that action, using a single agent and a Deep Q-learning Network. The performance of the NEAT algorithm improves as the initial population of artificial agents is increased.
[ { "version": "v1", "created": "Thu, 28 Jul 2022 15:01:26 GMT" } ]
2022-07-29T00:00:00
[ [ "Selvan", "Jerin Paul", "" ], [ "Game", "Pravin S.", "" ] ]
new_dataset
0.969128
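A minimal sketch of the value update the abstract above attributes to its reinforcement-learning agent: a replay buffer of (state, action, reward, next state) transitions and a discounted Q-update. The two-feature state encoding, the reward scheme, and the tabular stand-in for the paper's Deep Q-learning Network are illustrative assumptions, not the paper's implementation.

```python
import random
from collections import deque

import numpy as np

# Tabular stand-in for a DQN: states are assumed to be small discrete tuples,
# e.g. (vertical offset to next gap, horizontal distance to next pipe).
ACTIONS = [0, 1]            # 0 = do nothing (fall), 1 = flap (assumed coding)
GAMMA, ALPHA = 0.99, 0.1    # discount factor, learning rate

q_table = {}                # state -> array of per-action values
replay = deque(maxlen=10_000)

def q_values(state):
    return q_table.setdefault(state, np.zeros(len(ACTIONS)))

def remember(state, action, reward, next_state, done):
    replay.append((state, action, reward, next_state, done))

def train_step(batch_size=32):
    if len(replay) < batch_size:
        return
    for s, a, r, s2, done in random.sample(replay, batch_size):
        # Bellman target: reward plus discounted best next-state value
        target = r if done else r + GAMMA * q_values(s2).max()
        q_values(s)[a] += ALPHA * (target - q_values(s)[a])
```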
2207.14166
Guijie Zhu
Guijie Zhu, Zhun Fan, Jiacheng Liu, Duan Yuan, Peili Ma, Meihua Wang, Weihua Sheng, Kelvin C. P. Wang
RHA-Net: An Encoder-Decoder Network with Residual Blocks and Hybrid Attention Mechanisms for Pavement Crack Segmentation
null
null
null
null
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The acquisition and evaluation of pavement surface data play an essential role in pavement condition evaluation. In this paper, an efficient and effective end-to-end network for automatic pavement crack segmentation, called RHA-Net, is proposed to improve pavement crack segmentation accuracy. The RHA-Net is built by integrating residual blocks (ResBlocks) and hybrid attention blocks into an encoder-decoder architecture. The ResBlocks are used to improve the ability of RHA-Net to extract high-level abstract features. The hybrid attention blocks are designed to fuse both low-level and high-level features to help the model focus on the correct channels and areas of cracks, thereby improving the feature representation ability of RHA-Net. An image dataset containing 789 pavement crack images collected by a self-designed mobile robot is constructed and used for training and evaluating the proposed model. Compared with other state-of-the-art networks, the proposed model achieves better performance, and the benefits of adding residual blocks and hybrid attention mechanisms are validated in a comprehensive ablation study. Additionally, a lightweight version of the model, generated by introducing depthwise separable convolution, achieves better performance and a much faster processing speed with 1/30 of the parameters of U-Net. The developed system can segment pavement cracks in real time on an embedded device, the Jetson TX2 (25 FPS). A video of the real-time experiments is released at https://youtu.be/3XIogk0fiG4.
[ { "version": "v1", "created": "Thu, 28 Jul 2022 15:26:01 GMT" } ]
2022-07-29T00:00:00
[ [ "Zhu", "Guijie", "" ], [ "Fan", "Zhun", "" ], [ "Liu", "Jiacheng", "" ], [ "Yuan", "Duan", "" ], [ "Ma", "Peili", "" ], [ "Wang", "Meihua", "" ], [ "Sheng", "Weihua", "" ], [ "Wang", "Kelvin C. P.", "" ] ]
new_dataset
0.995648
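A generic residual block of the kind the RHA-Net abstract above says is integrated into the encoder-decoder. The layer order and channel widths are assumptions of a standard design, not the paper's exact ResBlock; this PyTorch sketch only illustrates the identity-skip idea.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Standard residual block: output = input + conv branch (assumed design)."""
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))  # identity skip connection

x = torch.randn(1, 64, 128, 128)
print(ResBlock(64)(x).shape)  # torch.Size([1, 64, 128, 128])
```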
2207.14205
Chayan Sarkar
Pradip Pramanick, Chayan Sarkar, Sayan Paul, Ruddra dev Roychoudhury, Brojeshwar Bhowmick
DoRO: Disambiguation of referred object for embodied agents
Accepted in IEEE Robotics & Automation Letters (RA-L)
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Robotic task instructions often involve a referred object that the robot must locate (ground) within the environment. While task-intent understanding is an essential part of natural language understanding, less effort has been made to resolve the ambiguity that may arise while grounding the task. Existing works use vision-based task grounding and ambiguity detection, suitable for a fixed view and a static robot. However, the problem is magnified for a mobile robot, where the ideal view is not known beforehand. Moreover, a single view may not be sufficient to locate all the object instances in the given area, which leads to inaccurate ambiguity detection. Human intervention is helpful only if the robot can convey the kind of ambiguity it is facing. In this article, we present DoRO (Disambiguation of Referred Object), a system that can help an embodied agent disambiguate the referred object by raising a suitable query whenever required. Given an area where the intended object is, DoRO finds all the instances of the object by aggregating observations from multiple views while exploring and scanning the area. It then raises a suitable query using the information from the grounded object instances. Experiments conducted with the AI2Thor simulator show that DoRO not only detects ambiguity more accurately but also raises verbose queries with more accurate information from the visual-language grounding.
[ { "version": "v1", "created": "Thu, 28 Jul 2022 16:21:19 GMT" } ]
2022-07-29T00:00:00
[ [ "Pramanick", "Pradip", "" ], [ "Sarkar", "Chayan", "" ], [ "Paul", "Sayan", "" ], [ "Roychoudhury", "Ruddra dev", "" ], [ "Bhowmick", "Brojeshwar", "" ] ]
new_dataset
0.999739
1907.00239
Dmitriy Zhuk
Dmitriy Zhuk and Barnaby Martin
QCSP monsters and the demise of the Chen Conjecture
Lemma 17 was retracted and the boundary between co-NP-complete and PSpace-complete has shifted
null
null
null
cs.CC cs.LO math.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a surprising classification for the computational complexity of the Quantified Constraint Satisfaction Problem over a constraint language $\Gamma$, QCSP$(\Gamma)$, where $\Gamma$ is a finite language over $3$ elements which contains all constants. In particular, such problems are either in P, NP-complete, co-NP-complete or PSpace-complete. Our classification refutes the hitherto widely-believed Chen Conjecture. Additionally, we show that already on a 4-element domain there exists a constraint language $\Gamma$ such that QCSP$(\Gamma)$ is DP-complete (from Boolean Hierarchy), and on a 10-element domain there exists a constraint language giving the complexity class $\Theta_{2}^{P}$. Meanwhile, we prove the Chen Conjecture for finite conservative languages $\Gamma$. If the polymorphism clone of $\Gamma$ has the polynomially generated powers (PGP) property then QCSP$(\Gamma)$ is in NP. Otherwise, the polymorphism clone of $\Gamma$ has the exponentially generated powers (EGP) property and QCSP$(\Gamma)$ is PSpace-complete.
[ { "version": "v1", "created": "Sat, 29 Jun 2019 17:13:13 GMT" }, { "version": "v2", "created": "Sun, 4 Aug 2019 20:41:38 GMT" }, { "version": "v3", "created": "Sat, 2 Nov 2019 07:35:11 GMT" }, { "version": "v4", "created": "Wed, 27 Jul 2022 07:53:00 GMT" } ]
2022-07-28T00:00:00
[ [ "Zhuk", "Dmitriy", "" ], [ "Martin", "Barnaby", "" ] ]
new_dataset
0.960291
2107.02317
Luis Carlos Garcia-Peraza-Herrera
Caspar Gruijthuijsen, Luis C. Garcia-Peraza-Herrera, Gianni Borghesan, Dominiek Reynaerts, Jan Deprest, Sebastien Ourselin, Tom Vercauteren, Emmanuel Vander Poorten
Robotic Endoscope Control via Autonomous Instrument Tracking
Caspar Gruijthuijsen and Luis C. Garcia-Peraza-Herrera have contributed equally to this work and share first authorship
null
10.3389/frobt.2022.832208
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Many keyhole interventions rely on bi-manual handling of surgical instruments, forcing the main surgeon to rely on a second surgeon to act as a camera assistant. In addition to the burden of excessively involving surgical staff, this may lead to reduced image stability, increased task completion time and sometimes errors due to the monotony of the task. Robotic endoscope holders, controlled by a set of basic instructions, have been proposed as an alternative, but their unnatural handling may increase the cognitive load of the (solo) surgeon, which hinders their clinical acceptance. More seamless integration in the surgical workflow would be achieved if robotic endoscope holders collaborated with the operating surgeon via semantically rich instructions that closely resemble instructions that would otherwise be issued to a human camera assistant, such as "focus on my right-hand instrument". As a proof of concept, this paper presents a novel system that paves the way towards a synergistic interaction between surgeons and robotic endoscope holders. The proposed platform allows the surgeon to perform a bimanual coordination and navigation task, while a robotic arm autonomously performs the endoscope positioning tasks. Within our system, we propose a novel tooltip localization method based on surgical tool segmentation and a novel visual servoing approach that ensures smooth and appropriate motion of the endoscope camera. We validate our vision pipeline and run a user study of this system. The clinical relevance of the study is ensured through the use of a laparoscopic exercise validated by the European Academy of Gynaecological Surgery which involves bi-manual coordination and navigation. Successful application of our proposed system provides a promising starting point towards broader clinical adoption of robotic endoscope holders.
[ { "version": "v1", "created": "Mon, 5 Jul 2021 23:24:46 GMT" }, { "version": "v2", "created": "Mon, 13 Dec 2021 20:28:35 GMT" }, { "version": "v3", "created": "Wed, 23 Feb 2022 16:46:46 GMT" } ]
2022-07-28T00:00:00
[ [ "Gruijthuijsen", "Caspar", "" ], [ "Garcia-Peraza-Herrera", "Luis C.", "" ], [ "Borghesan", "Gianni", "" ], [ "Reynaerts", "Dominiek", "" ], [ "Deprest", "Jan", "" ], [ "Ourselin", "Sebastien", "" ], [ "Vercauteren", "Tom", "" ], [ "Poorten", "Emmanuel Vander", "" ] ]
new_dataset
0.99462
2111.03393
Emilio Garcia-Fidalgo
Emilio Garcia-Fidalgo, Joan P. Company-Corcoles, Francisco Bonnin-Pascual, Alberto Ortiz
LiODOM: Adaptive Local Mapping for Robust LiDAR-Only Odometry
In press
Robotics and Autonomous Systems, 2022
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent decades, Light Detection And Ranging (LiDAR) technology has been extensively explored as a robust alternative for self-localization and mapping. These approaches typically formulate ego-motion estimation as a non-linear optimization problem dependent on the correspondences established between the current point cloud and a map, whatever its scope, local or global. This paper proposes LiODOM, a novel LiDAR-only ODOmetry and Mapping approach for pose estimation and map building, based on minimizing a loss function derived from a set of weighted point-to-line correspondences with a local map abstracted from the set of available point clouds. Furthermore, this work places particular emphasis on map representation, given its relevance for quick data association. To represent the environment efficiently, we propose a data structure that, combined with a hashing scheme, allows for fast access to any section of the map. LiODOM is validated by means of a set of experiments on public datasets, on which it compares favourably against other solutions. Its performance on board an aerial platform is also reported.
[ { "version": "v1", "created": "Fri, 5 Nov 2021 11:07:44 GMT" }, { "version": "v2", "created": "Wed, 27 Jul 2022 12:12:16 GMT" } ]
2022-07-28T00:00:00
[ [ "Garcia-Fidalgo", "Emilio", "" ], [ "Company-Corcoles", "Joan P.", "" ], [ "Bonnin-Pascual", "Francisco", "" ], [ "Ortiz", "Alberto", "" ] ]
new_dataset
0.966579
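The LiODOM abstract above credits a hashing scheme with fast access to any section of the map. A toy version of that idea follows: quantize 3D points into cells and hash the cell key. The cell size and the plain Python dict are assumptions; the paper's actual data structure is not specified here.

```python
from collections import defaultdict

import numpy as np

CELL = 0.5  # cell edge length in meters (illustrative choice)

def cell_key(p):
    """Quantize a 3D point to its integer cell coordinates."""
    return tuple(np.floor(p / CELL).astype(int))

class HashedMap:
    def __init__(self):
        self.cells = defaultdict(list)  # cell key -> points in that cell

    def insert(self, points):
        for p in points:
            self.cells[cell_key(p)].append(p)

    def neighborhood(self, p, radius=1):
        """Points in the (2*radius+1)^3 cells around p, for data association."""
        cx, cy, cz = cell_key(p)
        out = []
        for dx in range(-radius, radius + 1):
            for dy in range(-radius, radius + 1):
                for dz in range(-radius, radius + 1):
                    out.extend(self.cells.get((cx + dx, cy + dy, cz + dz), []))
        return out

m = HashedMap()
m.insert(np.random.rand(1000, 3) * 10.0)
print(len(m.neighborhood(np.array([5.0, 5.0, 5.0]))))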
2111.12085
Zhengyuan Yang
Zhengyuan Yang, Zhe Gan, Jianfeng Wang, Xiaowei Hu, Faisal Ahmed, Zicheng Liu, Yumao Lu, Lijuan Wang
UniTAB: Unifying Text and Box Outputs for Grounded Vision-Language Modeling
ECCV 2022 (Oral Presentation)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose UniTAB, which Unifies Text And Box outputs for grounded vision-language (VL) modeling. Grounded VL tasks such as grounded captioning require the model to generate a text description and align predicted words with object regions. To achieve this, models must generate the desired text and box outputs together and, meanwhile, indicate the alignments between words and boxes. In contrast to existing solutions that use multiple separate modules for different outputs, UniTAB represents both text and box outputs with a shared token sequence and introduces a special <obj> token to naturally indicate word-box alignments in the sequence. UniTAB can thus provide a more comprehensive and interpretable image description by freely grounding generated words to object regions. On grounded captioning, UniTAB presents a simpler solution with a single output head and significantly outperforms the state of the art in both grounding and captioning evaluations. On general VL tasks that have different desired output formats (i.e., text, box, or their combination), UniTAB with a single network achieves performance better than or comparable to task-specific state of the art. Experiments cover 7 VL benchmarks, including grounded captioning, visual grounding, image captioning, and visual question answering. Furthermore, UniTAB's unified multi-task network and task-agnostic output sequence design make the model parameter-efficient and generalizable to new tasks.
[ { "version": "v1", "created": "Tue, 23 Nov 2021 18:59:14 GMT" }, { "version": "v2", "created": "Wed, 27 Jul 2022 17:56:35 GMT" } ]
2022-07-28T00:00:00
[ [ "Yang", "Zhengyuan", "" ], [ "Gan", "Zhe", "" ], [ "Wang", "Jianfeng", "" ], [ "Hu", "Xiaowei", "" ], [ "Ahmed", "Faisal", "" ], [ "Liu", "Zicheng", "" ], [ "Lu", "Yumao", "" ], [ "Wang", "Lijuan", "" ] ]
new_dataset
0.987474
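To make the shared text-and-box token sequence from the UniTAB abstract concrete, here is a hypothetical serialization in which box coordinates are quantized into bin tokens and an <obj> marker ties them to the preceding word. The bin count, marker names, and sequence layout are assumptions, not the paper's exact vocabulary.

```python
NUM_BINS = 1000  # coordinates quantized into discrete bins (assumed)

def box_tokens(box, img_w, img_h):
    """Quantize (x1, y1, x2, y2) pixel coordinates into discrete bin tokens."""
    x1, y1, x2, y2 = box
    norm = (x1 / img_w, y1 / img_h, x2 / img_w, y2 / img_h)
    return [f"<bin_{int(v * (NUM_BINS - 1))}>" for v in norm]

def grounded_caption_tokens(words, grounded, img_w=640, img_h=480):
    """words: caption tokens; grounded: word index -> bounding box.
    Text and boxes share one flat output sequence."""
    seq = []
    for i, w in enumerate(words):
        seq.append(w)
        if i in grounded:
            seq += ["<obj>"] + box_tokens(grounded[i], img_w, img_h) + ["</obj>"]
    return seq

print(grounded_caption_tokens(["a", "dog", "on", "grass"],
                              {1: (120, 80, 320, 300)}))
```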
2111.13981
Dominic Baril
Dominic Baril, Simon-Pierre Desch\^enes, Olivier Gamache, Maxime Vaidis, Damien LaRocque, Johann Laconte, Vladim\'ir Kubelka, Philippe Gigu\`ere, Fran\c{c}ois Pomerleau
Kilometer-scale autonomous navigation in subarctic forests: challenges and lessons learned
Published in Field Robotics Volume 2. Paper number 50
null
10.55417/fr.2022050
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Challenges inherent to autonomous wintertime navigation in forests include the lack of a reliable Global Navigation Satellite System (GNSS) signal, low feature contrast, high illumination variation, and a changing environment. This type of off-road environment is an extreme case of the situations autonomous cars could encounter in northern regions. Thus, it is important to understand the impact of this harsh environment on autonomous navigation systems. To this end, we present a field report analyzing teach-and-repeat navigation in a subarctic forest while subject to fluctuating weather, including light and heavy snow, rain, and drizzle. First, we describe the system, which relies on point cloud registration to localize a mobile robot through a boreal forest while simultaneously building a map. We experimentally evaluate this system over 18.8 km of autonomous navigation in the teach-and-repeat mode. Over 14 repeat runs, only four manual interventions were required: three due to localization failure and one caused by a battery power outage. We show that dense vegetation perturbs the GNSS signal, rendering it unsuitable for navigation on forest trails. Furthermore, we highlight the increased uncertainty of localizing via point cloud registration on forest trails. We demonstrate that it is not snow precipitation but snow accumulation that affects our system's ability to localize within the environment. Finally, we expose some challenges and lessons learned from our field campaign to support better experimental work in winter conditions. Our dataset is available online.
[ { "version": "v1", "created": "Sat, 27 Nov 2021 20:39:53 GMT" }, { "version": "v2", "created": "Tue, 26 Jul 2022 20:44:10 GMT" } ]
2022-07-28T00:00:00
[ [ "Baril", "Dominic", "" ], [ "Deschênes", "Simon-Pierre", "" ], [ "Gamache", "Olivier", "" ], [ "Vaidis", "Maxime", "" ], [ "LaRocque", "Damien", "" ], [ "Laconte", "Johann", "" ], [ "Kubelka", "Vladimír", "" ], [ "Giguère", "Philippe", "" ], [ "Pomerleau", "François", "" ] ]
new_dataset
0.996549
2203.16875
Xiangjun Gao
Xiangjun Gao, Jiaolong Yang, Jongyoo Kim, Sida Peng, Zicheng Liu, Xin Tong
MPS-NeRF: Generalizable 3D Human Rendering from Multiview Images
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There has been rapid progress recently on 3D human rendering, including novel view synthesis and pose animation, based on advances in neural radiance fields (NeRF). However, most existing methods focus on person-specific training, and their training typically requires multi-view videos. This paper deals with a new, challenging task: rendering novel views and novel poses for a person unseen in training, using only multiview images as input. For this task, we propose a simple yet effective method to train a generalizable NeRF with multiview images as conditional input. The key ingredient is a dedicated representation combining a canonical NeRF and a volume deformation scheme. Using a canonical space enables our method to learn shared properties of humans and to generalize easily to different people. Volume deformation is used to connect the canonical space with the input and target images and to query image features for radiance and density prediction. We leverage the parametric 3D human model fitted on the input images to derive the deformation, which works quite well in practice when combined with our canonical NeRF. The experiments on both real and synthetic data, covering the novel view synthesis and pose animation tasks, collectively demonstrate the efficacy of our method.
[ { "version": "v1", "created": "Thu, 31 Mar 2022 08:09:03 GMT" }, { "version": "v2", "created": "Wed, 27 Jul 2022 06:10:50 GMT" } ]
2022-07-28T00:00:00
[ [ "Gao", "Xiangjun", "" ], [ "Yang", "Jiaolong", "" ], [ "Kim", "Jongyoo", "" ], [ "Peng", "Sida", "" ], [ "Liu", "Zicheng", "" ], [ "Tong", "Xin", "" ] ]
new_dataset
0.990914
2204.00833
Jing He
Jing He, Yiyi Zhou, Qi Zhang, Jun Peng, Yunhang Shen, Xiaoshuai Sun, Chao Chen, Rongrong Ji
PixelFolder: An Efficient Progressive Pixel Synthesis Network for Image Generation
Accepted by ECCV2022. The code is available at https://github.com/BlingHe/PixelFolder
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pixel synthesis is a promising research paradigm for image generation, as it can well exploit pixel-wise prior knowledge for generation. However, existing methods still suffer from excessive memory footprint and computation overhead. In this paper, we propose a progressive pixel synthesis network for efficient image generation, coined PixelFolder. Specifically, PixelFolder formulates image generation as a progressive pixel regression problem and synthesizes images via a multi-stage structure, which can greatly reduce the overhead caused by large tensor transformations. In addition, we introduce novel pixel folding operations to further improve model efficiency while maintaining pixel-wise prior knowledge for end-to-end regression. With these innovative designs, we greatly reduce the expenditure of pixel synthesis, e.g., reducing computation by 89% and parameters by 53% compared with the latest pixel synthesis method, CIPS. To validate our approach, we conduct extensive experiments on two benchmark datasets, namely FFHQ and LSUN Church. The experimental results show that, with much less expenditure, PixelFolder obtains new state-of-the-art (SOTA) performance on the two benchmark datasets, i.e., 3.77 FID and 2.45 FID on FFHQ and LSUN Church, respectively. Meanwhile, PixelFolder is also more efficient than SOTA methods like StyleGAN2, reducing computation by about 72% and parameters by about 31%. These results strongly validate the effectiveness of the proposed PixelFolder.
[ { "version": "v1", "created": "Sat, 2 Apr 2022 10:55:11 GMT" }, { "version": "v2", "created": "Sun, 5 Jun 2022 06:24:44 GMT" }, { "version": "v3", "created": "Mon, 25 Jul 2022 04:13:03 GMT" }, { "version": "v4", "created": "Wed, 27 Jul 2022 06:40:18 GMT" } ]
2022-07-28T00:00:00
[ [ "He", "Jing", "" ], [ "Zhou", "Yiyi", "" ], [ "Zhang", "Qi", "" ], [ "Peng", "Jun", "" ], [ "Shen", "Yunhang", "" ], [ "Sun", "Xiaoshuai", "" ], [ "Chen", "Chao", "" ], [ "Ji", "Rongrong", "" ] ]
new_dataset
0.999328
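One plausible reading of the "pixel folding" operations in the PixelFolder abstract is a reversible space-to-depth rearrangement that trades spatial resolution for channels, so later stages work on smaller tensors. The sketch below illustrates that general idea; whether it matches the paper's exact operation is an assumption.

```python
import numpy as np

def fold(x, r=2):
    """(H, W, C) -> (H/r, W/r, C*r*r): group each r-by-r block into channels."""
    h, w, c = x.shape
    x = x.reshape(h // r, r, w // r, r, c)
    return x.transpose(0, 2, 1, 3, 4).reshape(h // r, w // r, c * r * r)

def unfold(x, r=2):
    """Inverse of fold: (H/r, W/r, C*r*r) -> (H, W, C)."""
    h, w, crr = x.shape
    c = crr // (r * r)
    x = x.reshape(h, w, r, r, c).transpose(0, 2, 1, 3, 4)
    return x.reshape(h * r, w * r, c)

img = np.random.rand(8, 8, 3)
assert np.allclose(unfold(fold(img)), img)  # folding is lossless
```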
2204.06681
Youngho Kim
Jihoon Ryoo, Byungkon Kang, Dongyeob Lee, Seunghyeon Kim, Youngho Kim
MINSU (Mobile Inventory And Scanning Unit): Computer Vision and AI
Needs to be updated
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
The MINSU (Mobile Inventory and Scanning Unit) algorithm uses computer vision analysis to record the residual quantity (fullness) of a cabinet. To do so, it goes through a five-step method: object detection, foreground subtraction, K-means clustering, percentage estimation, and counting. The input image goes through the object detection step to determine the coordinates of the cabinets. It then goes through foreground subtraction to focus the image on the cabinet itself by removing the background (some manual work may be required, such as selecting parts that the grab-cut algorithm missed). In the K-means clustering step, the multi-colored image is reduced to a three-color image for quicker and more accurate analysis. Finally, the image goes through percentage estimation and counting: the proportion of material inside the cabinet is estimated as a percentage, which is then used to approximate the number of items inside. If successful, this residual quantity management could solve the problem addressed in the introduction.
[ { "version": "v1", "created": "Thu, 14 Apr 2022 00:21:14 GMT" }, { "version": "v2", "created": "Tue, 26 Jul 2022 12:19:05 GMT" }, { "version": "v3", "created": "Wed, 27 Jul 2022 06:17:15 GMT" } ]
2022-07-28T00:00:00
[ [ "Ryoo", "Jihoon", "" ], [ "Kang", "Byungkon", "" ], [ "Lee", "Dongyeob", "" ], [ "Kim", "Seunghyeon", "" ], [ "Kim", "Youngho", "" ] ]
new_dataset
0.99917
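The K-means and percentage-estimation steps of the MINSU pipeline above can be sketched as color quantization over foreground pixels followed by a cluster-share computation. The choice of k = 3 follows the abstract; which cluster corresponds to "material" is an assumption made for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans

def fullness_percentage(pixels_rgb, material_cluster=0, k=3):
    """pixels_rgb: (N, 3) array of foreground (cabinet) pixels.
    Quantize to k colors, then report the share of the assumed material cluster."""
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(pixels_rgb)
    return 100.0 * np.mean(labels == material_cluster)

rng = np.random.default_rng(0)
pixels = rng.random((5000, 3))  # stand-in for masked cabinet pixels
print(f"{fullness_percentage(pixels):.1f}% of pixels in cluster 0")
```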
2206.10708
Zhiyang Chen
Zhiyang Chen, Sidi Mohamed Beillahi, Fan Long
FlashSyn: Flash Loan Attack Synthesis via Counter Example Driven Approximation
29 pages, 8 figures, technical report
null
null
null
cs.PL cs.SE
http://creativecommons.org/licenses/by/4.0/
In the decentralized finance (DeFi) ecosystem, lenders can offer flash loans to borrowers, i.e., loans that are only valid within a blockchain transaction and must be repaid with some fees by the end of that transaction. Unlike normal loans, flash loans allow borrowers to borrow a large amount of assets without upfront collateral deposits. Malicious adversaries can use flash loans to gather a large amount of assets to launch costly exploits targeting DeFi protocols. In this paper, we introduce a new framework for the automated synthesis of adversarial contracts that exploit DeFi protocols using flash loans. To bypass the complexity of a DeFi protocol, we propose a new technique to approximate the protocol's functional behaviors using numerical methods (polynomial linear regression and nearest-neighbor interpolation). We then construct an optimization query using the approximated functions of the DeFi protocol to find an adversarial attack, constituted of a sequence of function invocations with optimal parameters, that yields the maximum profit. To improve the accuracy of the approximation, we propose a new counterexample-driven approximation refinement technique. We implement our framework in a tool called FlashSyn. We evaluate FlashSyn on 12 DeFi protocols that were victims of flash loan attacks as well as on DeFi protocols from the Damn Vulnerable DeFi challenges. FlashSyn automatically synthesizes an adversarial attack for each of them.
[ { "version": "v1", "created": "Tue, 21 Jun 2022 19:56:54 GMT" }, { "version": "v2", "created": "Tue, 26 Jul 2022 20:52:53 GMT" } ]
2022-07-28T00:00:00
[ [ "Chen", "Zhiyang", "" ], [ "Beillahi", "Sidi Mohamed", "" ], [ "Long", "Fan", "" ] ]
new_dataset
0.978563
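A toy version of the counterexample-driven approximation loop in the FlashSyn abstract: fit a polynomial surrogate to sampled protocol behavior, optimize the surrogate, then query the true function at the candidate optimum and refit. `protocol_profit` is a hypothetical stand-in for a real DeFi protocol; the loop structure is a simplified reading of the paper's technique.

```python
import numpy as np

def protocol_profit(x):            # hypothetical ground truth we can only sample
    return -0.002 * (x - 350.0) ** 2 + 90.0

xs = np.linspace(0, 1000, 12)      # initial samples of protocol behavior
ys = protocol_profit(xs)

for _ in range(5):
    coeffs = np.polyfit(xs, ys, deg=2)              # polynomial surrogate
    grid = np.linspace(0, 1000, 2001)
    cand = grid[np.argmax(np.polyval(coeffs, grid))]  # optimize the surrogate
    # refinement: query the true function at the surrogate's optimum and
    # add the observation (a "counterexample" if the surrogate was wrong)
    xs, ys = np.append(xs, cand), np.append(ys, protocol_profit(cand))

print(f"estimated optimal borrow amount: {cand:.1f}")
```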
2207.10763
Yijiong Lin
Yijiong Lin, John Lloyd, Alex Church, Nathan F. Lepora
Tactile Gym 2.0: Sim-to-real Deep Reinforcement Learning for Comparing Low-cost High-Resolution Robot Touch
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
High-resolution optical tactile sensors are increasingly used in robotic learning environments due to their ability to capture large amounts of data directly relating to agent-environment interaction. However, there is a high barrier of entry to research in this area due to the high cost of tactile robot platforms, specialised simulation software, and sim-to-real methods that lack generality across different sensors. In this letter, we extend the Tactile Gym simulator to include three new optical tactile sensors (TacTip, DIGIT and DigiTac) of the two most popular types: Gelsight-style (image-shading based) and TacTip-style (marker based). We demonstrate that a single sim-to-real approach can be used with these three different sensors to achieve strong real-world performance despite the significant differences between real tactile images. Additionally, we lower the barrier of entry to the proposed tasks by adapting them to an inexpensive 4-DoF robot arm, further enabling the dissemination of this benchmark. We validate the extended environment on three physically interactive tasks requiring a sense of touch: object pushing, edge following and surface following. The results of our experimental validation highlight some differences between these sensors, which may help future researchers select and customize the physical characteristics of tactile sensors for different manipulation scenarios.
[ { "version": "v1", "created": "Thu, 21 Jul 2022 21:24:24 GMT" }, { "version": "v2", "created": "Wed, 27 Jul 2022 15:55:19 GMT" } ]
2022-07-28T00:00:00
[ [ "Lin", "Yijiong", "" ], [ "Lloyd", "John", "" ], [ "Church", "Alex", "" ], [ "Lepora", "Nathan F.", "" ] ]
new_dataset
0.968704
2207.12585
Yijun Yan Dr
Jing Geng, Li'e Ma, Xiaoquan Li, Yijun Yan
PTGCF: Printing Texture Guided Color Fusion for Impressionism Oil Painting Style Rendering
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
As a major branch of Non-Photorealistic Rendering (NPR), image stylization mainly uses computer algorithms to render a photo into an artistic painting. Recent work has shown that extracting style information, such as the stroke texture and color of the target style image, is the key to image stylization. Based on these stroke texture and color characteristics, a new stroke rendering method is proposed that fully considers the tonal characteristics and the representative color of the original oil painting, in order to fit the tone of the original oil painting into the stylized image and bring it close to the artist's creative effect. The experiments have validated the efficacy of the proposed model. This method is more suitable for the works of pointillism painters with a relatively uniform sense of direction, especially for natural scenes. When the original painting's brush strokes have a clearer sense of direction, using this method to simulate brushwork texture features can be less satisfactory.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 00:31:23 GMT" }, { "version": "v2", "created": "Wed, 27 Jul 2022 10:12:12 GMT" } ]
2022-07-28T00:00:00
[ [ "Geng", "Jing", "" ], [ "Ma", "Li'e", "" ], [ "Li", "Xiaoquan", "" ], [ "Yan", "Yijun", "" ] ]
new_dataset
0.998311
2207.13147
Nofel Yaseen
Nofel Yaseen, Liangcheng Yu, Caleb Stanford, Ryan Beckett, Vincent Liu
FP4: Line-rate Greybox Fuzz Testing for P4 Switches
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Compared to fixed-function switches, the flexibility of programmable switches comes at a cost, as programmer mistakes frequently result in subtle bugs in the network data plane. In this paper, we present the design and implementation of FP4, a fuzz-testing framework for P4 switches that achieves high expressiveness, coverage, and scalability. FP4 directly tests running switches by generating semi-random input packets and observing their resulting execution in the data plane. To achieve high coverage and scalability, FP4 leverages P4 itself at runtime with another "tester" switch that generates and mutates billions of test packets per second entirely in the data plane. Because testing some program branches requires navigating complex semantic input requirements, FP4 additionally leverages the programmability of P4 by instrumenting the tested program to pass coverage information back to the tester through the packet header. We present case studies showing that FP4 can validate both safety and stateful properties, improves efficiency over existing random packet generation baselines, and reaches 100% coverage in under a minute on a wide range of examples.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 18:59:50 GMT" } ]
2022-07-28T00:00:00
[ [ "Yaseen", "Nofel", "" ], [ "Yu", "Liangcheng", "" ], [ "Stanford", "Caleb", "" ], [ "Beckett", "Ryan", "" ], [ "Liu", "Vincent", "" ] ]
new_dataset
0.996676
2207.13259
Wangmeng Xiang
Wangmeng Xiang, Chao Li, Biao Wang, Xihan Wei, Xian-Sheng Hua, Lei Zhang
Spatiotemporal Self-attention Modeling with Temporal Patch Shift for Action Recognition
Accepted by ECCV22
null
null
null
cs.CV cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Transformer-based methods have recently achieved great advances on 2D image-based vision tasks. For 3D video-based tasks such as action recognition, however, directly applying spatiotemporal transformers to video data brings heavy computation and memory burdens due to the largely increased number of patches and the quadratic complexity of self-attention computation. How to efficiently and effectively model the 3D self-attention of video data has been a great challenge for transformers. In this paper, we propose a Temporal Patch Shift (TPS) method for efficient 3D self-attention modeling in transformers for video-based action recognition. TPS shifts part of the patches with a specific mosaic pattern in the temporal dimension, thus converting a vanilla spatial self-attention operation into a spatiotemporal one with little additional cost. As a result, we can compute 3D self-attention using nearly the same computation and memory cost as 2D self-attention. TPS is a plug-and-play module and can be inserted into existing 2D transformer models to enhance spatiotemporal feature learning. The proposed method achieves performance competitive with the state of the art on Something-Something V1 & V2, Diving-48, and Kinetics400, while being much more efficient in computation and memory cost. The source code of TPS can be found at https://github.com/MartinXM/TPS.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 02:47:07 GMT" } ]
2022-07-28T00:00:00
[ [ "Xiang", "Wangmeng", "" ], [ "Li", "Chao", "" ], [ "Wang", "Biao", "" ], [ "Wei", "Xihan", "" ], [ "Hua", "Xian-Sheng", "" ], [ "Zhang", "Lei", "" ] ]
new_dataset
0.968407
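The core tensor manipulation behind the temporal patch shift described above can be sketched in a few lines: replace a subset of patch tokens in each frame with the corresponding tokens from the previous frame, so plain spatial self-attention sees some temporal context for free. The pattern below (every fourth patch, one-frame shift) is an illustrative assumption, not the paper's mosaic pattern.

```python
import torch

def temporal_patch_shift(x, stride=4):
    """x: (B, T, N, C) patch tokens. Every `stride`-th patch is taken from the
    previous frame; frame 0 keeps its own patches."""
    shifted = x.clone()
    shifted[:, 1:, ::stride, :] = x[:, :-1, ::stride, :]
    return shifted

x = torch.randn(2, 8, 196, 768)  # batch, frames, patches, channels
print(temporal_patch_shift(x).shape)  # torch.Size([2, 8, 196, 768])
```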
2207.13264
Rohan Pratap Singh
Rohan Pratap Singh, Iori Kumagai, Antonio Gabas, Mehdi Benallegue, Yusuke Yoshiyasu, Fumio Kanehiro
Instance-specific 6-DoF Object Pose Estimation from Minimal Annotations
GitHub code: https://github.com/rohanpsingh/ObjectKeypointTrainer
2020 IEEE/SICE International Symposium on System Integration (SII)
10.1109/SII46433.2020.9026239
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
In many robotic applications, the environment setting in which the 6-DoF pose estimation of a known, rigid object and its subsequent grasping is to be performed, remains nearly unchanging and might even be known to the robot in advance. In this paper, we refer to this problem as instance-specific pose estimation: the robot is expected to estimate the pose with a high degree of accuracy in only a limited set of familiar scenarios. Minor changes in the scene, including variations in lighting conditions and background appearance, are acceptable but drastic alterations are not anticipated. To this end, we present a method to rapidly train and deploy a pipeline for estimating the continuous 6-DoF pose of an object from a single RGB image. The key idea is to leverage known camera poses and rigid body geometry to partially automate the generation of a large labeled dataset. The dataset, along with sufficient domain randomization, is then used to supervise the training of deep neural networks for predicting semantic keypoints. Experimentally, we demonstrate the convenience and effectiveness of our proposed method to accurately estimate object pose requiring only a very small amount of manual annotation for training.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 03:00:28 GMT" } ]
2022-07-28T00:00:00
[ [ "Singh", "Rohan Pratap", "" ], [ "Kumagai", "Iori", "" ], [ "Gabas", "Antonio", "" ], [ "Benallegue", "Mehdi", "" ], [ "Yoshiyasu", "Yusuke", "" ], [ "Kanehiro", "Fumio", "" ] ]
new_dataset
0.996716
2207.13315
Yixuan Fan
Yixuan Fan, Zhaopeng Dou, Yali Li, Shengjin Wang
Portrait Interpretation and a Benchmark
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a task we name Portrait Interpretation and construct a dataset named Portrait250K for it. Current research on portraits, such as human attribute recognition and person re-identification, has achieved many successes, but generally it: 1) may neglect the interrelationships between various tasks and the possible benefits they may bring; 2) designs deep models specifically for each task, which is inefficient; and 3) may be unable to meet the needs of a unified model and comprehensive perception in real scenes. In this paper, the proposed portrait interpretation recognizes the perception of humans from a new, systematic perspective. We divide the perception of portraits into three aspects, namely Appearance, Posture, and Emotion, and design corresponding sub-tasks for each aspect. Based on the framework of multi-task learning, portrait interpretation requires a comprehensive description of the static attributes and dynamic states of portraits. To invigorate research on this new task, we construct a new dataset that contains 250,000 images labeled with the identity, gender, age, physique, height, expression, and posture of the whole body and arms. Our dataset is collected from 51 movies, hence covering extensive diversity. Furthermore, we focus on representation learning for portrait interpretation and propose a baseline that reflects our systematic perspective. We also propose an appropriate metric for this task. Our experimental results demonstrate that combining the tasks related to portrait interpretation can yield benefits. Code and dataset will be made public.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 06:25:09 GMT" } ]
2022-07-28T00:00:00
[ [ "Fan", "Yixuan", "" ], [ "Dou", "Zhaopeng", "" ], [ "Li", "Yali", "" ], [ "Wang", "Shengjin", "" ] ]
new_dataset
0.999577
2207.13326
Daizong Liu
Daizong Liu, Wei Hu, Xin Li
Point Cloud Attacks in Graph Spectral Domain: When 3D Geometry Meets Graph Signal Processing
arXiv admin note: substantial text overlap with arXiv:2202.07261
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With increasing attention to various 3D safety-critical applications, point cloud learning models have been shown to be vulnerable to adversarial attacks. Although existing 3D attack methods achieve high success rates, they delve into the data space with point-wise perturbations, which may neglect geometric characteristics. Instead, we propose point cloud attacks from a new perspective -- the graph spectral domain attack -- aiming to perturb graph transform coefficients in the spectral domain, which corresponds to varying certain geometric structures. Specifically, leveraging graph signal processing, we first adaptively transform the coordinates of points into the spectral domain via the graph Fourier transform (GFT) for compact representation. Then, we analyze the influence of different spectral bands on the geometric structure, based on which we propose to perturb the GFT coefficients via a learnable graph spectral filter. Considering that the low-frequency components mainly contribute to the rough shape of the 3D object, we further introduce a low-frequency constraint to limit perturbations to imperceptible high-frequency components. Finally, the adversarial point cloud is generated by transforming the perturbed spectral representation back to the data domain via the inverse GFT. Experimental results demonstrate the effectiveness of the proposed attack in terms of both imperceptibility and attack success rates.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 07:02:36 GMT" } ]
2022-07-28T00:00:00
[ [ "Liu", "Daizong", "" ], [ "Hu", "Wei", "" ], [ "Li", "Xin", "" ] ]
new_dataset
0.99567
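The graph spectral machinery in the abstract above (a GFT of point coordinates) can be reproduced in a few lines with an eigendecomposition of a k-NN graph Laplacian. The value of k and the unweighted adjacency are simplifying assumptions relative to the paper's adaptive construction.

```python
import numpy as np

def gft(points, k=8):
    """points: (N, 3). Returns (spectral coefficients, GFT eigenbasis)."""
    n = len(points)
    d2 = ((points[:, None] - points[None]) ** 2).sum(-1)  # pairwise sq. dist.
    knn = np.argsort(d2, axis=1)[:, 1:k + 1]              # skip self at index 0
    A = np.zeros((n, n))
    for i, nbrs in enumerate(knn):
        A[i, nbrs] = A[nbrs, i] = 1.0      # symmetrized k-NN adjacency
    L = np.diag(A.sum(1)) - A              # combinatorial graph Laplacian
    _, U = np.linalg.eigh(L)               # eigenvectors = GFT basis
    return U.T @ points, U                 # project coordinates onto the basis

pts = np.random.rand(128, 3)
coeffs, U = gft(pts)
assert np.allclose(U @ coeffs, pts)        # inverse GFT recovers the points
```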
2207.13332
Jungo Kasai
Jungo Kasai, Keisuke Sakaguchi, Yoichi Takahashi, Ronan Le Bras, Akari Asai, Xinyan Yu, Dragomir Radev, Noah A. Smith, Yejin Choi, Kentaro Inui
RealTime QA: What's the Answer Right Now?
RealTime QA Website: https://realtimeqa.github.io/
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We introduce RealTime QA, a dynamic question answering (QA) platform that announces questions and evaluates systems on a regular basis (weekly in this version). RealTime QA inquires about the current world, and QA systems need to answer questions about novel events or information. It therefore challenges static, conventional assumptions in open-domain QA datasets and pursues instantaneous applications. We build strong baseline models upon large pretrained language models, including GPT-3 and T5. Our benchmark is an ongoing effort, and this preliminary report presents real-time evaluation results over the past month. Our experimental results show that GPT-3 can often properly update its generation results based on newly retrieved documents, highlighting the importance of up-to-date information retrieval. Nonetheless, we find that GPT-3 tends to return outdated answers when the retrieved documents do not provide sufficient information to find an answer. This suggests an important avenue for future research: can an open-domain QA system identify such unanswerable cases and communicate with the user or even the retrieval module to modify the retrieval results? We hope that RealTime QA will spur progress in instantaneous applications of question answering and beyond.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 07:26:01 GMT" } ]
2022-07-28T00:00:00
[ [ "Kasai", "Jungo", "" ], [ "Sakaguchi", "Keisuke", "" ], [ "Takahashi", "Yoichi", "" ], [ "Bras", "Ronan Le", "" ], [ "Asai", "Akari", "" ], [ "Yu", "Xinyan", "" ], [ "Radev", "Dragomir", "" ], [ "Smith", "Noah A.", "" ], [ "Choi", "Yejin", "" ], [ "Inui", "Kentaro", "" ] ]
new_dataset
0.998915
2207.13370
Haoran Xie
Mikiya Kusunoki, Shogo Yoshida, Haoran Xie
MagGlove: A Haptic Glove with Movable Magnetic Force for Manipulation Learning
4 pages, 8 figures, accepted in the proceedings of Cyberworlds 2022
null
null
null
cs.HC cs.RO
http://creativecommons.org/licenses/by/4.0/
Recently, haptic gloves have been extensively explored for various practical applications, such as manipulation learning. Previous glove devices use different force-driven systems, such as shape memory alloys, servo motors, and pneumatic actuators; however, these devices may struggle with fast finger movement, may be hard to reproduce, and may raise safety issues. In this study, we propose MagGlove, a novel haptic glove with a movable magnet mechanism driven by a linear motor, to solve these issues. The proposed MagGlove device is a compact system on the back of the wearer's hand with high responsiveness, ease of use, and good safety. The device adapts by modifying the magnitude of the current flowing through its coil. Based on our evaluation study, it is verified that the proposed device can achieve finger motion in the given tasks. Therefore, MagGlove can provide flexible support tailored to the wearer's learning level in manipulation learning tasks.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 08:54:35 GMT" } ]
2022-07-28T00:00:00
[ [ "Kusunoki", "Mikiya", "" ], [ "Yoshida", "Shogo", "" ], [ "Xie", "Haoran", "" ] ]
new_dataset
0.999294
2207.13419
Chintan Patel
Chintan Patel, Ali Kashif Bashir, Ahmad Ali AlZubi, Rutvij H Jhaveri
EBAKE-SE: A Novel ECC Based Authenticated Key Exchange between Industrial IoT Devices using Secure Element
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Industrial IoT (IIoT) aims to enhance the services provided by various industries, such as manufacturing and product processing. IIoT faces various challenges, and security is one of the key ones. Authentication and access control are two notable challenges for any IIoT-based industrial deployment. Any IoT-based Industry 4.0 enterprise designs networks between hundreds of tiny devices such as sensors, actuators, fog devices, and gateways. Thus, articulating a secure authentication protocol between sensing devices, or between a sensing device and a user device, is an essential step in IoT security. In this paper, we first present a cryptanalysis of the certificate-based scheme proposed for a similar environment by Das et al. and show that their scheme fails to provide device anonymity and is vulnerable to traditional attacks such as MITM and DoS. We then put forward an inter-device authentication scheme using ECC (Elliptic Curve Cryptography) that is highly secure and lightweight compared with other schemes for a similar environment. Furthermore, we set forth a formal security analysis using the random-oracle-based ROR model and an informal security analysis over the Dolev-Yao channel. We compare the proposed scheme with existing schemes based on communication cost, computation cost, and security index to show that the proposed EBAKE-SE is highly efficient, reliable, and trustworthy compared with other existing schemes for inter-device authentication. Finally, we present an implementation of the proposed EBAKE-SE using the MQTT protocol.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 09:58:11 GMT" } ]
2022-07-28T00:00:00
[ [ "Patela", "Chintan", "" ], [ "Bashirb", "Ali Kashif", "" ], [ "AlZubic", "Ahmad Ali", "" ], [ "Jhaveri", "Rutvij H", "" ] ]
new_dataset
0.999162
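To make the ECC primitive underlying schemes like EBAKE-SE concrete, the sketch below runs a plain ECDH exchange with an HKDF-derived session key using the `cryptography` package. The actual protocol adds authentication, secure-element key storage, and freshness guarantees that this sketch deliberately omits; it only shows the shared-secret agreement step.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Each device holds a long-term (or ephemeral) EC key pair on P-256.
dev_a_priv = ec.generate_private_key(ec.SECP256R1())
dev_b_priv = ec.generate_private_key(ec.SECP256R1())

# Each side combines its private key with the peer's public key.
shared_a = dev_a_priv.exchange(ec.ECDH(), dev_b_priv.public_key())
shared_b = dev_b_priv.exchange(ec.ECDH(), dev_a_priv.public_key())
assert shared_a == shared_b  # both derive the same shared secret

# Derive a fixed-length session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"demo-session").derive(shared_a)
print(session_key.hex())
```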
2207.13479
Xiaojie Jin Mr.
Yaojie Shen, Libo Zhang, Kai Xu, Xiaojie Jin
AutoTransition: Learning to Recommend Video Transition Effects
To appear at ECCV 2022
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by/4.0/
Video transition effects are widely used in video editing to connect shots, creating cohesive and visually appealing videos. However, it is challenging for non-professionals to choose the best transitions due to a lack of cinematographic knowledge and design skills. In this paper, we present the first work on automatic video transition recommendation (VTR): given a sequence of raw video shots and companion audio, recommend video transitions for each pair of neighboring shots. To solve this task, we collect a large-scale video transition dataset using publicly available video templates from editing software. We then formulate VTR as a multi-modal retrieval problem from vision/audio to video transitions and propose a novel multi-modal matching framework consisting of two parts. First, we learn the embedding of video transitions through a video transition classification task. Then, we propose a model to learn the matching correspondence from vision/audio inputs to video transitions. Specifically, the proposed model employs a multi-modal transformer to fuse vision and audio information, as well as to capture context cues in sequential transition outputs. Through both quantitative and qualitative experiments, we clearly demonstrate the effectiveness of our method. Notably, in a comprehensive user study, our method receives scores comparable to those of professional editors while improving video editing efficiency by 300$\times$. We hope our work inspires other researchers to work on this new task. The dataset and codes are public at \url{https://github.com/acherstyx/AutoTransition}.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 12:00:42 GMT" } ]
2022-07-28T00:00:00
[ [ "Shen", "Yaojie", "" ], [ "Zhang", "Libo", "" ], [ "Xu", "Kai", "" ], [ "Jin", "Xiaojie", "" ] ]
new_dataset
0.994539
2207.13560
Weiqi Li
Weiqi Li, Bin Chen, Jian Zhang
D3C2-Net: Dual-Domain Deep Convolutional Coding Network for Compressive Sensing
null
null
null
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
By mapping optimization algorithms into neural networks, deep unfolding networks (DUNs) have achieved impressive success in compressive sensing (CS). From the perspective of optimization, DUNs inherit a well-defined and interpretable structure from their iterative steps. However, from the viewpoint of neural network design, most existing DUNs are based on traditional image-domain unfolding, which takes one-channel images as the inputs and outputs between adjacent stages, resulting in insufficient information transmission capability and inevitable loss of image detail. In this paper, to break this bottleneck, we first propose a generalized dual-domain optimization framework, which is general for inverse imaging and integrates the merits of both (1) image-domain and (2) convolutional-coding-domain priors to constrain the feasible region of the solution space. By unfolding the proposed framework into deep neural networks, we further design a novel Dual-Domain Deep Convolutional Coding Network (D3C2-Net) for CS imaging, with the capability of transmitting a high-throughput feature-level image representation through all the unfolded stages. Experiments on natural and MR images demonstrate that our D3C2-Net achieves higher performance and better accuracy-complexity trade-offs than other state-of-the-art methods.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 14:52:32 GMT" } ]
2022-07-28T00:00:00
[ [ "Li", "Weiqi", "" ], [ "Chen", "Bin", "" ], [ "Zhang", "Jian", "" ] ]
new_dataset
0.971029
2207.13638
Muhammed Yusuf Ozkaya
M. Yusuf \"Ozkaya and \"Umit V. \c{C}ataly\"urek
A Simple and Elegant Mathematical Formulation for the Acyclic DAG Partitioning Problem
10+2 pages, 1 figure (4 subfigures)
null
null
null
cs.DS cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work addresses the NP-hard problem of acyclic directed acyclic graph (DAG) partitioning. The acyclic partitioning problem is defined as partitioning the vertex set of a given directed acyclic graph into disjoint and collectively exhaustive subsets (parts). Parts are to be assigned such that the total vertex weight within each part satisfies a common upper bound and the total cost of the edges that connect nodes across different parts is minimized. Additionally, the quotient graph (i.e., the induced graph in which all nodes assigned to the same part are contracted to a single node and their edges are replaced with cumulative edges towards other nodes) must itself be a directed acyclic graph; that is, it contains no cycles. Many computational and real-life applications, such as computational task scheduling, RTL simulation, scheduling of rail-rail transshipment tasks, and Very Large Scale Integration (VLSI) design, make use of acyclic DAG partitioning. We address the need for a simple and elegant mathematical formulation of the acyclic DAG partitioning problem that enables easier understanding, communication, implementation, and experimentation.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 16:57:00 GMT" } ]
2022-07-28T00:00:00
[ [ "Özkaya", "M. Yusuf", "" ], [ "Çatalyürek", "Ümit V.", "" ] ]
new_dataset
0.995154
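As a point of reference for the kind of formulation the abstract advertises, a textbook-style ILP sketch of acyclic DAG partitioning is given below, with $x_{vk}$ indicating assignment of vertex $v$ to part $k$ and parts ordered consistently with edges to keep the quotient graph acyclic. This is an assumed standard formulation, not necessarily the paper's.

```latex
% Assumed ILP-style sketch; \pi(v) denotes the index of v's part.
\begin{align*}
\min\ & \textstyle\sum_{(u,v)\in E} c_{uv}\, z_{uv}
      && \text{(total cost of cut edges)}\\
\text{s.t.}\ & \textstyle\sum_{k} x_{vk} = 1
      && \forall v \in V \quad \text{(each vertex in exactly one part)}\\
& \textstyle\sum_{v\in V} w_v\, x_{vk} \le B
      && \forall k \quad \text{(part weight upper bound)}\\
& \pi(v) = \textstyle\sum_{k} k\, x_{vk}
      && \forall v \in V\\
& \pi(u) \le \pi(v)
      && \forall (u,v) \in E \quad \text{(acyclic quotient via part order)}\\
& z_{uv} \ge x_{uk} - x_{vk}
      && \forall (u,v)\in E,\ \forall k \quad (z_{uv}=1 \text{ on cut edges})
\end{align*}
```

Requiring a part ordering consistent with all edges is without loss of generality, since the parts of any acyclic quotient graph can be topologically renumbered.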
2207.13642
Nishant Kumar
Nishant Kumar, Sudhan Majhi, and A.K. Upadhyay
A Direct Construction of Complete Complementary Code with Zero Correlation Zone property for Prime-Power Length
null
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose a direct construction of a novel type of code set that combines the properties of complete complementary codes (CCC) and zero-correlation zone (ZCZ) sequences, which we call the complete complementary-ZCZ (CC-ZCZ) code set. The code set is constructed by using multivariable functions. The proposed construction also provides Golay-ZCZ codes with new lengths, i.e., prime-power lengths. The proposed Golay-ZCZ codes are optimal for the binary case and asymptotically optimal for the non-binary case by the \emph{Tang-Fan-Matsufuji} bound. Furthermore, the proposed direct construction provides novel ZCZ sequences of length $p^k$, where $k$ is an integer $\geq 2$. We establish a relationship between the proposed CC-ZCZ code set and first-order generalized Reed-Muller (GRM) codes, and prove that both have the same Hamming distance. We also count the number of CC-ZCZ code sets within first-order GRM codes. The column sequence peak-to-mean envelope power ratio (PMEPR) of the proposed CC-ZCZ construction is derived and compared with existing works. The proposed construction also reduces to Golay-ZCZ and ZCZ sequences, which are compared with existing work. The proposed construction generalizes many existing works.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 17:02:23 GMT" } ]
2022-07-28T00:00:00
[ [ "Kumar", "Nishant", "" ], [ "Majhi", "Sudhan", "" ], [ "Upadhyay", "A. K.", "" ] ]
new_dataset
0.995941
2207.13648
Rushit Dave
Zachary Deridder, Nyle Siddiqui, Thomas Reither, Rushit Dave, Brendan Pelto, Naeem Seliya, Mounika Vanamala
Continuous User Authentication Using Machine Learning and Multi-Finger Mobile Touch Dynamics with a Novel Dataset
null
null
null
null
cs.HC cs.CR
http://creativecommons.org/licenses/by/4.0/
As technology grows and evolves rapidly, it is increasingly clear that mobile devices are more commonly used for sensitive matters than ever before. A way to authenticate users continuously is sought after, as single-factor or multi-factor authentication may only validate a user initially, which does not help if an impostor can bypass this initial validation. The field of touch dynamics emerges as a clear way to non-intrusively collect data about a user and their behaviors in order to make imperative security-related decisions in real time. In this paper, we present a novel dataset tracking 25 users playing two mobile games, Snake.io and Minecraft, for 10 minutes each, along with their relevant gesture data. From this data, we ran machine-learning binary classifiers, namely Random Forest and K-Nearest Neighbors, to attempt to authenticate whether a sample of a particular user's actions was genuine. Our strongest model returned an average accuracy of roughly 93% for both games, showing that touch dynamics can differentiate users effectively and is a feasible consideration for authentication schemes. Our dataset can be observed at https://github.com/zderidder/MC-Snake-Results
[ { "version": "v1", "created": "Wed, 27 Jul 2022 17:10:03 GMT" } ]
2022-07-28T00:00:00
[ [ "Deridder", "Zachary", "" ], [ "Siddiqui", "Nyle", "" ], [ "Reither", "Thomas", "" ], [ "Dave", "Rushit", "" ], [ "Pelto", "Brendan", "" ], [ "Seliya", "Naeem", "" ], [ "Vanamala", "Mounika", "" ] ]
new_dataset
0.99984
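A minimal version of the classification setup the abstract above describes: train Random Forest and K-Nearest Neighbors binary classifiers to separate one user's gesture samples from impostors. The synthetic six-dimensional features (standing in for quantities like touch duration, pressure, and swipe velocity) are assumptions made so the sketch is self-contained.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)
genuine = rng.normal(0.0, 1.0, (500, 6))   # one user's gesture features
impostor = rng.normal(0.8, 1.2, (500, 6))  # everyone else (synthetic)
X = np.vstack([genuine, impostor])
y = np.array([1] * 500 + [0] * 500)        # 1 = genuine, 0 = impostor

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=7)
for clf in (RandomForestClassifier(n_estimators=100, random_state=7),
            KNeighborsClassifier(n_neighbors=5)):
    acc = clf.fit(X_tr, y_tr).score(X_te, y_te)
    print(type(clf).__name__, f"accuracy: {acc:.2f}")
```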
2207.13685
Alan C. Calder
Alan C. Calder, Catherine Feldman, Eva Siegmann, John Dey, Anthony Curtis, Smeet Chheda, Robert J. Harrison
On Using Linux Kernel Huge Pages with FLASH, an Astrophysical Simulation Code
6 pages, 1 figure, accepted to Embracing Arm for HPC, An IEEE Cluster 2022 Workshop
null
null
null
cs.DC astro-ph.HE astro-ph.IM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present efforts at improving the performance of FLASH, a multi-scale, multi-physics simulation code principally for astrophysical applications, by using huge pages on Ookami, an HPE Apollo 80 A64FX platform. FLASH is written principally in modern Fortran and makes use of the PARAMESH library to manage a block-structured adaptive mesh. We explored options for enabling the use of huge pages with several compilers, but we were only able to successfully use huge pages when compiling with the Fujitsu compiler. The use of huge pages substantially reduced the number of translation lookaside buffer misses, but overall performance gains were marginal.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 17:55:01 GMT" } ]
2022-07-28T00:00:00
[ [ "Calder", "Alan C.", "" ], [ "Feldman", "Catherine", "" ], [ "Siegmann", "Eva", "" ], [ "Dey", "John", "" ], [ "Curtis", "Anthony", "" ], [ "Chheda", "Smeet", "" ], [ "Harrison", "Robert J.", "" ] ]
new_dataset
0.991577
2207.13691
Muhammad Zubair Irshad
Muhammad Zubair Irshad, Sergey Zakharov, Rares Ambrus, Thomas Kollar, Zsolt Kira, Adrien Gaidon
ShAPO: Implicit Representations for Multi-Object Shape, Appearance, and Pose Optimization
Accepted to European Conference on Computer Vision (ECCV), 2022
null
null
null
cs.CV cs.LG cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Our method studies the complex task of object-centric 3D understanding from a single RGB-D observation. As this is an ill-posed problem, existing methods suffer from low performance in both 3D shape and 6D pose and size estimation in complex multi-object scenarios with occlusions. We present ShAPO, a method for joint multi-object detection, 3D textured reconstruction, and 6D object pose and size estimation. Key to ShAPO is a single-shot pipeline that regresses shape, appearance, and pose latent codes along with the masks of each object instance, which are then further refined in a sparse-to-dense fashion. A novel disentangled shape and appearance database of priors is first learned to embed objects in their respective shape and appearance spaces. We also propose a novel, octree-based differentiable optimization step, allowing us to further improve object shape, pose, and appearance simultaneously under the learned latent space, in an analysis-by-synthesis fashion. Our novel joint implicit textured object representation allows us to accurately identify and reconstruct novel unseen objects without having access to their 3D meshes. Through extensive experiments, we show that our method, trained on simulated indoor scenes, accurately regresses the shape, appearance, and pose of novel objects in the real world with minimal fine-tuning. Our method significantly outperforms all baselines on the NOCS dataset, with an 8% absolute improvement in mAP for 6D pose estimation. Project page: https://zubair-irshad.github.io/projects/ShAPO.html
[ { "version": "v1", "created": "Wed, 27 Jul 2022 17:59:31 GMT" } ]
2022-07-28T00:00:00
[ [ "Irshad", "Muhammad Zubair", "" ], [ "Zakharov", "Sergey", "" ], [ "Ambrus", "Rares", "" ], [ "Kollar", "Thomas", "" ], [ "Kira", "Zsolt", "" ], [ "Gaidon", "Adrien", "" ] ]
new_dataset
0.988696
2108.05563
Joshua Rego
Joshua D. Rego, Huaijin Chen, Shuai Li, Jinwei Gu, Suren Jayasuriya
Deep Camera Obscura: An Image Restoration Pipeline for Lensless Pinhole Photography
11 pages, 10 figures
null
10.1364/OE.460636
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The lensless pinhole camera is perhaps the earliest and simplest form of an imaging system, using only a pinhole-sized aperture in place of a lens. It can capture an infinite depth-of-field and offers greater freedom from optical distortion than its lens-based counterparts. However, the inherent limitations of a pinhole system result in lower sharpness from blur caused by optical diffraction and higher noise levels due to the low light throughput of the small aperture, requiring very long exposure times to capture well-exposed images. In this paper, we explore an image restoration pipeline using deep learning and domain knowledge of the pinhole system to enhance the pinhole image quality through a joint denoise and deblur approach. Our approach allows for more practical exposure times for hand-held photography and provides higher image quality, making it more suitable for daily photography compared to other lensless cameras while keeping size and cost low. This opens up the potential of pinhole cameras to be used in smaller devices, such as smartphones.
[ { "version": "v1", "created": "Thu, 12 Aug 2021 07:03:00 GMT" } ]
2022-07-27T00:00:00
[ [ "Rego", "Joshua D.", "" ], [ "Chen", "Huaijin", "" ], [ "Li", "Shuai", "" ], [ "Gu", "Jinwei", "" ], [ "Jayasuriya", "Suren", "" ] ]
new_dataset
0.987165
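The Deep Camera Obscura record above attributes the sharpness loss to optical diffraction. A classical rule of thumb (not taken from the paper) balances geometric blur against diffraction blur via an optimal pinhole diameter d = c * sqrt(f * lambda). The sketch below is illustrative only; the constant c = 1.9, the 100 mm pinhole-to-sensor distance, and the 550 nm wavelength are conventional assumptions, not values from the paper.

```python
import math

def optimal_pinhole_diameter(focal_length_m, wavelength_m=550e-9, c=1.9):
    """Rayleigh-style rule of thumb: d = c * sqrt(f * lambda).

    Geometric blur grows with the diameter d while diffraction blur
    shrinks with it; c ~ 1.9 is the conventional balancing constant.
    """
    return c * math.sqrt(focal_length_m * wavelength_m)

d = optimal_pinhole_diameter(0.100)            # 100 mm pinhole-to-sensor
print(f"optimal diameter ~ {d * 1e3:.2f} mm")  # ~0.45 mm
print(f"f-number ~ f/{0.100 / d:.0f}")         # ~f/224: hence long exposures
```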
2108.08719
Morteza Alipourlangouri
Morteza Alipourlangouri, Adam Mansfield, Fei Chiang, Yinghui Wu
Temporal Graph Functional Dependencies [Extended Version]
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data dependencies have been extended to graphs to characterize topological and value constraints. Existing data dependencies are defined to capture inconsistencies in static graphs. Nevertheless, inconsistencies may occur over evolving graphs and only for certain time periods. The need for capturing such inconsistencies in temporal graphs is evident in anomaly detection and predictive dynamic network analysis. This paper introduces a class of data dependencies called Temporal Graph Functional Dependencies (TGFDs). TGFDs generalize functional dependencies to temporal graphs as a sequence of graph snapshots that are induced by time intervals, and enforce both topological constraints and attribute value dependencies that must be satisfied by these snapshots. (1) We establish the complexity results for the satisfiability and implication problems of TGFDs. (2) We propose a sound and complete axiomatization system for TGFDs. (3) We also present efficient parallel algorithms to detect inconsistencies in temporal graphs as violations of TGFDs. The algorithm exploits data and temporal locality induced by time intervals, and uses incremental pattern matching and load balancing strategies to enable feasible error detection in large temporal graphs. Using real datasets, we experimentally verify that our algorithms achieve lower runtimes compared to existing baselines, while improving the accuracy over error detection using existing graph data constraints, e.g., GFDs and GTARs with 55% and 74% gain in F1-score, respectively.
[ { "version": "v1", "created": "Thu, 19 Aug 2021 14:40:37 GMT" }, { "version": "v2", "created": "Tue, 4 Jan 2022 01:21:09 GMT" }, { "version": "v3", "created": "Wed, 27 Apr 2022 23:41:18 GMT" }, { "version": "v4", "created": "Mon, 9 May 2022 03:05:46 GMT" }, { "version": "v5", "created": "Tue, 26 Jul 2022 02:14:40 GMT" } ]
2022-07-27T00:00:00
[ [ "Alipourlangouri", "Morteza", "" ], [ "Mansfield", "Adam", "" ], [ "Chiang", "Fei", "" ], [ "Wu", "Yinghui", "" ] ]
new_dataset
0.99877
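The TGFD record above defines dependencies that must hold within time intervals over graph snapshots. As a rough illustration only (not the paper's formalism or its parallel detection algorithm), the toy check below flags pairs of pattern-matched nodes that agree on the left-hand-side attributes but disagree on the right-hand side within a window `delta`; the flat attribute-dict representation and all names are simplifying assumptions.

```python
from itertools import combinations

def tgfd_violations(snapshots, delta, lhs, rhs):
    """Toy check of an attribute dependency lhs -> rhs over time.

    snapshots: list of (timestamp, nodes), each node an attribute dict
    assumed to already match the topological pattern.
    """
    tagged = [(t, n) for t, nodes in snapshots for n in nodes]
    violations = []
    for (t1, a), (t2, b) in combinations(tagged, 2):
        if abs(t1 - t2) > delta:
            continue  # outside the time interval: the dependency does not apply
        if all(a.get(x) == b.get(x) for x in lhs) and a.get(rhs) != b.get(rhs):
            violations.append((t1, a, t2, b))
    return violations

snaps = [(1, [{"team": "A", "coach": "Smith"}]),
         (2, [{"team": "A", "coach": "Jones"}]),   # changes inside the window
         (9, [{"team": "A", "coach": "Jones"}])]
print(tgfd_violations(snaps, delta=3, lhs=["team"], rhs="coach"))
```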
2109.06325
Jacopo Panerati
Zhaocong Yuan, Adam W. Hall, Siqi Zhou, Lukas Brunke, Melissa Greeff, Jacopo Panerati, Angela P. Schoellig (University of Toronto Institute for Aerospace Studies, University of Toronto Robotics Institute, Vector Institute for Artificial Intelligence)
safe-control-gym: a Unified Benchmark Suite for Safe Learning-based Control and Reinforcement Learning in Robotics
8 pages, 8 figures
null
null
null
cs.RO cs.LG cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, both reinforcement learning and learning-based control -- as well as the study of their safety, which is crucial for deployment in real-world robots -- have gained significant traction. However, to adequately gauge the progress and applicability of new results, we need the tools to equitably compare the approaches proposed by the controls and reinforcement learning communities. Here, we propose a new open-source benchmark suite, called safe-control-gym, supporting both model-based and data-based control techniques. We provide implementations for three dynamical systems -- the cart-pole and the 1D and 2D quadrotors -- and two control tasks -- stabilization and trajectory tracking. We propose to extend OpenAI's Gym API -- the de facto standard in reinforcement learning research -- with the ability to (i) specify (and query) symbolic dynamics, (ii) specify constraints, and (iii) (repeatably) inject simulated disturbances into the control inputs, state measurements, and inertial properties. To demonstrate our proposal and in an attempt to bring research communities closer together, we show how to use safe-control-gym to quantitatively compare the control performance, data efficiency, and safety of multiple approaches from the fields of traditional control, learning-based control, and reinforcement learning.
[ { "version": "v1", "created": "Mon, 13 Sep 2021 21:09:28 GMT" }, { "version": "v2", "created": "Sat, 18 Sep 2021 20:05:26 GMT" }, { "version": "v3", "created": "Fri, 25 Feb 2022 16:12:56 GMT" }, { "version": "v4", "created": "Tue, 26 Jul 2022 11:49:36 GMT" } ]
2022-07-27T00:00:00
[ [ "Yuan", "Zhaocong", "", "University of Toronto Institute for\n Aerospace Studies, University of Toronto Robotics Institute, Vector Institute\n for Artificial Intelligence" ], [ "Hall", "Adam W.", "", "University of Toronto Institute for\n Aerospace Studies, University of Toronto Robotics Institute, Vector Institute\n for Artificial Intelligence" ], [ "Zhou", "Siqi", "", "University of Toronto Institute for\n Aerospace Studies, University of Toronto Robotics Institute, Vector Institute\n for Artificial Intelligence" ], [ "Brunke", "Lukas", "", "University of Toronto Institute for\n Aerospace Studies, University of Toronto Robotics Institute, Vector Institute\n for Artificial Intelligence" ], [ "Greeff", "Melissa", "", "University of Toronto Institute for\n Aerospace Studies, University of Toronto Robotics Institute, Vector Institute\n for Artificial Intelligence" ], [ "Panerati", "Jacopo", "", "University of Toronto Institute for\n Aerospace Studies, University of Toronto Robotics Institute, Vector Institute\n for Artificial Intelligence" ], [ "Schoellig", "Angela P.", "", "University of Toronto Institute for\n Aerospace Studies, University of Toronto Robotics Institute, Vector Institute\n for Artificial Intelligence" ] ]
new_dataset
0.99932
2111.03845
Qinghui Liu
Qinghui Liu, Michael Kampffmeyer, Robert Jenssen and Arnt-B{\o}rre Salberg
Multi-modal land cover mapping of remote sensing images using pyramid attention and gated fusion networks
24 pages, 11 figures, submitted to IJRS
null
10.1080/01431161.2022.2098078
null
cs.CV eess.IV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Multi-modality data is becoming readily available in remote sensing (RS) and can provide complementary information about the Earth's surface. Effective fusion of multi-modal information is thus important for various applications in RS, but also very challenging due to large domain differences, noise, and redundancies. There is a lack of effective and scalable fusion techniques for bridging multiple modality encoders and fully exploiting complementary information. To this end, we propose a new multi-modality network (MultiModNet) for land cover mapping of multi-modal remote sensing data based on a novel pyramid attention fusion (PAF) module and a gated fusion unit (GFU). The PAF module is designed to efficiently obtain rich fine-grained contextual representations from each modality with a built-in cross-level and cross-view attention fusion mechanism, and the GFU module utilizes a novel gating mechanism for early merging of features, thereby diminishing hidden redundancies and noise. This enables supplementary modalities to effectively extract the most valuable and complementary information for late feature fusion. Extensive experiments on two representative RS benchmark datasets demonstrate the effectiveness, robustness, and superiority of the MultiModNet for multi-modal land cover classification.
[ { "version": "v1", "created": "Sat, 6 Nov 2021 10:01:01 GMT" } ]
2022-07-27T00:00:00
[ [ "Liu", "Qinghui", "" ], [ "Kampffmeyer", "Michael", "" ], [ "Jenssen", "Robert", "" ], [ "Salberg", "Arnt-Børre", "" ] ]
new_dataset
0.991597
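The MultiModNet record above mentions a gated fusion unit (GFU) for early merging of modality features. The exact GFU formulation lives in the paper, not the abstract; the sketch below shows only the generic gated-fusion pattern it alludes to, a learned sigmoid gate blending two feature maps, with shapes and parameter names chosen purely for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(x, y, w, b):
    """Generic gated fusion of two modality feature maps.

    x, y: (H, W, C) features from two encoders. A gate g in (0, 1),
    computed from both inputs, mixes them per position and channel.
    """
    g = sigmoid(np.concatenate([x, y], axis=-1) @ w + b)  # (H, W, C)
    return g * x + (1.0 - g) * y

rng = np.random.default_rng(0)
H, W, C = 4, 4, 8
x, y = rng.normal(size=(H, W, C)), rng.normal(size=(H, W, C))
w, b = 0.1 * rng.normal(size=(2 * C, C)), np.zeros(C)
print(gated_fusion(x, y, w, b).shape)  # (4, 4, 8)
```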
2111.09999
Bao Doan
Bao Gia Doan, Minhui Xue, Shiqing Ma, Ehsan Abbasnejad, Damith C. Ranasinghe
TnT Attacks! Universal Naturalistic Adversarial Patches Against Deep Neural Network Systems
Accepted for publication in the IEEE Transactions on Information Forensics & Security (TIFS)
null
null
null
cs.CV cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deep neural networks are vulnerable to attacks from adversarial inputs and, more recently, Trojans to misguide or hijack the model's decision. We expose the existence of an intriguing class of spatially bounded, physically realizable, adversarial examples -- Universal NaTuralistic adversarial paTches -- we call TnTs, by exploring the superset of the spatially bounded adversarial example space and the natural input space within generative adversarial networks. Now, an adversary can arm themselves with a patch that is naturalistic, less malicious-looking, physically realizable, highly effective (achieving high attack success rates), and universal. A TnT is universal because any input image captured with a TnT in the scene will: i) misguide a network (untargeted attack); or ii) force the network to make a malicious decision (targeted attack). Interestingly, an adversarial patch attacker now has the potential to exert a greater level of control -- the ability to choose a location-independent, natural-looking patch as a trigger, in contrast to being constrained to noisy perturbations -- an ability thus far shown to be possible only with Trojan attack methods, which must interfere with the model-building process to embed a backdoor at the risk of discovery, while still realizing a patch deployable in the physical world. Through extensive experiments on the large-scale visual classification task ImageNet, with evaluations across its entire validation set of 50,000 images, we demonstrate the realistic threat from TnTs and the robustness of the attack. We show a generalization of the attack to create patches achieving higher attack success rates than existing state-of-the-art methods. Our results show the generalizability of the attack to different visual classification tasks (CIFAR-10, GTSRB, PubFig) and multiple state-of-the-art deep neural networks such as WideResnet50, Inception-V3 and VGG-16.
[ { "version": "v1", "created": "Fri, 19 Nov 2021 01:35:10 GMT" }, { "version": "v2", "created": "Tue, 26 Jul 2022 02:21:11 GMT" } ]
2022-07-27T00:00:00
[ [ "Doan", "Bao Gia", "" ], [ "Xue", "Minhui", "" ], [ "Ma", "Shiqing", "" ], [ "Abbasnejad", "Ehsan", "" ], [ "Ranasinghe", "Damith C.", "" ] ]
new_dataset
0.997539
2201.07459
John Seon Keun Yi
John Seon Keun Yi, Minseok Seo, Jongchan Park, Dong-Geol Choi
PT4AL: Using Self-Supervised Pretext Tasks for Active Learning
Code is available at https://github.com/johnsk95/PT4AL. Updated for ECCV 2022 submission.
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Labeling a large set of data is expensive. Active learning aims to tackle this problem by asking to annotate only the most informative data from the unlabeled set. We propose a novel active learning approach that utilizes self-supervised pretext tasks and a unique data sampler to select data that are both difficult and representative. We discover that the loss of a simple self-supervised pretext task, such as rotation prediction, is closely correlated to the downstream task loss. Before the active learning iterations, the pretext task learner is trained on the unlabeled set, and the unlabeled data are sorted and split into batches by their pretext task losses. In each active learning iteration, the main task model is used to sample the most uncertain data in a batch to be annotated. We evaluate our method on various image classification and segmentation benchmarks and achieve compelling performances on CIFAR10, Caltech-101, ImageNet, and Cityscapes. We further show that our method performs well on imbalanced datasets, and can be an effective solution to the cold-start problem where active learning performance is affected by the randomly sampled initial labeled set.
[ { "version": "v1", "created": "Wed, 19 Jan 2022 07:58:06 GMT" }, { "version": "v2", "created": "Wed, 1 Jun 2022 13:07:47 GMT" }, { "version": "v3", "created": "Tue, 26 Jul 2022 09:21:37 GMT" } ]
2022-07-27T00:00:00
[ [ "Yi", "John Seon Keun", "" ], [ "Seo", "Minseok", "" ], [ "Park", "Jongchan", "" ], [ "Choi", "Dong-Geol", "" ] ]
new_dataset
0.977328
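The PT4AL record above states its sampler fairly concretely: sort the unlabeled pool by pretext-task loss, split it into batches, and annotate the most uncertain points of one batch per active-learning round. A minimal sketch of that loop follows; the hardest-first sort direction, the 1 minus max-softmax uncertainty measure, and all names are assumptions rather than details confirmed by the abstract.

```python
import numpy as np

def pt4al_style_rounds(pretext_losses, main_task_probs, n_batches, budget):
    """Sketch of the batch-then-uncertainty sampling loop.

    pretext_losses: (N,) per-sample loss of a pretext task (e.g., rotation).
    main_task_probs: (N, K) current main-model softmax outputs.
    Returns, per round, the indices chosen for annotation.
    """
    order = np.argsort(pretext_losses)[::-1]     # assumed: hardest first
    batches = np.array_split(order, n_batches)
    picks = []
    for batch in batches:                        # one batch per AL round
        uncertainty = 1.0 - main_task_probs[batch].max(axis=1)
        picks.append(batch[np.argsort(uncertainty)[::-1][:budget]])
    return picks

rng = np.random.default_rng(0)
N, K = 1000, 10
losses = rng.random(N)
probs = rng.dirichlet(np.ones(K), size=N)
rounds = pt4al_style_rounds(losses, probs, n_batches=5, budget=20)
print([len(r) for r in rounds])  # [20, 20, 20, 20, 20]
```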
2203.00146
Jennie Rogers
Jennie Rogers, Elizabeth Adetoro, Johes Bater, Talia Canter, Dong Fu, Andrew Hamilton, Amro Hassan, Ashley Martinez, Erick Michalski, Vesna Mitrovic, Fred Rachman, Raj Shah, Matt Sterling, Kyra VanDoren, Theresa L. Walunas, Xiao Wang, and Abel Kho
VaultDB: A Real-World Pilot of Secure Multi-Party Computation within a Clinical Research Network
null
null
null
null
cs.DB cs.CR
http://creativecommons.org/licenses/by/4.0/
Electronic health records represent a rich and growing source of clinical data for research. Privacy, regulatory, and institutional concerns limit the speed and ease of sharing this data. VaultDB is a framework for securely computing SQL queries over private data from two or more sources. It evaluates queries using secure multiparty computation: cryptographic protocols that evaluate a function such that the only information revealed from running it is the query answer. We describe the development of a HIPAA-compliant version of VaultDB on the Chicago Area Patient Centered Outcomes Research Network (CAPriCORN). This multi-institutional clinical research network spans the electronic health records of nearly 13M patients over hundreds of clinics and hospitals in the Chicago metropolitan area. Our results from deploying at three health systems within this network show its efficiency and scalability for distributed clinical research analyses without moving patient records from their site of origin.
[ { "version": "v1", "created": "Mon, 28 Feb 2022 23:56:59 GMT" }, { "version": "v2", "created": "Mon, 25 Jul 2022 21:48:54 GMT" } ]
2022-07-27T00:00:00
[ [ "Rogers", "Jennie", "" ], [ "Adetoro", "Elizabeth", "" ], [ "Bater", "Johes", "" ], [ "Canter", "Talia", "" ], [ "Fu", "Dong", "" ], [ "Hamilton", "Andrew", "" ], [ "Hassan", "Amro", "" ], [ "Martinez", "Ashley", "" ], [ "Michalski", "Erick", "" ], [ "Mitrovic", "Vesna", "" ], [ "Rachman", "Fred", "" ], [ "Shah", "Raj", "" ], [ "Sterling", "Matt", "" ], [ "VanDoren", "Kyra", "" ], [ "Walunas", "Theresa L.", "" ], [ "Wang", "Xiao", "" ], [ "Kho", "Abel", "" ] ]
new_dataset
0.997801
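VaultDB's protocols (above) are far richer than any toy, but the core primitive behind secure multiparty aggregates, additive secret sharing, is easy to sketch: each party splits its private value into random shares so that no subset short of all shares reveals anything, yet the shares still sum to the true total. The snippet below is a textbook illustration under that assumption, not VaultDB's implementation.

```python
import secrets

P = 2**61 - 1  # public prime modulus

def share(value, n_parties):
    """Split value into n additive shares mod P; fewer than n shares
    are uniformly random and reveal nothing about value."""
    shares = [secrets.randbelow(P) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % P)
    return shares

# Three hospitals secret-share private patient counts.
counts = [120, 75, 301]
all_shares = [share(c, 3) for c in counts]

# Compute node j sums only the j-th share of every input...
partials = [sum(s[j] for s in all_shares) % P for j in range(3)]
# ...so only the aggregate is ever reconstructed.
print(sum(partials) % P)  # 496
```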
2205.05677
Soshi Shimada
Soshi Shimada, Vladislav Golyanik, Zhi Li, Patrick P\'erez, Weipeng Xu, Christian Theobalt
HULC: 3D Human Motion Capture with Pose Manifold Sampling and Dense Contact Guidance
null
null
null
null
cs.CV cs.GR cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Marker-less monocular 3D human motion capture (MoCap) with scene interactions is a challenging research topic relevant for extended reality, robotics and virtual avatar generation. Due to the inherent depth ambiguity of monocular settings, 3D motions captured with existing methods often contain severe artefacts such as incorrect body-scene inter-penetrations, jitter and body floating. To tackle these issues, we propose HULC, a new approach for 3D human MoCap which is aware of the scene geometry. HULC estimates 3D poses and dense body-environment surface contacts for improved 3D localisations, as well as the absolute scale of the subject. Furthermore, we introduce a 3D pose trajectory optimisation based on a novel pose manifold sampling that resolves erroneous body-environment inter-penetrations. Although the proposed method requires less structured inputs compared to existing scene-aware monocular MoCap algorithms, it produces more physically-plausible poses: HULC significantly and consistently outperforms the existing approaches in various experiments and on different metrics. Project page: https://vcai.mpi-inf.mpg.de/projects/HULC/.
[ { "version": "v1", "created": "Wed, 11 May 2022 17:59:31 GMT" }, { "version": "v2", "created": "Mon, 23 May 2022 16:26:38 GMT" }, { "version": "v3", "created": "Tue, 24 May 2022 10:34:20 GMT" }, { "version": "v4", "created": "Tue, 26 Jul 2022 08:17:40 GMT" } ]
2022-07-27T00:00:00
[ [ "Shimada", "Soshi", "" ], [ "Golyanik", "Vladislav", "" ], [ "Li", "Zhi", "" ], [ "Pérez", "Patrick", "" ], [ "Xu", "Weipeng", "" ], [ "Theobalt", "Christian", "" ] ]
new_dataset
0.998799
2207.00401
Jorge F. Lazo
Jorge F. Lazo and Chun-Feng Lai and Sara Moccia and Benoit Rosa and Michele Catellani and Michel de Mathelin and Giancarlo Ferrigno and Paul Breedveld and Jenny Dankelman and Elena De Momi
Autonomous Intraluminal Navigation of a Soft Robot using Deep-Learning-based Visual Servoing
null
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Navigation inside luminal organs is an arduous task that requires non-intuitive coordination between the movement of the operator's hand and the information obtained from the endoscopic video. The development of tools to automate certain tasks could alleviate the physical and mental load of doctors during interventions, allowing them to focus on diagnosis and decision-making tasks. In this paper, we present a synergic solution for intraluminal navigation consisting of a 3D printed endoscopic soft robot that can move safely inside luminal structures. Visual servoing, based on Convolutional Neural Networks (CNNs), is used to achieve the autonomous navigation task. The CNN is trained with phantoms and in-vivo data to segment the lumen, and a model-less approach is presented to control the movement in constrained environments. The proposed robot is validated in anatomical phantoms in different path configurations. We analyze the movement of the robot using different metrics such as task completion time, smoothness, error in the steady-state, and mean and maximum error. We show that our method is suitable to navigate safely in hollow environments and in conditions different from those the network was originally trained on.
[ { "version": "v1", "created": "Fri, 1 Jul 2022 13:17:45 GMT" }, { "version": "v2", "created": "Tue, 26 Jul 2022 10:01:28 GMT" } ]
2022-07-27T00:00:00
[ [ "Lazo", "Jorge F.", "" ], [ "Lai", "Chun-Feng", "" ], [ "Moccia", "Sara", "" ], [ "Rosa", "Benoit", "" ], [ "Catellani", "Michele", "" ], [ "de Mathelin", "Michel", "" ], [ "Ferrigno", "Giancarlo", "" ], [ "Breedveld", "Paul", "" ], [ "Dankelman", "Jenny", "" ], [ "De Momi", "Elena", "" ] ]
new_dataset
0.998674
2207.03160
Zelin Zang
Zelin Zang and Siyuan Li and Di Wu and Ge Wang and Lei Shang and Baigui Sun and Hao Li and Stan Z. Li
DLME: Deep Local-flatness Manifold Embedding
16 pages, 7 figures
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Manifold learning (ML) aims to seek low-dimensional embedding from high-dimensional data. The problem is challenging on real-world datasets, especially with under-sampled data, and we find that previous methods perform poorly in this case. Generally, ML methods first transform input data into a low-dimensional embedding space to maintain the data's geometric structure and subsequently perform downstream tasks therein. The poor local connectivity of under-sampled data in the former step and inappropriate optimization objectives in the latter step lead to two problems: structural distortion and underconstrained embedding. This paper proposes a novel ML framework named Deep Local-flatness Manifold Embedding (DLME) to solve these problems. The proposed DLME constructs semantic manifolds by data augmentation and overcomes the structural distortion problem using a smoothness constraint based on a local flatness assumption about the manifold. To overcome the underconstrained embedding problem, we design a loss and theoretically demonstrate that it leads to a more suitable embedding based on the local flatness. Experiments on three types of datasets (toy, biological, and image) for various downstream tasks (classification, clustering, and visualization) show that our proposed DLME outperforms state-of-the-art ML and contrastive learning methods.
[ { "version": "v1", "created": "Thu, 7 Jul 2022 08:46:17 GMT" }, { "version": "v2", "created": "Tue, 26 Jul 2022 00:47:01 GMT" } ]
2022-07-27T00:00:00
[ [ "Zang", "Zelin", "" ], [ "Li", "Siyuan", "" ], [ "Wu", "Di", "" ], [ "Wang", "Ge", "" ], [ "Shang", "Lei", "" ], [ "Sun", "Baigui", "" ], [ "Li", "Hao", "" ], [ "Li", "Stan Z.", "" ] ]
new_dataset
0.999305
2207.04429
Dhruv Shah
Dhruv Shah, Blazej Osinski, Brian Ichter, Sergey Levine
LM-Nav: Robotic Navigation with Large Pre-Trained Models of Language, Vision, and Action
Project page https://sites.google.com/view/lmnav
null
null
null
cs.RO cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Goal-conditioned policies for robotic navigation can be trained on large, unannotated datasets, providing for good generalization to real-world settings. However, particularly in vision-based settings where specifying goals requires an image, this makes for an unnatural interface. Language provides a more convenient modality for communication with robots, but contemporary methods typically require expensive supervision, in the form of trajectories annotated with language descriptions. We present a system, LM-Nav, for robotic navigation that enjoys the benefits of training on unannotated large datasets of trajectories, while still providing a high-level interface to the user. Instead of utilizing a labeled instruction following dataset, we show that such a system can be constructed entirely out of pre-trained models for navigation (ViNG), image-language association (CLIP), and language modeling (GPT-3), without requiring any fine-tuning or language-annotated robot data. We instantiate LM-Nav on a real-world mobile robot and demonstrate long-horizon navigation through complex, outdoor environments from natural language instructions. For videos of our experiments, code release, and an interactive Colab notebook that runs in your browser, please check out our project page https://sites.google.com/view/lmnav
[ { "version": "v1", "created": "Sun, 10 Jul 2022 10:41:50 GMT" }, { "version": "v2", "created": "Tue, 26 Jul 2022 10:46:15 GMT" } ]
2022-07-27T00:00:00
[ [ "Shah", "Dhruv", "" ], [ "Osinski", "Blazej", "" ], [ "Ichter", "Brian", "" ], [ "Levine", "Sergey", "" ] ]
new_dataset
0.998914
2207.06803
Yifei He
Yifei He, Artur Podobas, M{\aa}ns I. Andersson, and Stefano Markidis
FFTc: An MLIR Dialect for Developing HPC Fast Fourier Transform Libraries
null
null
null
null
cs.MS cs.CL
http://creativecommons.org/licenses/by/4.0/
Discrete Fourier Transform (DFT) libraries are one of the most critical software components for scientific computing. Inspired by FFTW, a widely used library for DFT HPC calculations, we apply compiler technologies for the development of HPC Fourier transform libraries. In this work, we introduce FFTc, a domain-specific language, based on Multi-Level Intermediate Representation (MLIR), for expressing Fourier Transform algorithms. We present the initial design, implementation, and preliminary results of FFTc.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 10:31:21 GMT" }, { "version": "v2", "created": "Tue, 26 Jul 2022 13:48:10 GMT" } ]
2022-07-27T00:00:00
[ [ "He", "Yifei", "" ], [ "Podobas", "Artur", "" ], [ "Andersson", "Måns I.", "" ], [ "Markidis", "Stefano", "" ] ]
new_dataset
0.998725
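The FFTc record above does not show the MLIR dialect itself, and nothing here should be read as FFTc syntax. As a reference point for the computation such libraries generate code for, a textbook radix-2 Cooley-Tukey DFT in plain Python:

```python
import cmath

def fft(x):
    """Recursive radix-2 Cooley-Tukey DFT; len(x) must be a power of 2."""
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    tw = [cmath.exp(-2j * cmath.pi * k / n) * odd[k] for k in range(n // 2)]
    return ([even[k] + tw[k] for k in range(n // 2)] +
            [even[k] - tw[k] for k in range(n // 2)])

# An impulse train of period 4 has a flat-comb magnitude spectrum.
print([round(abs(v), 3) for v in fft([1, 0, 0, 0, 1, 0, 0, 0])])
# [2.0, 0.0, 2.0, 0.0, 2.0, 0.0, 2.0, 0.0]
```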
2207.11617
Wei-Sheng Lai
Wei-Sheng Lai, YiChang Shih, Lun-Cheng Chu, Xiaotong Wu, Sung-Fang Tsai, Michael Krainin, Deqing Sun, Chia-Kai Liang
Face Deblurring using Dual Camera Fusion on Mobile Phones
Accepted to SIGGRAPH 2022 (ACM TOG). Project website: https://www.wslai.net/publications/fusion_deblur/
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Motion blur of fast-moving subjects is a longstanding problem in photography and very common on mobile phones due to limited light collection efficiency, particularly in low-light conditions. While we have witnessed great progress in image deblurring in recent years, most methods require significant computational power and have limitations in processing high-resolution photos with severe local motions. To this end, we develop a novel face deblurring system based on the dual camera fusion technique for mobile phones. The system detects subject motion to dynamically enable a reference camera, e.g., the ultrawide-angle camera commonly available on recent premium phones, and captures an auxiliary photo with faster shutter settings. While the main shot is low noise but blurry, the reference shot is sharp but noisy. We learn ML models to align and fuse these two shots and output a clear photo without motion blur. Our algorithm runs efficiently on Google Pixel 6, adding 463 ms of overhead per shot. Our experiments demonstrate the advantage and robustness of our system against alternative single-image, multi-frame, face-specific, and video deblurring algorithms as well as commercial products. To the best of our knowledge, our work is the first mobile solution for face motion deblurring that works reliably and robustly over thousands of images in diverse motion and lighting conditions.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 22:50:46 GMT" } ]
2022-07-27T00:00:00
[ [ "Lai", "Wei-Sheng", "" ], [ "Shih", "YiChang", "" ], [ "Chu", "Lun-Cheng", "" ], [ "Wu", "Xiaotong", "" ], [ "Tsai", "Sung-Fang", "" ], [ "Krainin", "Michael", "" ], [ "Sun", "Deqing", "" ], [ "Liang", "Chia-Kai", "" ] ]
new_dataset
0.997712
2207.12406
Maaz Amjad
Maaz Amjad, Grigori Sidorov, Alisa Zhila, Alexander Gelbukh and Paolo Rosso
UrduFake@FIRE2020: Shared Track on Fake News Identification in Urdu
arXiv admin note: substantial text overlap with arXiv:2207.11893
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
This paper gives an overview of the first shared task at FIRE 2020 on fake news detection in the Urdu language. This is a binary classification task in which the goal is to identify fake news using a dataset composed of 900 annotated news articles for training and 400 news articles for testing. The dataset contains news in five domains: (i) Health, (ii) Sports, (iii) Showbiz, (iv) Technology, and (v) Business. 42 teams from 6 different countries (India, China, Egypt, Germany, Pakistan, and the UK) registered for the task. 9 teams submitted their experimental results. The participants used various machine learning methods ranging from feature-based traditional machine learning to neural network techniques. The best-performing system achieved an F-score value of 0.90, showing that the BERT-based approach outperforms other machine learning classifiers.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 03:46:51 GMT" } ]
2022-07-27T00:00:00
[ [ "Amjad", "Maaz", "" ], [ "Sidorov", "Grigori", "" ], [ "Zhila", "Alisa", "" ], [ "Gelbukh", "Alexander", "" ], [ "Rosso", "Paolo", "" ] ]
new_dataset
0.999891
2207.12456
Yasharth Bajpai
Yuhao Zhang, Yasharth Bajpai, Priyanshu Gupta, Ameya Ketkar, Miltiadis Allamanis, Titus Barik, Sumit Gulwani, Arjun Radhakrishna, Mohammad Raza, Gustavo Soares, Ashish Tiwari
Overwatch: Learning Patterns in Code Edit Sequences
25 pages, 7 Figures, 4 Algorithms, 3 Tables
null
null
null
cs.PL cs.AI cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Integrated Development Environments (IDEs) provide tool support to automate many source code editing tasks. Traditionally, IDEs use only the spatial context, i.e., the location where the developer is editing, to generate candidate edit recommendations. However, spatial context alone is often not sufficient to confidently predict the developer's next edit, and thus IDEs generate many suggestions at a location. Therefore, IDEs generally do not actively offer suggestions and instead, the developer is usually required to click on a specific icon or menu and then select from a large list of potential suggestions. As a consequence, developers often miss the opportunity to use the tool support because they are not aware it exists or forget to use it. To better understand common patterns in developer behavior and produce better edit recommendations, we can additionally use the temporal context, i.e., the edits that a developer was recently performing. To enable edit recommendations based on temporal context, we present Overwatch, a novel technique for learning edit sequence patterns from traces of developers' edits performed in an IDE. Our experiments show that Overwatch has 78% precision and that Overwatch not only completed edits when developers missed the opportunity to use the IDE tool support but also predicted new edits that have no tool support in the IDE.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 18:24:58 GMT" } ]
2022-07-27T00:00:00
[ [ "Zhang", "Yuhao", "" ], [ "Bajpai", "Yasharth", "" ], [ "Gupta", "Priyanshu", "" ], [ "Ketkar", "Ameya", "" ], [ "Allamanis", "Miltiadis", "" ], [ "Barik", "Titus", "" ], [ "Gulwani", "Sumit", "" ], [ "Radhakrishna", "Arjun", "" ], [ "Raza", "Mohammad", "" ], [ "Soares", "Gustavo", "" ], [ "Tiwari", "Ashish", "" ] ]
new_dataset
0.983722
2207.12537
Sarah Ostadabbas
Zhouping Wang and Sarah Ostadabbas
Live Stream Temporally Embedded 3D Human Body Pose and Shape Estimation
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
3D human body pose and shape estimation within a temporal sequence can be quite critical for understanding human behavior. Despite the significant progress in human pose estimation in recent years, mostly based on single images or videos, human motion estimation on live stream videos is still a rarely-touched area considering its special requirements for real-time output and temporal consistency. To address this problem, we present a temporally embedded 3D human body pose and shape estimation (TePose) method to improve the accuracy and temporal consistency of pose estimation in live stream videos. TePose uses previous predictions as a bridge to feed back the error for better estimation in the current frame and to learn the correspondence between data frames and predictions in the history. A multi-scale spatio-temporal graph convolutional network is presented as the motion discriminator for adversarial training using datasets without any 3D labeling. We propose a sequential data loading strategy to meet the special start-to-end data processing requirement of live streams. We demonstrate the importance of each proposed module with extensive experiments. The results show the effectiveness of TePose on widely-used human pose benchmarks with state-of-the-art performance.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 21:21:59 GMT" } ]
2022-07-27T00:00:00
[ [ "Wang", "Zhouping", "" ], [ "Ostadabbas", "Sarah", "" ] ]
new_dataset
0.96516
2207.12544
Hongyu Wang
Hongyu Wang, Nikolas Martelaro
End-User Puppeteering of Expressive Movements
Presented at PD/EUP Workshop, 2022 (arXiv:cs/4404636)
null
null
PDEUP/2022/05
cs.RO cs.HC
http://creativecommons.org/licenses/by/4.0/
The end-user programming of social robot behavior is usually limited by a predefined set of movements. We are proposing a puppeteering robotic interface that provides a more intuitive method of programming robot expressive movements. As the user manipulates the puppet of a robot, the actual robot replicates the movements, providing real-time visual feedback. Through this proposed interface, even with limited training, a novice user can design and program expressive movements efficiently. We present our preliminary user study results in this extended abstract.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 21:39:06 GMT" } ]
2022-07-27T00:00:00
[ [ "Wang", "Hongyu", "" ], [ "Martelaro", "Nikolas", "" ] ]
new_dataset
0.995579
2207.12552
Vishnu Rajendran S Mr
Vishnu Rajendran S, Soran Parsa, Simon Parsons, Amir Ghalamzan Esfahani
Peduncle Gripping and Cutting Force for Strawberry Harvesting Robotic End-effector Design
This work has been submitted to the IEEE for possible publication(4th International Conference on Control and Robotics (ICCR 2022)). Copyright may be transferred without notice, after which this version may no longer be accessible
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Robotic harvesting of strawberries has gained much interest in the recent past. Although there are many innovations, they have not yet reached a level comparable to an expert human picker. The end-effector unit plays a major role in defining the efficiency of such a robotic harvesting system. Even though there are reports on various end effectors for strawberry harvesting, they lack certain parameters that researchers can rely upon to develop new end effectors. These parameters include the limit of gripping force that can be applied on the peduncle for effective gripping, the force required to cut the strawberry peduncle, etc. These estimations would be helpful in the design cycle of end effectors that aim to grip and cut the strawberry peduncle during the harvesting action. This paper studies the estimation and analysis of these parameters experimentally. It has been estimated that the peduncle gripping force can be limited to 10 N. This enables an end effector to grip a strawberry of mass up to 50 grams with a manipulation acceleration of 50 m/s$^2$ without squeezing the peduncle. The study on peduncle cutting force reveals that a force of 15 N is sufficient to cut a strawberry peduncle using a blade with a wedge angle of 16.6 degrees at a 30-degree orientation.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 22:14:46 GMT" } ]
2022-07-27T00:00:00
[ [ "S", "Vishnu Rajendran", "" ], [ "Parsa", "Soran", "" ], [ "Parsons", "Simon", "" ], [ "Esfahani", "Amir Ghalamzan", "" ] ]
new_dataset
0.99434
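The numbers in the strawberry-harvesting record above (a 10 N gripping limit supporting a 50 gram berry at 50 m/s$^2$) can be sanity-checked with Newton's second law. The estimate below ignores grip friction and peduncle mechanics entirely; it is a back-of-the-envelope check, not the paper's analysis.

```python
m = 0.050   # strawberry mass in kg (50 g, as in the abstract)
a = 50.0    # manipulation acceleration in m/s^2
g = 9.81    # gravitational acceleration in m/s^2

# Worst case: accelerating straight up, the peduncle carries the
# inertial plus gravitational load.
f_needed = m * (a + g)
print(f"load on peduncle ~ {f_needed:.2f} N vs. 10 N limit")  # ~2.99 N
```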
2207.12560
Pieter Gijsbers
Pieter Gijsbers, Marcos L. P. Bueno, Stefan Coors, Erin LeDell, S\'ebastien Poirier, Janek Thomas, Bernd Bischl, Joaquin Vanschoren
AMLB: an AutoML Benchmark
Submitted to JMLR
null
null
null
cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Comparing different AutoML frameworks is notoriously challenging and often done incorrectly. We introduce an open and extensible benchmark that follows best practices and avoids common mistakes when comparing AutoML frameworks. We conduct a thorough comparison of 9 well-known AutoML frameworks across 71 classification and 33 regression tasks. The differences between the AutoML frameworks are explored with a multi-faceted analysis, evaluating model accuracy, its trade-offs with inference time, and framework failures. We also use Bradley-Terry trees to discover subsets of tasks where the relative AutoML framework rankings differ. The benchmark comes with an open-source tool that integrates with many AutoML frameworks and automates the empirical evaluation process end-to-end: from framework installation and resource allocation to in-depth evaluation. The benchmark uses public data sets, can be easily extended with other AutoML frameworks and tasks, and has a website with up-to-date results.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 22:34:08 GMT" } ]
2022-07-27T00:00:00
[ [ "Gijsbers", "Pieter", "" ], [ "Bueno", "Marcos L. P.", "" ], [ "Coors", "Stefan", "" ], [ "LeDell", "Erin", "" ], [ "Poirier", "Sébastien", "" ], [ "Thomas", "Janek", "" ], [ "Bischl", "Bernd", "" ], [ "Vanschoren", "Joaquin", "" ] ]
new_dataset
0.978482
2207.12572
Ruocheng Wang
Ruocheng Wang, Yunzhi Zhang, Jiayuan Mao, Chin-Yi Cheng, Jiajun Wu
Translating a Visual LEGO Manual to a Machine-Executable Plan
ECCV 2022. Project page: https://cs.stanford.edu/~rcwang/projects/lego_manual
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We study the problem of translating an image-based, step-by-step assembly manual created by human designers into machine-interpretable instructions. We formulate this problem as a sequential prediction task: at each step, our model reads the manual, locates the components to be added to the current shape, and infers their 3D poses. This task poses the challenge of establishing a 2D-3D correspondence between the manual image and the real 3D object, and 3D pose estimation for unseen 3D objects, since a new component to be added in a step can be an object built from previous steps. To address these two challenges, we present a novel learning-based framework, the Manual-to-Executable-Plan Network (MEPNet), which reconstructs the assembly steps from a sequence of manual images. The key idea is to integrate neural 2D keypoint detection modules and 2D-3D projection algorithms for high-precision prediction and strong generalization to unseen components. The MEPNet outperforms existing methods on three newly collected LEGO manual datasets and a Minecraft house dataset.
[ { "version": "v1", "created": "Mon, 25 Jul 2022 23:35:46 GMT" } ]
2022-07-27T00:00:00
[ [ "Wang", "Ruocheng", "" ], [ "Zhang", "Yunzhi", "" ], [ "Mao", "Jiayuan", "" ], [ "Cheng", "Chin-Yi", "" ], [ "Wu", "Jiajun", "" ] ]
new_dataset
0.9997
2207.12584
Jun Zhang
Jun Zhang and Daqing Wan
On Deep Holes of Elliptic Curve Codes
19 pages
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We give a method to construct deep holes for elliptic curve codes. For long elliptic curve codes, we conjecture that our construction is complete in the sense that it gives all deep holes. Some evidence and heuristics on the completeness are provided via the connection with problems and results in finite geometry.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 00:29:11 GMT" } ]
2022-07-27T00:00:00
[ [ "Zhang", "Jun", "" ], [ "Wan", "Daqing", "" ] ]
new_dataset
0.996625
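The deep-hole construction above needs the paper's elliptic-curve machinery, but the definition itself, a received word whose distance to the code equals the covering radius, can be shown by brute force. The toy binary linear code below is an arbitrary example chosen for illustration and has nothing to do with elliptic curve codes.

```python
from itertools import product

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

# A [4, 2] binary linear code spanned by two generators.
G = [(1, 0, 1, 1), (0, 1, 0, 1)]
code = {tuple((c1 * g1 + c2 * g2) % 2 for g1, g2 in zip(*G))
        for c1, c2 in product((0, 1), repeat=2)}

# Distance from every word in F_2^4 to the code.
dist = {u: min(hamming(u, c) for c in code)
        for u in product((0, 1), repeat=4)}
rho = max(dist.values())                        # covering radius
deep_holes = [u for u, d in dist.items() if d == rho]
print(f"covering radius = {rho}, number of deep holes = {len(deep_holes)}")
```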
2207.12716
Tai Wang
Tai Wang, Qing Lian, Chenming Zhu, Xinge Zhu, Wenwei Zhang
MV-FCOS3D++: Multi-View Camera-Only 4D Object Detection with Pretrained Monocular Backbones
Technical report
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this technical report, we present our solution, dubbed MV-FCOS3D++, for the Camera-Only 3D Detection track in Waymo Open Dataset Challenge 2022. For multi-view camera-only 3D detection, methods based on bird's-eye-view or 3D geometric representations can leverage the stereo cues from overlapped regions between adjacent views and directly perform 3D detection without hand-crafted post-processing. However, such methods lack direct semantic supervision for the 2D backbones, which can be complemented by pretraining simple monocular-based detectors. Our solution is a multi-view framework for 4D detection following this paradigm. It is built upon a simple monocular detector FCOS3D++, pretrained only with object annotations of Waymo, and converts multi-view features to a 3D grid space to detect 3D objects thereon. A dual-path neck for single-frame understanding and temporal stereo matching is devised to incorporate multi-frame information. Our method finally achieves 49.75% mAPL with a single model and wins 2nd place in the WOD challenge, without any LiDAR-based depth supervision during training. The code will be released at https://github.com/Tai-Wang/Depth-from-Motion.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 08:10:29 GMT" } ]
2022-07-27T00:00:00
[ [ "Wang", "Tai", "" ], [ "Lian", "Qing", "" ], [ "Zhu", "Chenming", "" ], [ "Zhu", "Xinge", "" ], [ "Zhang", "Wenwei", "" ] ]
new_dataset
0.999753
2207.12720
Marco Boresta
Marco Boresta, Tommaso Colombo, Alberto De Santis
Convolutional neural networks and multi-threshold analysis for contamination detection in the apparel industry
23 pages, 8 figures
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Quality control of apparel items is mandatory in the modern textile industry, as consumers' awareness and expectations about the highest possible standard are constantly increasing in favor of sustainable and ethical textile products. Such a level of quality is achieved by checking the product throughout its life cycle, from raw materials to boxed stock. Checks may include color shading tests, fastener fatigue tests, fabric weight tests, contamination tests, etc. This work deals specifically with the automatic detection of contaminations caused by small parts in the finished product, such as raw-material fragments (e.g., little stones and plastic bits) or items from the construction process (e.g., a whole needle or a clip). Identification is performed by a two-level processing of X-ray images of the items: in the first level, a multi-threshold analysis recognizes the contaminations by gray level and shape attributes; the second level consists of a deep learning classifier that has been trained to distinguish between true positives and false positives. The automatic detector was successfully deployed in an actual production plant, since the results satisfy the technical specification of the process, namely a rate of false negatives smaller than 3% and a rate of false positives smaller than 15%.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 08:21:41 GMT" } ]
2022-07-27T00:00:00
[ [ "Boresta", "Marco", "" ], [ "Colombo", "Tommaso", "" ], [ "De Santis", "Alberto", "" ] ]
new_dataset
0.970409
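The two-level pipeline in the apparel-contamination record above starts with a multi-threshold analysis that proposes small dense blobs as contamination candidates. A simplified sketch of that first level follows, using blob area as the only shape attribute and synthetic data; the thresholds, the size limits, and deferring true/false-positive separation to a CNN second stage are assumptions layered on the abstract's description.

```python
import numpy as np
from scipy import ndimage

def multi_threshold_candidates(img, thresholds, min_area=3, max_area=200):
    """Threshold an X-ray image at several gray levels and keep
    small connected blobs as contamination candidates."""
    found = []
    for t in thresholds:
        labels, n = ndimage.label(img > t)
        for i in range(1, n + 1):
            blob = labels == i
            area = int(blob.sum())
            if min_area <= area <= max_area:
                ys, xs = np.nonzero(blob)
                found.append({"threshold": t, "area": area,
                              "centroid": (float(ys.mean()), float(xs.mean()))})
    return found  # a second-level classifier would prune false positives

rng = np.random.default_rng(0)
img = rng.normal(0.2, 0.05, size=(64, 64))
img[10:13, 20:23] = 0.9                     # synthetic dense contaminant
print(multi_threshold_candidates(img, thresholds=[0.5, 0.7]))
```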
2207.12730
Jiang Bian
Jiang Bian, Qingzhong Wang, Haoyi Xiong, Jun Huang, Chen Liu, Xuhong Li, Jun Cheng, Jun Zhao, Feixiang Lu, Dejing Dou
$\textbf{P$^2$A}$: A Dataset and Benchmark for Dense Action Detection from Table Tennis Match Broadcasting Videos
null
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While deep learning has been widely used for video analytics, such as video classification and action detection, dense action detection with fast-moving subjects from sports videos is still challenging. In this work, we release yet another sports video dataset $\textbf{P$^2$A}$ for $\underline{P}$ing $\underline{P}$ong-$\underline{A}$ction detection, which consists of 2,721 video clips collected from the broadcasting videos of professional table tennis matches in World Table Tennis Championships and Olympiads. We work with a crew of table tennis professionals and referees to obtain fine-grained action labels (in 14 classes) for every ping-pong action that appeared in the dataset and formulate two sets of action detection problems - action localization and action recognition. We evaluate a number of commonly-seen action recognition (e.g., TSM, TSN, Video SwinTransformer, and Slowfast) and action localization models (e.g., BSN, BSN++, BMN, TCANet), using $\textbf{P$^2$A}$ for both problems, under various settings. These models can only achieve 48% area under the AR-AN curve for localization and 82% top-one accuracy for recognition, since the ping-pong actions are dense, with fast-moving subjects, while the broadcast videos run at only 25 FPS. The results confirm that $\textbf{P$^2$A}$ is still a challenging task and can be used as a benchmark for action detection from videos.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 08:34:17 GMT" } ]
2022-07-27T00:00:00
[ [ "Bian", "Jiang", "" ], [ "Wang", "Qingzhong", "" ], [ "Xiong", "Haoyi", "" ], [ "Huang", "Jun", "" ], [ "Liu", "Chen", "" ], [ "Li", "Xuhong", "" ], [ "Cheng", "Jun", "" ], [ "Zhao", "Jun", "" ], [ "Lu", "Feixiang", "" ], [ "Dou", "Dejing", "" ] ]
new_dataset
0.999661
2207.12886
Mohamed Essam
Mohamed Essam, Nagia M. Ghanem and Mohamed A. Ismail
Detection of road traffic crashes based on collision estimation
11 pages , 9 figures
ICDIPV (CS & IT) , 2022
10.5121/csit.2022.121213
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
This paper introduces a framework based on computer vision that can detect road traffic crashes (RTCs) by using the installed surveillance/CCTV camera and report them to emergency services in real time with the exact location and time of occurrence of the accident. The framework is built of five modules. We start with the detection of vehicles using the YOLO architecture; the second module is the tracking of vehicles using the MOSSE tracker; the third module is a new approach to detect accidents based on collision estimation; in the fourth module, for each vehicle, we detect whether there is a car accident based on the violent flow descriptor (ViF) followed by an SVM classifier for crash prediction. Finally, in the last stage, if there is a car accident, the system sends a notification to emergency services using a GPS module that provides the location, time, and date of the accident, with the help of a GSM module. The main objective is to achieve higher accuracy with fewer false alarms and to implement a simple system based on a pipelining technique.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 13:21:15 GMT" } ]
2022-07-27T00:00:00
[ [ "Essam", "Mohamed", "" ], [ "Ghanem", "Nagia M.", "" ], [ "Ismail", "Mohamed A.", "" ] ]
new_dataset
0.986866
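The crash-detection record above hinges on a collision-estimation step whose exact rule the abstract does not give. One common heuristic, sketched below, flags a candidate collision when two tracked bounding boxes begin to overlap while their centers are closing quickly; the IoU and closing-speed thresholds are illustrative assumptions, not the paper's values.

```python
def iou(a, b):
    """Intersection-over-union of two boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / float(area(a) + area(b) - inter)

def possible_collision(track_a, track_b, iou_thr=0.05, speed_thr=8.0):
    """track_*: per-frame boxes of two tracked vehicles. Flags a
    collision candidate when the boxes overlap while the closing
    speed of their centers (px/frame) exceeds a threshold."""
    def center(r):
        return ((r[0] + r[2]) / 2, (r[1] + r[3]) / 2)
    for (a0, a1), (b0, b1) in zip(zip(track_a, track_a[1:]),
                                  zip(track_b, track_b[1:])):
        (pax, pay), (pbx, pby) = center(a0), center(b0)
        (ax, ay), (bx, by) = center(a1), center(b1)
        closing = (((pax - pbx) ** 2 + (pay - pby) ** 2) ** 0.5
                   - ((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5)
        if iou(a1, b1) > iou_thr and closing > speed_thr:
            return True
    return False

a = [(0, 0, 20, 20), (30, 0, 50, 20), (58, 0, 78, 20)]
b = [(100, 0, 120, 20), (80, 0, 100, 20), (62, 0, 82, 20)]
print(possible_collision(a, b))  # True
```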
2207.12909
Zerui Chen
Zerui Chen, Yana Hasson, Cordelia Schmid, Ivan Laptev
AlignSDF: Pose-Aligned Signed Distance Fields for Hand-Object Reconstruction
Accepted by ECCV 2022. Project Page: https://zerchen.github.io/projects/alignsdf.html
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent work achieved impressive progress towards joint reconstruction of hands and manipulated objects from monocular color images. Existing methods focus on two alternative representations in terms of either parametric meshes or signed distance fields (SDFs). On one side, parametric models can benefit from prior knowledge at the cost of limited shape deformations and mesh resolutions. Mesh models, hence, may fail to precisely reconstruct details such as contact surfaces of hands and objects. SDF-based methods, on the other side, can represent arbitrary details but are lacking explicit priors. In this work we aim to improve SDF models using priors provided by parametric representations. In particular, we propose a joint learning framework that disentangles the pose and the shape. We obtain hand and object poses from parametric models and use them to align SDFs in 3D space. We show that such aligned SDFs better focus on reconstructing shape details and improve reconstruction accuracy both for hands and objects. We evaluate our method and demonstrate significant improvements over the state of the art on the challenging ObMan and DexYCB benchmarks.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 13:58:59 GMT" } ]
2022-07-27T00:00:00
[ [ "Chen", "Zerui", "" ], [ "Hasson", "Yana", "" ], [ "Schmid", "Cordelia", "" ], [ "Laptev", "Ivan", "" ] ]
new_dataset
0.999102
2207.12935
Mingming Fan
Xiaofu Jin and Mingming Fan
"I Used To Carry A Wallet, Now I Just Need To Carry My Phone": Understanding Current Banking Practices and Challenges Among Older Adults in China
The 24th International ACM SIGACCESS Conference on Computers and Accessibility (ASSETS '22)
null
10.1145/3517428.3544820
null
cs.HC cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Managing finances is crucial for older adults who are retired and may rely on savings to ensure their life quality. As digital banking platforms (e.g., mobile apps, electronic payment) gradually replace physical ones, it is critical to understand how they adapt to digital banking and the potential frictions they experience. We conducted semi-structured interviews with 16 older adults in China, where the aging population is the largest and digital banking grows fast. We also interviewed bank employees to gain complementary perspectives of these help givers. Our findings show that older adults used both physical and digital platforms as an ecosystem based on perceived pros and cons. Perceived usefulness, self-confidence, and social influence were key motivators for learning digital banking. They experienced app-related (e.g., insufficient error-recovery support) and user-related challenges (e.g., trust, security and privacy concerns, low perceived self-efficacy) and developed coping strategies. We discuss design considerations to improve their banking experiences.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 14:44:09 GMT" } ]
2022-07-27T00:00:00
[ [ "Jin", "Xiaofu", "" ], [ "Fan", "Mingming", "" ] ]
new_dataset
0.979197
2207.12955
Chuhui Xue
Chuhui Xue, Jiaxing Huang, Shijian Lu, Changhu Wang, Song Bai
Contextual Text Block Detection towards Scene Text Understanding
Accepted by ECCV2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most existing scene text detectors focus on detecting characters or words that only capture partial text messages due to missing contextual information. For a better understanding of text in scenes, it is more desired to detect contextual text blocks (CTBs) which consist of one or multiple integral text units (e.g., characters, words, or phrases) in natural reading order and transmit certain complete text messages. This paper presents contextual text detection, a new setup that detects CTBs for better understanding of texts in scenes. We formulate the new setup by a dual detection task which first detects integral text units and then groups them into a CTB. To this end, we design a novel scene text clustering technique that treats integral text units as tokens and groups them (belonging to the same CTB) into an ordered token sequence. In addition, we create two datasets SCUT-CTW-Context and ReCTS-Context to facilitate future research, where each CTB is well annotated by an ordered sequence of integral text units. Further, we introduce three metrics that measure contextual text detection in local accuracy, continuity, and global accuracy. Extensive experiments show that our method accurately detects CTBs which effectively facilitates downstream tasks such as text classification and translation. The project is available at https://sg-vilab.github.io/publication/xue2022contextual/.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 14:59:25 GMT" } ]
2022-07-27T00:00:00
[ [ "Xue", "Chuhui", "" ], [ "Huang", "Jiaxing", "" ], [ "Lu", "Shijian", "" ], [ "Wang", "Changhu", "" ], [ "Bai", "Song", "" ] ]
new_dataset
0.99927
2207.12966
Yijun Yan Dr
Yijun Yan, Jinchang Ren, He Sun
Nondestructive Quality Control in Powder Metallurgy using Hyperspectral Imaging
8 pages, 9 figures, 3 tables
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Measuring the purity of metal powder is critical for preserving the quality of additive manufacturing products. Contamination is one of the most troublesome problems: it can be caused by multiple factors and lead to cracking and malfunctions in the as-built components. Existing methods for metallurgical condition assessment are mostly time-consuming and mainly focus on the physical integrity of the structure rather than material composition. By capturing spectral data over a wide frequency range along with spatial information, hyperspectral imaging (HSI) can detect minor differences in terms of temperature, moisture and chemical composition. Therefore, HSI provides a unique way to tackle this challenge. In this paper, with the use of a near-infrared HSI camera, applications of HSI for the non-destructive inspection of metal powders are introduced. Technical assumptions and solutions for three step-by-step case studies are presented in detail, including powder characterization, contamination detection, and band selection analysis. Experimental results have fully demonstrated the great potential of HSI and related AI techniques for NDT of powder metallurgy, especially the potential to satisfy the requirements of industrial manufacturing environments.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 15:20:35 GMT" } ]
2022-07-27T00:00:00
[ [ "Yan", "Yijun", "" ], [ "Ren", "Jinchang", "" ], [ "Sun", "He", "" ] ]
new_dataset
0.992652
2207.13005
Zhenran Xu
Zhenran Xu, Zifei Shan, Yuxin Li, Baotian Hu, Bing Qin
Hansel: A Chinese Few-Shot and Zero-Shot Entity Linking Benchmark
19 pages, 3 figures, 12 tables. Dataset available at https://github.com/imryanxu/Hansel
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Modern Entity Linking (EL) systems entrench a popularity bias, yet there is no dataset focusing on tail and emerging entities in languages other than English. We present Hansel, a new benchmark in Chinese that fills the vacancy of non-English few-shot and zero-shot EL challenges. The test set of Hansel is human annotated and reviewed, created with a novel method for collecting zero-shot EL datasets. It covers 10K diverse documents in news, social media posts and other web articles, with Wikidata as its target Knowledge Base. We demonstrate that the existing state-of-the-art EL system performs poorly on Hansel (R@1 of 36.6% on Few-Shot). We then establish a strong baseline that scores a R@1 of 46.2% on Few-Shot and 76.6% on Zero-Shot on our dataset. We also show that our baseline achieves competitive results on TAC-KBP2015 Chinese Entity Linking task.
[ { "version": "v1", "created": "Tue, 26 Jul 2022 16:09:07 GMT" } ]
2022-07-27T00:00:00
[ [ "Xu", "Zhenran", "" ], [ "Shan", "Zifei", "" ], [ "Li", "Yuxin", "" ], [ "Hu", "Baotian", "" ], [ "Qin", "Bing", "" ] ]
new_dataset
0.996789
2207.13075
Tarik A. Rashid
Maryam T. Abdulkhaleq, Tarik A. Rashid, Abeer Alsadoon, Bryar A. Hassan, Mokhtar Mohammadi, Jaza M. Abdullah, Amit Chhabra, Sazan L. Ali, Rawshan N. Othman, Hadil A. Hasan, Sara Azad, Naz A. Mahmood, Sivan S. Abdalrahman, Hezha O. Rasul, Nebojsa Bacanin, S.Vimal
Harmony Search: Current Studies and Uses on Healthcare Systems
37 pages
Artificial Intelligence in Medicine, 2022
10.1016/j.artmed.2022.102348
null
cs.NE cs.CY
http://creativecommons.org/licenses/by/4.0/
One of the popular metaheuristic search algorithms is Harmony Search (HS). It has been verified that HS can find solutions to optimization problems due to its balanced exploratory and convergence behavior and its simple and flexible structure. This capability makes the algorithm preferable for application to several real-world problems in various fields, including healthcare systems, different engineering fields, and computer science. The popularity of HS motivates us to provide a comprehensive survey of the literature on HS and its variants in health systems, analyze its strengths and weaknesses, and suggest future research directions. In this review paper, the current studies and uses of harmony search are examined in four main domains: (i) the variants of HS, including its modifications and hybridizations; (ii) a summary of previous review works; (iii) applications of HS in healthcare systems; and (iv) an operational framework proposed for the applications of HS in healthcare systems. The main contribution of this review is to provide a thorough examination of HS in healthcare systems while also serving as a valuable resource for prospective scholars who want to investigate or implement this method.
[ { "version": "v1", "created": "Tue, 19 Jul 2022 08:37:42 GMT" } ]
2022-07-27T00:00:00
[ [ "Abdulkhaleq", "Maryam T.", "" ], [ "Rashid", "Tarik A.", "" ], [ "Alsadoon", "Abeer", "" ], [ "Hassan", "Bryar A.", "" ], [ "Mohammadi", "Mokhtar", "" ], [ "Abdullah", "Jaza M.", "" ], [ "Chhabra", "Amit", "" ], [ "Ali", "Sazan L.", "" ], [ "Othman", "Rawshan N.", "" ], [ "Hasan", "Hadil A.", "" ], [ "Azad", "Sara", "" ], [ "Mahmood", "Naz A.", "" ], [ "Abdalrahman", "Sivan S.", "" ], [ "Rasul", "Hezha O.", "" ], [ "Bacanin", "Nebojsa", "" ], [ "Vimal", "S.", "" ] ]
new_dataset
0.957146
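Since the survey above centers on Harmony Search, a minimal canonical implementation may help fix ideas: a harmony memory of candidate solutions, memory consideration with rate HMCR, pitch adjustment with rate PAR and bandwidth bw, and replacement of the worst member. The parameter defaults below are common textbook choices, not values prescribed by the survey.

```python
import random

def harmony_search(f, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05, iters=2000):
    """Minimal canonical Harmony Search for continuous minimization."""
    dim = len(bounds)
    hm = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fit = [f(h) for h in hm]
    for _ in range(iters):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if random.random() < hmcr:          # memory consideration
                x = random.choice(hm)[j]
                if random.random() < par:       # pitch adjustment
                    x += random.uniform(-bw, bw) * (hi - lo)
            else:                               # random selection
                x = random.uniform(lo, hi)
            new.append(min(max(x, lo), hi))
        worst = max(range(hms), key=fit.__getitem__)
        fx = f(new)
        if fx < fit[worst]:                     # replace the worst harmony
            hm[worst], fit[worst] = new, fx
    best = min(range(hms), key=fit.__getitem__)
    return hm[best], fit[best]

sphere = lambda v: sum(x * x for x in v)
x, fx = harmony_search(sphere, bounds=[(-5, 5)] * 4)
print(x, fx)  # near the origin, fx close to 0
```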
2104.09035
Liang Peng
Liang Peng, Fei Liu, Zhengxu Yu, Senbo Yan, Dan Deng, Zheng Yang, Haifeng Liu, Deng Cai
Lidar Point Cloud Guided Monocular 3D Object Detection
ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Monocular 3D object detection is a challenging task in the self-driving and computer vision community. As a common practice, most previous works use manually annotated 3D box labels, where the annotating process is expensive. In this paper, we find that precisely and carefully annotated labels may be unnecessary in monocular 3D detection, which is an interesting and counterintuitive finding. Using rough labels that are randomly disturbed, the detector can achieve accuracy very close to that of a detector trained with the ground-truth labels. We delve into this underlying mechanism and then empirically find that, concerning label accuracy, accuracy in the 3D location part of the label matters more than in the other parts. Motivated by the conclusions above and considering the precise LiDAR 3D measurement, we propose a simple and effective framework, dubbed LiDAR point cloud guided monocular 3D object detection (LPCG). This framework is capable of either reducing the annotation costs or considerably boosting the detection accuracy without introducing extra annotation costs. Specifically, it generates pseudo labels from unlabeled LiDAR point clouds. Thanks to the accuracy of LiDAR measurements in 3D space, such pseudo labels can replace manually annotated labels in the training of monocular 3D detectors, since their 3D location information is precise. LPCG can be applied to any monocular 3D detector to fully use massive unlabeled data in a self-driving system. As a result, on the KITTI benchmark, we take first place on both monocular 3D and BEV (bird's-eye-view) detection by a significant margin. On the Waymo benchmark, our method using 10% of the labeled data achieves accuracy comparable to the baseline detector using 100% of the labeled data. The codes are released at https://github.com/SPengLiang/LPCG.
[ { "version": "v1", "created": "Mon, 19 Apr 2021 03:41:09 GMT" }, { "version": "v2", "created": "Wed, 8 Sep 2021 12:07:50 GMT" }, { "version": "v3", "created": "Mon, 25 Jul 2022 03:11:46 GMT" } ]
2022-07-26T00:00:00
[ [ "Peng", "Liang", "" ], [ "Liu", "Fei", "" ], [ "Yu", "Zhengxu", "" ], [ "Yan", "Senbo", "" ], [ "Deng", "Dan", "" ], [ "Yang", "Zheng", "" ], [ "Liu", "Haifeng", "" ], [ "Cai", "Deng", "" ] ]
new_dataset
0.951166
2106.00515
Pichao Wang
Pichao Wang and Xue Wang and Fan Wang and Ming Lin and Shuning Chang and Hao Li and Rong Jin
KVT: k-NN Attention for Boosting Vision Transformers
Accepted by ECCV 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Convolutional Neural Networks (CNNs) have dominated computer vision for years, due to their ability to capture locality and translation invariance. Recently, many vision transformer architectures have been proposed and they show promising performance. A key component in vision transformers is the fully-connected self-attention, which is more powerful than CNNs in modelling long-range dependencies. However, since the current dense self-attention uses all image patches (tokens) to compute the attention matrix, it may neglect the locality of image patches and involve noisy tokens (e.g., cluttered background and occlusion), leading to a slow training process and potential performance degradation. To address these problems, we propose the $k$-NN attention for boosting vision transformers. Specifically, instead of involving all the tokens for attention matrix calculation, we only select the top-$k$ similar tokens from the keys for each query to compute the attention map. The proposed $k$-NN attention naturally inherits the local bias of CNNs without introducing convolutional operations, as nearby tokens tend to be more similar than others. In addition, the $k$-NN attention allows for the exploration of long-range correlation and at the same time filters out irrelevant tokens by choosing the most similar tokens from the entire image. Despite its simplicity, we verify, both theoretically and empirically, that $k$-NN attention is powerful in speeding up training and distilling noise from input tokens. Extensive experiments are conducted using 11 different vision transformer architectures to verify that the proposed $k$-NN attention can work with any existing transformer architecture to improve its prediction performance. The codes are available at \url{https://github.com/damo-cv/KVT}. (A minimal numerical sketch of the top-$k$ selection follows this record.)
[ { "version": "v1", "created": "Fri, 28 May 2021 06:49:10 GMT" }, { "version": "v2", "created": "Wed, 12 Jan 2022 00:53:35 GMT" }, { "version": "v3", "created": "Fri, 22 Jul 2022 23:18:16 GMT" } ]
2022-07-26T00:00:00
[ [ "Wang", "Pichao", "" ], [ "Wang", "Xue", "" ], [ "Wang", "Fan", "" ], [ "Lin", "Ming", "" ], [ "Chang", "Shuning", "" ], [ "Li", "Hao", "" ], [ "Jin", "Rong", "" ] ]
new_dataset
0.999185
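As a minimal illustration of the $k$-NN attention described in the record above, here is a NumPy sketch: each query keeps only the scores of its top-$k$ most similar keys and the softmax is taken over the survivors. This is a hedged reimplementation from the abstract's description, not the authors' released code (see https://github.com/damo-cv/KVT for that); the function name, tensor shapes, and tie handling are our own assumptions.

import numpy as np

def knn_attention(q, k, v, top_k):
    # q: (n_q, d) queries, k: (n_k, d) keys, v: (n_k, d_v) values.
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                      # (n_q, n_k) scaled dot products
    # k-th largest score per query row; ties may let a few extra tokens through.
    kth = np.partition(scores, -top_k, axis=-1)[:, -top_k][:, None]
    masked = np.where(scores >= kth, scores, -np.inf)  # drop all but the top-k keys
    masked -= masked.max(axis=-1, keepdims=True)       # numerical stability
    weights = np.exp(masked)                           # exp(-inf) = 0 for masked keys
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                                 # (n_q, d_v)

# Usage on random tensors: 4 queries, 16 keys, each query attends to 4 keys.
rng = np.random.default_rng(0)
out = knn_attention(rng.normal(size=(4, 8)), rng.normal(size=(16, 8)),
                    rng.normal(size=(16, 8)), top_k=4)
assert out.shape == (4, 8)

Setting top_k = n_k recovers ordinary dense attention, which is consistent with the claim that the mechanism can be dropped into existing transformer architectures.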
2106.11239
Dan Jia
Dan Jia and Alexander Hermans and Bastian Leibe
2D vs. 3D LiDAR-based Person Detection on Mobile Robots
Shortened version accepted at the International Conference on Intelligent Robots and Systems (IROS) 2022
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Person detection is a crucial task for mobile robots navigating in human-populated environments. LiDAR sensors are promising for this task, thanks to their accurate depth measurements and large field of view. Two types of LiDAR sensors exist: 2D LiDAR sensors, which scan a single plane, and 3D LiDAR sensors, which scan multiple planes, thus forming a volume. How do they compare for the task of person detection? To answer this, we conduct a series of experiments using the public, large-scale JackRabbot dataset and state-of-the-art 2D and 3D LiDAR-based person detectors (DR-SPAAM and CenterPoint, respectively). Our experiments cover multiple aspects, ranging from a basic performance and speed comparison to a more detailed analysis of localization accuracy and robustness against distance and scene clutter. The insights from these experiments highlight the strengths and weaknesses of 2D and 3D LiDAR sensors as sources for person detection, and are especially valuable for designing mobile robots that will operate in close proximity to surrounding humans (e.g., service or social robots).
[ { "version": "v1", "created": "Mon, 21 Jun 2021 16:35:49 GMT" }, { "version": "v2", "created": "Mon, 25 Jul 2022 12:27:30 GMT" } ]
2022-07-26T00:00:00
[ [ "Jia", "Dan", "" ], [ "Hermans", "Alexander", "" ], [ "Leibe", "Bastian", "" ] ]
new_dataset
0.998239
2107.13824
Zeyu Hu
Zeyu Hu, Xuyang Bai, Jiaxiang Shang, Runze Zhang, Jiayu Dong, Xin Wang, Guangyuan Sun, Hongbo Fu, Chiew-Lan Tai
VMNet: Voxel-Mesh Network for Geodesic-Aware 3D Semantic Segmentation
V1: ICCV 2021 (Oral), supplementary materials included; V2: TPAMI (ICCV 2021 SI), supplementary materials included
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, sparse voxel-based methods have become the state of the art for 3D semantic segmentation of indoor scenes, thanks to powerful 3D CNNs. Nevertheless, being oblivious to the underlying geometry, voxel-based methods suffer from ambiguous features on spatially close objects and struggle to handle complex and irregular geometries due to the lack of geodesic information. In view of this, we present the Voxel-Mesh Network (VMNet), a novel 3D deep architecture that operates on voxel and mesh representations, leveraging both Euclidean and geodesic information. Intuitively, the Euclidean information extracted from voxels can offer contextual cues representing interactions between nearby objects, while the geodesic information extracted from meshes can help separate objects that are spatially close but have disconnected surfaces. To incorporate such information from the two domains, we design an intra-domain attentive module for effective feature aggregation and an inter-domain attentive module for adaptive feature fusion. Experimental results validate the effectiveness of VMNet: specifically, on the challenging ScanNet dataset for large-scale segmentation of indoor scenes, it outperforms the state-of-the-art SparseConvNet and MinkowskiNet (74.6% vs 72.5% and 73.6% in mIoU) with a simpler network structure (17M vs 30M and 38M parameters). Code release: https://github.com/hzykent/VMNet
[ { "version": "v1", "created": "Thu, 29 Jul 2021 08:41:14 GMT" }, { "version": "v2", "created": "Mon, 25 Jul 2022 06:58:20 GMT" } ]
2022-07-26T00:00:00
[ [ "Hu", "Zeyu", "" ], [ "Bai", "Xuyang", "" ], [ "Shang", "Jiaxiang", "" ], [ "Zhang", "Runze", "" ], [ "Dong", "Jiayu", "" ], [ "Wang", "Xin", "" ], [ "Sun", "Guangyuan", "" ], [ "Fu", "Hongbo", "" ], [ "Tai", "Chiew-Lan", "" ] ]
new_dataset
0.97296
2108.01806
Binh-Son Hua
Hong-Wing Pang, Yingshu Chen, Phuoc-Hieu Le, Binh-Son Hua, Duc Thanh Nguyen, Sai-Kit Yeung
Neural Scene Decoration from a Single Photograph
ECCV 2022 paper. 14 pages of main content, 4 pages of references, and 11 pages of appendix
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Furnishing and rendering indoor scenes has been a long-standing task in interior design, where artists create a conceptual design for the space, build a 3D model of the space, decorate, and then perform rendering. Although the task is important, it is tedious and requires tremendous effort. In this paper, we introduce a new problem of domain-specific indoor scene image synthesis, namely neural scene decoration. Given a photograph of an empty indoor space and a list of decorations with a layout determined by the user, we aim to synthesize a new image of the same space with the desired furnishing and decorations. Neural scene decoration can be applied to create conceptual interior designs in a simple yet effective manner. Our approach to this research problem is a novel scene generation architecture that transforms an empty scene and an object layout into a realistic furnished scene photograph. We demonstrate the performance of our proposed method by comparing it with conditional image synthesis baselines built upon prevailing image translation approaches, both qualitatively and quantitatively. We conduct extensive experiments to further validate the plausibility and aesthetics of our generated scenes. Our implementation is available at \url{https://github.com/hkust-vgd/neural_scene_decoration}.
[ { "version": "v1", "created": "Wed, 4 Aug 2021 01:44:21 GMT" }, { "version": "v2", "created": "Mon, 25 Jul 2022 14:11:37 GMT" } ]
2022-07-26T00:00:00
[ [ "Pang", "Hong-Wing", "" ], [ "Chen", "Yingshu", "" ], [ "Le", "Phuoc-Hieu", "" ], [ "Hua", "Binh-Son", "" ], [ "Nguyen", "Duc Thanh", "" ], [ "Yeung", "Sai-Kit", "" ] ]
new_dataset
0.998661
2108.12144
Qingyuan Liang
Qingyuan Liang, Zeyu Sun, Qihao Zhu, Wenjie Zhang, Lian Yu, Yingfei Xiong, Lu Zhang
Lyra: A Benchmark for Turducken-Style Code Generation
null
null
null
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
Recently, neural techniques have been used to generate source code automatically. While promising for declarative languages, these approaches achieve much poorer performance on datasets for imperative languages. Since a declarative language is typically embedded in an imperative language (i.e., turducken-style programming) in real-world software development, the promising results on declarative languages can hardly lead to a significant reduction in manual software development effort. In this paper, we define a new code generation task: given a natural language comment, this task aims to generate a program in a base imperative language with an embedded declarative language. To our knowledge, this is the first turducken-style code generation task. For this task, we present Lyra: a dataset in Python with embedded SQL. This dataset contains 2,000 carefully annotated database manipulation programs from real-world projects. Each program is paired with both a Chinese comment and an English comment. In our experiment, we adopted Transformer, BERT-style, and GPT-style models as baselines. In the best setting, the generation performance of GPT-style models is better than that of the others, with an AST exact matching accuracy of 24% and 25.5% when using Chinese and English comments, respectively. We therefore believe that Lyra provides a new challenge for code generation. Yet, overcoming this challenge may significantly boost the applicability of code generation techniques for real-world software development.
[ { "version": "v1", "created": "Fri, 27 Aug 2021 07:22:55 GMT" }, { "version": "v2", "created": "Wed, 4 May 2022 15:59:44 GMT" }, { "version": "v3", "created": "Sun, 24 Jul 2022 04:54:17 GMT" } ]
2022-07-26T00:00:00
[ [ "Liang", "Qingyuan", "" ], [ "Sun", "Zeyu", "" ], [ "Zhu", "Qihao", "" ], [ "Zhang", "Wenjie", "" ], [ "Yu", "Lian", "" ], [ "Xiong", "Yingfei", "" ], [ "Zhang", "Lu", "" ] ]
new_dataset
0.999652
2110.07718
Yunxiao Qin
Yunxiao Qin, Yuanhao Xiong, Jinfeng Yi, Lihong Cao, Cho-Jui Hsieh
Adversarial Attack across Datasets
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing transfer attack methods commonly assume that the attacker knows the training set (e.g., the label set, the input size) of the black-box victim models, which is usually unrealistic because in some cases the attacker cannot know this information. In this paper, we define a Generalized Transferable Attack (GTA) problem in which the attacker does not know this information and is required to attack any randomly encountered images that may come from unknown datasets. To solve the GTA problem, we propose a novel Image Classification Eraser (ICE) that trains a particular attacker to erase the classification information of any images from arbitrary datasets. Experiments on several datasets demonstrate that ICE greatly outperforms existing transfer attacks on GTA, and show that ICE uses similar texture-like noises to perturb different images from different datasets. Moreover, fast Fourier transform analysis indicates that the main components in each ICE noise are three sine waves for the R, G, and B image channels. Inspired by this interesting finding, we then design a novel Sine Attack (SA) method to optimize the three sine waves. Experiments show that SA performs comparably to ICE, indicating that the three sine waves are effective and sufficient to break DNNs under the GTA setting. (An illustrative construction of such per-channel sine perturbations follows this record.)
[ { "version": "v1", "created": "Wed, 13 Oct 2021 02:07:40 GMT" }, { "version": "v2", "created": "Mon, 25 Jul 2022 14:21:12 GMT" } ]
2022-07-26T00:00:00
[ [ "Qin", "Yunxiao", "" ], [ "Xiong", "Yuanhao", "" ], [ "Yi", "Jinfeng", "" ], [ "Cao", "Lihong", "" ], [ "Hsieh", "Cho-Jui", "" ] ]
new_dataset
0.998717
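As a minimal illustration of the Sine Attack finding in the record above (each ICE noise is dominated by one sine wave per R, G, B channel), here is a NumPy sketch that builds such a three-sine perturbation. The abstract does not specify the spatial layout or how the wave parameters are optimized, so the horizontal orientation, the (amplitude, frequency, phase) parameterization, and the L-infinity budget are our own assumptions; in SA these parameters would be optimized against the victim model.

import numpy as np

def sine_perturbation(h, w, params, eps=8.0 / 255.0):
    # params: three (amplitude, frequency, phase) triples, one per RGB channel.
    x = np.linspace(0.0, 1.0, w)
    waves = np.stack([a * np.sin(2.0 * np.pi * f * x + p)
                      for a, f, p in params], axis=-1)  # (w, 3): one wave per channel
    noise = np.broadcast_to(waves, (h, w, 3))           # repeat the wave down the rows
    return np.clip(noise, -eps, eps)                    # keep within an L-inf budget

# Perturb a random stand-in image in [0, 1]; real use would tune the triples.
image = np.random.default_rng(1).uniform(size=(32, 32, 3))
adv = np.clip(image + sine_perturbation(32, 32,
      [(0.03, 5.0, 0.0), (0.03, 7.0, 1.0), (0.03, 11.0, 2.0)]), 0.0, 1.0)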
2110.09004
Diwei Sheng
Diwei Sheng, Yuxiang Chai, Xinru Li, Chen Feng, Jianzhe Lin, Claudio Silva, John-Ross Rizzo
NYU-VPR: Long-Term Visual Place Recognition Benchmark with View Direction and Data Anonymization Influences
8 pages, 10 figures, published in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2021)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Visual place recognition (VPR) is critical not only in localization and mapping for autonomous driving vehicles, but also in assistive navigation for the visually impaired population. To enable a long-term VPR system on a large scale, several challenges need to be addressed. First, different applications could require different image view directions, such as front views for self-driving cars and side views for people with low vision. Second, VPR in metropolitan scenes can often cause privacy concerns due to the imaging of pedestrian and vehicle identity information, calling for data anonymization before VPR queries and database construction. Both factors could lead to VPR performance variations that are not yet well understood. To study their influences, we present the NYU-VPR dataset, which contains more than 200,000 images over a 2 km by 2 km area near the New York University campus, taken throughout the whole year of 2016. We present benchmark results for several popular VPR algorithms showing that side views are significantly more challenging for current VPR methods, while the influence of data anonymization is almost negligible, together with our hypothetical explanations and an in-depth analysis.
[ { "version": "v1", "created": "Mon, 18 Oct 2021 03:56:33 GMT" }, { "version": "v2", "created": "Mon, 25 Jul 2022 05:43:04 GMT" } ]
2022-07-26T00:00:00
[ [ "Sheng", "Diwei", "" ], [ "Chai", "Yuxiang", "" ], [ "Li", "Xinru", "" ], [ "Feng", "Chen", "" ], [ "Lin", "Jianzhe", "" ], [ "Silva", "Claudio", "" ], [ "Rizzo", "John-Ross", "" ] ]
new_dataset
0.998923
2110.13214
Pan Lu
Pan Lu, Liang Qiu, Jiaqi Chen, Tony Xia, Yizhou Zhao, Wei Zhang, Zhou Yu, Xiaodan Liang, Song-Chun Zhu
IconQA: A New Benchmark for Abstract Diagram Understanding and Visual Language Reasoning
Corrected typos. Accepted to NeurIPS 2021, 27 pages, 18 figures. Data and code are available at https://iconqa.github.io
null
null
null
cs.CV cs.AI cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
Current visual question answering (VQA) tasks mainly consider answering human-annotated questions for natural images. However, aside from natural images, abstract diagrams with semantic richness are still understudied in visual understanding and reasoning research. In this work, we introduce a new challenge of Icon Question Answering (IconQA) with the goal of answering a question in an icon image context. We release IconQA, a large-scale dataset that consists of 107,439 questions and three sub-tasks: multi-image-choice, multi-text-choice, and filling-in-the-blank. The IconQA dataset is inspired by real-world diagram word problems that highlight the importance of abstract diagram understanding and comprehensive cognitive reasoning. Thus, IconQA requires not only perception skills like object recognition and text understanding, but also diverse cognitive reasoning skills, such as geometric reasoning, commonsense reasoning, and arithmetic reasoning. To facilitate potential IconQA models to learn semantic representations for icon images, we further release an icon dataset Icon645 which contains 645,687 colored icons on 377 classes. We conduct extensive user studies and blind experiments and reproduce a wide range of advanced VQA methods to benchmark the IconQA task. Also, we develop a strong IconQA baseline Patch-TRM that applies a pyramid cross-modal Transformer with input diagram embeddings pre-trained on the icon dataset. IconQA and Icon645 are available at https://iconqa.github.io.
[ { "version": "v1", "created": "Mon, 25 Oct 2021 18:52:26 GMT" }, { "version": "v2", "created": "Sun, 7 Nov 2021 00:44:53 GMT" }, { "version": "v3", "created": "Sun, 20 Feb 2022 01:09:40 GMT" }, { "version": "v4", "created": "Mon, 25 Jul 2022 04:05:29 GMT" } ]
2022-07-26T00:00:00
[ [ "Lu", "Pan", "" ], [ "Qiu", "Liang", "" ], [ "Chen", "Jiaqi", "" ], [ "Xia", "Tony", "" ], [ "Zhao", "Yizhou", "" ], [ "Zhang", "Wei", "" ], [ "Yu", "Zhou", "" ], [ "Liang", "Xiaodan", "" ], [ "Zhu", "Song-Chun", "" ] ]
new_dataset
0.999858
2112.05892
Honglu Zhou
Honglu Zhou, Asim Kadav, Aviv Shamsian, Shijie Geng, Farley Lai, Long Zhao, Ting Liu, Mubbasir Kapadia, Hans Peter Graf
COMPOSER: Compositional Reasoning of Group Activity in Videos with Keypoint-Only Modality
ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Group Activity Recognition detects the activity collectively performed by a group of actors, which requires compositional reasoning over actors and objects. We approach the task by modeling the video as tokens that represent the multi-scale semantic concepts in the video. We propose COMPOSER, a Multiscale Transformer based architecture that performs attention-based reasoning over tokens at each scale and learns group activity compositionally. In addition, prior works suffer from scene biases and raise privacy and ethical concerns. We only use the keypoint modality, which reduces scene biases and prevents acquiring detailed visual data that may contain private or biased information about users. We improve the multiscale representations in COMPOSER by clustering the intermediate scale representations, while maintaining consistent cluster assignments between scales. Finally, we use techniques such as auxiliary prediction and data augmentations tailored to the keypoint signals to aid model training. We demonstrate the model's strength and interpretability on two widely-used datasets (Volleyball and Collective Activity). COMPOSER achieves up to +5.4% improvement with just the keypoint modality. Code is available at https://github.com/hongluzhou/composer
[ { "version": "v1", "created": "Sat, 11 Dec 2021 01:25:46 GMT" }, { "version": "v2", "created": "Sun, 20 Mar 2022 03:35:16 GMT" }, { "version": "v3", "created": "Mon, 25 Jul 2022 00:38:32 GMT" } ]
2022-07-26T00:00:00
[ [ "Zhou", "Honglu", "" ], [ "Kadav", "Asim", "" ], [ "Shamsian", "Aviv", "" ], [ "Geng", "Shijie", "" ], [ "Lai", "Farley", "" ], [ "Zhao", "Long", "" ], [ "Liu", "Ting", "" ], [ "Kapadia", "Mubbasir", "" ], [ "Graf", "Hans Peter", "" ] ]
new_dataset
0.998089
2201.05961
Hanjia Lyu
Arsal Imtiaz, Danish Khan, Hanjia Lyu, Jiebo Luo
Taking sides: Public Opinion over the Israel-Palestine Conflict in 2021
Accepted for publication in Proceedings of the International Workshop on Social Sensing (SocialSens 2022): Special Edition on Belief Dynamics, 2022
null
null
null
cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Israel-Palestine Conflict, one of the most enduring conflicts in history, dates back to the start of the 20th century with the establishment of the British Mandate in Palestine, and involves deeply rooted, complex issues in politics, demography, religion, and other areas, making it harder to resolve. To understand the conflict in 2021, we devise an observational study to aggregate the stance held in English-speaking countries. We collect Twitter data using popular hashtags around and specific to the conflict, portraying opinions neutral or partial to the two parties. We use different tools and methods to classify tweets as pro-Palestinian, pro-Israel, or neutral. This paper further describes the implementation of data mining methodologies to obtain insights and reason about the stance held by citizens regarding the conflict.
[ { "version": "v1", "created": "Sun, 16 Jan 2022 04:03:36 GMT" }, { "version": "v2", "created": "Sat, 23 Jul 2022 23:56:36 GMT" } ]
2022-07-26T00:00:00
[ [ "Imtiaz", "Arsal", "" ], [ "Khan", "Danish", "" ], [ "Lyu", "Hanjia", "" ], [ "Luo", "Jiebo", "" ] ]
new_dataset
0.997858
2202.04800
Jack Hessel
Jack Hessel and Jena D. Hwang and Jae Sung Park and Rowan Zellers and Chandra Bhagavatula and Anna Rohrbach and Kate Saenko and Yejin Choi
The Abduction of Sherlock Holmes: A Dataset for Visual Abductive Reasoning
code, data, models at http://visualabduction.com/
ECCV 2022
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Humans have remarkable capacity to reason abductively and hypothesize about what lies beyond the literal content of an image. By identifying concrete visual clues scattered throughout a scene, we almost can't help but draw probable inferences beyond the literal scene based on our everyday experience and knowledge about the world. For example, if we see a "20 mph" sign alongside a road, we might assume the street sits in a residential area (rather than on a highway), even if no houses are pictured. Can machines perform similar visual reasoning? We present Sherlock, an annotated corpus of 103K images for testing machine capacity for abductive reasoning beyond literal image contents. We adopt a free-viewing paradigm: participants first observe and identify salient clues within images (e.g., objects, actions) and then provide a plausible inference about the scene, given the clue. In total, we collect 363K (clue, inference) pairs, which form a first-of-its-kind abductive visual reasoning dataset. Using our corpus, we test three complementary axes of abductive reasoning. We evaluate the capacity of models to: i) retrieve relevant inferences from a large candidate corpus; ii) localize evidence for inferences via bounding boxes, and iii) compare plausible inferences to match human judgments on a newly-collected diagnostic corpus of 19K Likert-scale judgments. While we find that fine-tuning CLIP-RN50x64 with a multitask objective outperforms strong baselines, significant headroom exists between model performance and human agreement. Data, models, and leaderboard available at http://visualabduction.com/
[ { "version": "v1", "created": "Thu, 10 Feb 2022 02:26:45 GMT" }, { "version": "v2", "created": "Mon, 25 Jul 2022 17:26:06 GMT" } ]
2022-07-26T00:00:00
[ [ "Hessel", "Jack", "" ], [ "Hwang", "Jena D.", "" ], [ "Park", "Jae Sung", "" ], [ "Zellers", "Rowan", "" ], [ "Bhagavatula", "Chandra", "" ], [ "Rohrbach", "Anna", "" ], [ "Saenko", "Kate", "" ], [ "Choi", "Yejin", "" ] ]
new_dataset
0.999796
2202.10448
Deepak Pathak
Aravind Sivakumar, Kenneth Shaw, Deepak Pathak
Robotic Telekinesis: Learning a Robotic Hand Imitator by Watching Humans on Youtube
RSS 2022 final version. Website and demos at https://robotic-telekinesis.github.io/
null
null
null
cs.RO cs.AI cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We build a system that enables any human to control a robot hand and arm, simply by demonstrating motions with their own hand. The robot observes the human operator via a single RGB camera and imitates their actions in real-time. Human hands and robot hands differ in shape, size, and joint structure, and performing this translation from a single uncalibrated camera is a highly underconstrained problem. Moreover, the retargeted trajectories must effectively execute tasks on a physical robot, which requires them to be temporally smooth and free of self-collisions. Our key insight is that while paired human-robot correspondence data is expensive to collect, the internet contains a massive corpus of rich and diverse human hand videos. We leverage this data to train a system that understands human hands and retargets a human video stream into a robot hand-arm trajectory that is smooth, swift, safe, and semantically similar to the guiding demonstration. We demonstrate that it enables previously untrained people to teleoperate a robot on various dexterous manipulation tasks. Our low-cost, glove-free, marker-free remote teleoperation system makes robot teaching more accessible and we hope that it can aid robots in learning to act autonomously in the real world. Videos at https://robotic-telekinesis.github.io/
[ { "version": "v1", "created": "Mon, 21 Feb 2022 18:59:59 GMT" }, { "version": "v2", "created": "Sun, 24 Jul 2022 06:08:35 GMT" } ]
2022-07-26T00:00:00
[ [ "Sivakumar", "Aravind", "" ], [ "Shaw", "Kenneth", "" ], [ "Pathak", "Deepak", "" ] ]
new_dataset
0.999193
2203.02118
Ruixiang Cao
Ruixiang Cao, Jun Gu, Chen Yu and Andre Rosendo
OmniWheg: An Omnidirectional Wheel-Leg Transformable Robot
6 pages, 10 figures, IROS
null
null
null
cs.RO cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents the design, analysis, and performance evaluation of an omnidirectional transformable wheel-leg robot called OmniWheg. We design a novel mechanism consisting of a separable omni-wheel and 4-bar linkages, allowing the robot to transform between omni-wheeled and legged modes smoothly. In wheeled mode, the robot can move in all directions and efficiently adjust the relative position of its wheels, while it can overcome common obstacles in legged mode, such as stairs and steps. Unlike other articles studying whegs, this implementation with omnidirectional wheels allows the correction of misalignments between right and left wheels before traversing obstacles, which effectively improves the success rate and simplifies the preparation process before the wheel-leg transformation. We describe the design concept, the mechanism, and the dynamic characteristics of the wheel-leg structure. We then evaluate its performance in various scenarios, including passing obstacles, climbing steps of different heights, and turning/moving omnidirectionally. Our results confirm that this mobile platform can overcome common indoor obstacles and move flexibly on flat ground with the new transformable wheel-leg mechanism, while maintaining a high degree of stability.
[ { "version": "v1", "created": "Fri, 4 Mar 2022 03:23:02 GMT" }, { "version": "v2", "created": "Mon, 25 Jul 2022 13:16:46 GMT" } ]
2022-07-26T00:00:00
[ [ "Cao", "Ruixiang", "" ], [ "Gu", "Jun", "" ], [ "Yu", "Chen", "" ], [ "Rosendo", "Andre", "" ] ]
new_dataset
0.97217
2204.05483
Stefan Larson
Stefan Larson, Kevin Leach
Redwood: Using Collision Detection to Grow a Large-Scale Intent Classification Dataset
SIGDIAL 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Dialog systems must be capable of incorporating new skills via updates over time in order to reflect new use cases or deployment scenarios. Similarly, developers of such ML-driven systems need to be able to add new training data to an already-existing dataset to support these new skills. In intent classification systems, problems can arise if training data for a new skill's intent overlaps semantically with an already-existing intent. We call such cases collisions. This paper introduces the task of intent collision detection between multiple datasets for the purposes of growing a system's skillset. We introduce several methods for detecting collisions, and evaluate our methods on real datasets that exhibit collisions. To highlight the need for intent collision detection, we show that model performance suffers if new data is added in such a way that does not arbitrate colliding intents. Finally, we use collision detection to construct and benchmark a new dataset, Redwood, which is composed of 451 intent categories from 13 original intent classification datasets, making it the largest publicly available intent classification benchmark.
[ { "version": "v1", "created": "Tue, 12 Apr 2022 02:28:23 GMT" }, { "version": "v2", "created": "Mon, 25 Jul 2022 16:57:42 GMT" } ]
2022-07-26T00:00:00
[ [ "Larson", "Stefan", "" ], [ "Leach", "Kevin", "" ] ]
new_dataset
0.998129
2205.02301
Dorian F. Henning
Dorian F. Henning, Tristan Laidlow, Stefan Leutenegger
BodySLAM: Joint Camera Localisation, Mapping, and Human Motion Tracking
ECCV 2022. Video: https://youtu.be/0-SL3VeWEvU
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating human motion from video is an active research area due to its many potential applications. Most state-of-the-art methods predict human shape and posture estimates for individual images and do not leverage the temporal information available in video. Many "in the wild" sequences of human motion are captured by a moving camera, which adds the complication of conflated camera and human motion to the estimation. We therefore present BodySLAM, a monocular SLAM system that jointly estimates the position, shape, and posture of human bodies, as well as the camera trajectory. We also introduce a novel human motion model to constrain sequential body postures and observe the scale of the scene. Through a series of experiments on video sequences of human motion captured by a moving monocular camera, we demonstrate that BodySLAM improves estimates of all human body parameters and camera poses when compared to estimating these separately.
[ { "version": "v1", "created": "Wed, 4 May 2022 19:38:26 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 13:26:30 GMT" }, { "version": "v3", "created": "Sun, 24 Jul 2022 20:52:48 GMT" } ]
2022-07-26T00:00:00
[ [ "Henning", "Dorian F.", "" ], [ "Laidlow", "Tristan", "" ], [ "Leutenegger", "Stefan", "" ] ]
new_dataset
0.998188
2205.02428
Guanzhou Li
Guanzhou Li, Jianping Wu, Yujing He
HARL: A Novel Hierarchical Adversary Reinforcement Learning for Autonomous Intersection Management
null
null
null
null
cs.MA cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As an emerging technology, Connected Autonomous Vehicles (CAVs) are believed to have the ability to move through intersections faster and more safely, through effective Vehicle-to-Everything (V2X) communication and global observation. Autonomous intersection management (AIM) is a key path to efficient crossing at intersections, reducing unnecessary slowdowns and stops through the adaptive decision process of each CAV and enabling fuller utilization of the intersection space. Distributed reinforcement learning (DRL) offers a flexible, end-to-end model for AIM that adapts to many intersection scenarios. However, DRL is prone to collisions, because the actions of multiple sides in complicated interactions are sampled from a generic policy, which restricts the application of DRL in realistic scenarios. To address this, we propose a hierarchical RL framework in which models at different levels vary in receptive scope, action step length, and the feedback period of the reward. The upper-level model accelerates CAVs to prevent them from colliding, while the lower-level model adjusts the tendencies from the upper-level model to keep changes in the mobile state from causing new conflicts. The actual action of each CAV at every step is co-determined by the tendencies from both levels, forming a real-time balance in the adversarial process. The proposed model is proven effective in an experiment undertaken at a complicated intersection with four branches and four lanes per branch, and shows better performance compared with baselines.
[ { "version": "v1", "created": "Thu, 5 May 2022 04:07:13 GMT" }, { "version": "v2", "created": "Tue, 14 Jun 2022 06:19:46 GMT" }, { "version": "v3", "created": "Mon, 20 Jun 2022 07:31:04 GMT" }, { "version": "v4", "created": "Sat, 23 Jul 2022 15:34:27 GMT" } ]
2022-07-26T00:00:00
[ [ "Li", "Guanzhou", "" ], [ "Wu", "Jianping", "" ], [ "He", "Yujing", "" ] ]
new_dataset
0.999148
2205.03146
Piotr Mirowski
Piotr Mirowski, Dylan Banarse, Mateusz Malinowski, Simon Osindero, Chrisantha Fernando
CLIP-CLOP: CLIP-Guided Collage and Photomontage
5 pages, 7 figures, published at the International Conference on Computational Creativity (ICCC) 2022 as Short Paper: Demo
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
The unabated mystique of large-scale neural networks, such as the CLIP dual image-and-text encoder, popularized automatically generated art. Increasingly more sophisticated generators enhanced the artworks' realism and visual appearance, and creative prompt engineering enabled stylistic expression. Guided by an artist-in-the-loop ideal, we design a gradient-based generator to produce collages. It requires the human artist to curate libraries of image patches and to describe (with prompts) the whole image composition, with the option to manually adjust the patches' positions during generation, thereby allowing humans to reclaim some control of the process and achieve greater creative freedom. We explore the aesthetic potentials of high-resolution collages, and provide an open-source Google Colab as an artistic tool.
[ { "version": "v1", "created": "Fri, 6 May 2022 11:33:49 GMT" }, { "version": "v2", "created": "Thu, 19 May 2022 13:39:32 GMT" }, { "version": "v3", "created": "Sun, 24 Jul 2022 14:47:50 GMT" } ]
2022-07-26T00:00:00
[ [ "Mirowski", "Piotr", "" ], [ "Banarse", "Dylan", "" ], [ "Malinowski", "Mateusz", "" ], [ "Osindero", "Simon", "" ], [ "Fernando", "Chrisantha", "" ] ]
new_dataset
0.999233
2205.11925
Aldi Piroli
Aldi Piroli, Vinzenz Dallabetta, Marc Walessa, Daniel Meissner, Johannes Kopp, Klaus Dietmayer
Robust 3D Object Detection in Cold Weather Conditions
Oral
2022 IEEE Intelligent Vehicles Symposium (IV)
10.1109/IV51971.2022.9827398
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Adverse weather conditions can negatively affect LiDAR-based object detectors. In this work, we focus on the phenomenon of vehicle gas exhaust condensation in cold weather conditions. This everyday effect can influence the estimation of object sizes, orientations and introduce ghost object detections, compromising the reliability of the state of the art object detectors. We propose to solve this problem by using data augmentation and a novel training loss term. To effectively train deep neural networks, a large set of labeled data is needed. In case of adverse weather conditions, this process can be extremely laborious and expensive. We address this issue in two steps: First, we present a gas exhaust data generation method based on 3D surface reconstruction and sampling which allows us to generate large sets of gas exhaust clouds from a small pool of labeled data. Second, we introduce a point cloud augmentation process that can be used to add gas exhaust to datasets recorded in good weather conditions. Finally, we formulate a new training loss term that leverages the augmented point cloud to increase object detection robustness by penalizing predictions that include noise. In contrast to other works, our method can be used with both grid-based and point-based detectors. Moreover, since our approach does not require any network architecture changes, inference times remain unchanged. Experimental results on real data show that our proposed method greatly increases robustness to gas exhaust and noisy data.
[ { "version": "v1", "created": "Tue, 24 May 2022 09:37:07 GMT" }, { "version": "v2", "created": "Mon, 25 Jul 2022 14:18:03 GMT" } ]
2022-07-26T00:00:00
[ [ "Piroli", "Aldi", "" ], [ "Dallabetta", "Vinzenz", "" ], [ "Walessa", "Marc", "" ], [ "Meissner", "Daniel", "" ], [ "Kopp", "Johannes", "" ], [ "Dietmayer", "Klaus", "" ] ]
new_dataset
0.993627
2205.14412
Yuepeng Qian
Yuepeng Qian, Shuaishuai Han, Gabriel Aguirre-Ollinger, Chenglong Fu and Haoyong Yu
Design, Modelling, and Control of a Reconfigurable Rotary Series Elastic Actuator with Nonlinear Stiffness for Assistive Robots
null
Mechatronics 86 (2022) 102872
10.1016/j.mechatronics.2022.102872
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
In assistive robots, the compliant actuator is a key component in establishing safe and satisfactory physical human-robot interaction (pHRI). The performance of compliant actuators largely depends on the stiffness of the elastic element. Generally, low stiffness is desirable to achieve low impedance, high-fidelity force control and safe pHRI, while high stiffness is required to ensure sufficient force bandwidth and output force. These requirements, however, are contradictory and often vary according to different tasks and conditions. In order to address the contradiction of stiffness selection and improve adaptability to different applications, we develop a reconfigurable rotary series elastic actuator with nonlinear stiffness (RRSEAns) for assistive robots. In this paper, an accurate model of the reconfigurable rotary series elastic element (RSEE) is presented and the adjusting principles are investigated, followed by detailed analysis and experimental validation. The RRSEAns can provide a wide range of stiffness from 0.095 Nm/deg to 2.33 Nm/deg, and different stiffness profiles can be obtained for different configurations of the reconfigurable RSEE. The overall performance of the RRSEAns is verified by experiments on frequency response, torque control and pHRI, which is adequate for most applications in assistive robots. Specifically, the root-mean-square (RMS) error of the interaction torque is as low as 0.07 Nm in transparent/human-in-charge mode, demonstrating the advantages of the RRSEAns in pHRI.
[ { "version": "v1", "created": "Sat, 28 May 2022 12:11:23 GMT" } ]
2022-07-26T00:00:00
[ [ "Qian", "Yuepeng", "" ], [ "Han", "Shuaishuai", "" ], [ "Aguirre-Ollinger", "Gabriel", "" ], [ "Fu", "Chenglong", "" ], [ "Yu", "Haoyong", "" ] ]
new_dataset
0.975422
2206.10735
G\"okberk Erdo\u{g}an
G\"okberk Erdo\u{g}an, Georg Maringer, Nikita Polyanskii
Signature Codes for a Noisy Adder Multiple Access Channel
12 pages, 0 figures, submitted to 2022 IEEE Information Theory Workshop
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
In this work, we consider $q$-ary signature codes of length $k$ and size $n$ for a noisy adder multiple access channel. A signature code in this model has the property that any subset of codewords can be uniquely reconstructed based on any vector that is obtained from the sum (over integers) of these codewords. We show that there exists an algorithm to construct a signature code of length $k = \frac{2n\log{3}}{(1-2\tau)\left(\log{n} + (q-1)\log{\frac{\pi}{2}}\right)} +\mathcal{O}\left(\frac{n}{\log{n}(q+\log{n})}\right)$ capable of correcting $\tau k$ errors at the channel output, where $0\le \tau < \frac{q-1}{2q}$. Furthermore, we present an explicit construction of signature codewords with polynomial complexity being able to correct up to $\left( \frac{q-1}{8q} - \epsilon\right)k$ errors for a codeword length $k = \mathcal{O} \left ( \frac{n}{\log \log n} \right )$, where $\epsilon$ is a small non-negative number. Moreover, we prove several non-existence results (converse bounds) for $q$-ary signature codes enabling error correction.
[ { "version": "v1", "created": "Tue, 21 Jun 2022 21:17:49 GMT" }, { "version": "v2", "created": "Sat, 23 Jul 2022 07:40:16 GMT" } ]
2022-07-26T00:00:00
[ [ "Erdoğan", "Gökberk", "" ], [ "Maringer", "Georg", "" ], [ "Polyanskii", "Nikita", "" ] ]
new_dataset
0.986381
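To make the defining property in the record above concrete, here is a toy Python sketch of a signature code for a noiseless adder channel: any subset of codewords can be uniquely reconstructed from their integer sum. The trivial identity code used here has length k = n and corrects no errors, so it only illustrates the property; it is not the paper's construction, which achieves a much shorter length on the order of n / log n together with error correction.

import numpy as np
from itertools import combinations

def identity_signature_code(n):
    # Trivial length-n signature code: user i is assigned the indicator vector e_i.
    return np.eye(n, dtype=int)

def decode(channel_output):
    # On a noiseless adder channel, the nonzero coordinates of the integer sum
    # identify exactly which users transmitted.
    return set(np.flatnonzero(channel_output).tolist())

# Exhaustively check unique reconstruction for every subset of 6 users.
C = identity_signature_code(6)
for r in range(7):
    for subset in combinations(range(6), r):
        y = C[list(subset)].sum(axis=0)  # adder channel: componentwise integer sum
        assert decode(y) == set(subset)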
2207.00642
Leon Abdillah
Rahayu Agustina, Leon Andretti Abdillah
Analysis of User Satisfaction with the Bintang Cash & Credit Application Using the End User Computing Satisfaction (EUCS) Method
10 pages, conference (2021). arXiv admin note: substantial text overlap with arXiv:2207.00006
null
null
null
cs.HC cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The use of Android application technology has advanced rapidly in recent years, making it a popular medium for distributing information across a variety of industries, including e-commerce, that consumers may access at any time and from any location. The Bintang Cash & Credit store in Palembang is one of the stores that has already adopted an Android application. In EUCS there are seven variables: content, accuracy, format, ease of use, timeliness, security, and speed of response. The data for this research were collected by distributing questionnaires to 95 respondents using a random sampling technique. The data obtained were then processed using SPSS version 25 software. The data analysis method used was a quantitative analysis method employing validity and reliability tests, classical assumption tests, multiple regression tests, and hypothesis testing. The results of this study show a positive influence on the satisfaction of users of the Bintang Cash & Credit application.
[ { "version": "v1", "created": "Mon, 27 Jun 2022 07:10:00 GMT" } ]
2022-07-26T00:00:00
[ [ "Agustina", "Rahayu", "" ], [ "Abdillah", "Leon Andretti", "" ] ]
new_dataset
0.968985
2207.07861
Hongtao Wen
Hongtao Wen, Jianhang Yan, Wanli Peng, Yi Sun
TransGrasp: Grasp Pose Estimation of a Category of Objects by Transferring Grasps from Only One Labeled Instance
Accepted to European Conference on Computer Vision (ECCV) 2022
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Grasp pose estimation is an important issue for robots interacting with the real world. However, most existing methods require exact 3D object models to be available beforehand or a large amount of grasp annotations for training. To avoid these problems, we propose TransGrasp, a category-level grasp pose estimation method that predicts grasp poses of a category of objects by labeling only one object instance. Specifically, we perform grasp pose transfer across a category of objects based on their shape correspondences and propose a grasp pose refinement module to further fine-tune the grasp poses of grippers so as to ensure successful grasps. Experiments demonstrate the effectiveness of our method in achieving high-quality grasps with the transferred grasp poses. Our code is available at https://github.com/yanjh97/TransGrasp.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 07:27:27 GMT" }, { "version": "v2", "created": "Wed, 20 Jul 2022 02:44:56 GMT" }, { "version": "v3", "created": "Mon, 25 Jul 2022 07:46:20 GMT" } ]
2022-07-26T00:00:00
[ [ "Wen", "Hongtao", "" ], [ "Yan", "Jianhang", "" ], [ "Peng", "Wanli", "" ], [ "Sun", "Yi", "" ] ]
new_dataset
0.972572
2207.08631
Chao Chen
Chao Chen, Yu-Shen Liu, Zhizhong Han
Latent Partition Implicit with Surface Codes for 3D Representation
20 pages, 14 figures. Accepted by ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep implicit functions have shown remarkable shape modeling ability in various 3D computer vision tasks. One drawback is that it is hard for them to represent a 3D shape as multiple parts. Current solutions learn various primitives and blend the primitives directly in the spatial space, which still struggle to approximate the 3D shape accurately. To resolve this problem, we introduce a novel implicit representation to represent a single 3D shape as a set of parts in the latent space, towards both highly accurate and plausibly interpretable shape modeling. Our insight here is that both the part learning and the part blending can be conducted much easier in the latent space than in the spatial space. We name our method Latent Partition Implicit (LPI), because of its ability of casting the global shape modeling into multiple local part modeling, which partitions the global shape unity. LPI represents a shape as Signed Distance Functions (SDFs) using surface codes. Each surface code is a latent code representing a part whose center is on the surface, which enables us to flexibly employ intrinsic attributes of shapes or additional surface properties. Eventually, LPI can reconstruct both the shape and the parts on the shape, both of which are plausible meshes. LPI is a multi-level representation, which can partition a shape into different numbers of parts after training. LPI can be learned without ground truth signed distances, point normals or any supervision for part partition. LPI outperforms the latest methods under the widely used benchmarks in terms of reconstruction accuracy and modeling interpretability. Our code, data and models are available at https://github.com/chenchao15/LPI.
[ { "version": "v1", "created": "Mon, 18 Jul 2022 14:24:46 GMT" }, { "version": "v2", "created": "Thu, 21 Jul 2022 02:22:32 GMT" }, { "version": "v3", "created": "Sat, 23 Jul 2022 08:22:27 GMT" } ]
2022-07-26T00:00:00
[ [ "Chen", "Chao", "" ], [ "Liu", "Yu-Shen", "" ], [ "Han", "Zhizhong", "" ] ]
new_dataset
0.975097
2207.11280
Gerasimos Lampouras
Fenia Christopoulou, Gerasimos Lampouras, Milan Gritta, Guchun Zhang, Yinpeng Guo, Zhongqi Li, Qi Zhang, Meng Xiao, Bo Shen, Lin Li, Hao Yu, Li Yan, Pingyi Zhou, Xin Wang, Yuchi Ma, Ignacio Iacobacci, Yasheng Wang, Guangtai Liang, Jiansheng Wei, Xin Jiang, Qianxiang Wang, Qun Liu
PanGu-Coder: Program Synthesis with Function-Level Language Modeling
27 pages
null
null
null
cs.LG cs.AI cs.CL cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present PanGu-Coder, a pretrained decoder-only language model adopting the PanGu-Alpha architecture for text-to-code generation, i.e. the synthesis of programming language solutions given a natural language problem description. We train PanGu-Coder using a two-stage strategy: the first stage employs Causal Language Modelling (CLM) to pre-train on raw programming language data, while the second stage uses a combination of Causal Language Modelling and Masked Language Modelling (MLM) training objectives that focus on the downstream task of text-to-code generation and train on loosely curated pairs of natural language program definitions and code functions. Finally, we discuss PanGu-Coder-FT, which is fine-tuned on a combination of competitive programming problems and code with continuous integration tests. We evaluate PanGu-Coder with a focus on whether it generates functionally correct programs and demonstrate that it achieves equivalent or better performance than similarly sized models, such as Codex, while attending over a smaller context window and training on less data.
[ { "version": "v1", "created": "Fri, 22 Jul 2022 18:08:16 GMT" } ]
2022-07-26T00:00:00
[ [ "Christopoulou", "Fenia", "" ], [ "Lampouras", "Gerasimos", "" ], [ "Gritta", "Milan", "" ], [ "Zhang", "Guchun", "" ], [ "Guo", "Yinpeng", "" ], [ "Li", "Zhongqi", "" ], [ "Zhang", "Qi", "" ], [ "Xiao", "Meng", "" ], [ "Shen", "Bo", "" ], [ "Li", "Lin", "" ], [ "Yu", "Hao", "" ], [ "Yan", "Li", "" ], [ "Zhou", "Pingyi", "" ], [ "Wang", "Xin", "" ], [ "Ma", "Yuchi", "" ], [ "Iacobacci", "Ignacio", "" ], [ "Wang", "Yasheng", "" ], [ "Liang", "Guangtai", "" ], [ "Wei", "Jiansheng", "" ], [ "Jiang", "Xin", "" ], [ "Wang", "Qianxiang", "" ], [ "Liu", "Qun", "" ] ]
new_dataset
0.999734
2207.11300
Stefan Bosse
Stefan Bosse
JAM: The JavaScript Agent Machine for Distributed Computing and Simulation with reactive and mobile Multi-agent Systems -- A Technical Report
null
null
null
null
cs.AI cs.DC cs.HC cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Agent-based modelling (ABM), simulation (ABS), and distributed computation (ABC) are established methods. The Internet and Web-based technologies are suitable carriers. This paper is a technical report, with some tutorial aspects, on the JavaScript Agent Machine (JAM) platform and the programming of agents with AgentJS, a sub-set of the widely used JavaScript programming language for the programming of mobile, state-based, reactive agents. In addition to explaining the motivation for particular design choices and introducing core concepts of the architecture and the programming of agents in JavaScript, short examples illustrate the power of the JAM platform and its components for the deployment of large-scale multi-agent systems in strongly heterogeneous environments like the Internet. JAM is suitable for deployment in strongly heterogeneous and mobile environments. Finally, JAM can be used for ABC as well as for ABS in a unified methodology, ultimately enabling mobile crowd sensing coupled with simulation (ABS).
[ { "version": "v1", "created": "Fri, 22 Jul 2022 19:01:48 GMT" } ]
2022-07-26T00:00:00
[ [ "Bosse", "Stefan", "" ] ]
new_dataset
0.996508
2207.11350
Li Zhou
Li Zhou, Gilles Barthe, Pierre-Yves Strub, Junyi Liu, Mingsheng Ying
CoqQ: Foundational Verification of Quantum Programs
null
null
null
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
CoqQ is a framework for reasoning about quantum programs in the Coq proof assistant. Its main components are: a deeply embedded quantum programming language, in which classic quantum algorithms are easily expressed, and an expressive program logic for proving properties of programs. CoqQ is foundational: the program logic is formally proved sound with respect to a denotational semantics based on state-of-the-art mathematical libraries (mathcomp and mathcomp analysis). CoqQ is also practical: assertions can use Dirac expressions, which eases concise specifications, and proofs can exploit local and parallel reasoning, which minimizes verification effort. We illustrate the applicability of CoqQ with many examples from the literature.
[ { "version": "v1", "created": "Fri, 22 Jul 2022 21:41:11 GMT" } ]
2022-07-26T00:00:00
[ [ "Zhou", "Li", "" ], [ "Barthe", "Gilles", "" ], [ "Strub", "Pierre-Yves", "" ], [ "Liu", "Junyi", "" ], [ "Ying", "Mingsheng", "" ] ]
new_dataset
0.99916
2207.11357
Molly Jane Nicholas
Molly Jane Nicholas, Eric Paulos
PREPRINT: Found Object Puppeteering as a Tool for Rapid Movement Sketching in 3D Animation
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Both expert and novice animators have a need to engage in movement sketching -- low-cost, rapid iteration on a character's movement style -- especially early on in the ideation process. Yet animation tools currently focus on low-level character control mechanisms rather than encouraging engagement with and deep observation of movement. We identify Found Object puppeteering -- where puppeteers manipulate everyday physical objects with their hands -- as a creative practice whose use of material "jigs" is uniquely well-positioned to scaffold the novice animator's developing skills. In this paper, we draw on the practice of an expert puppeteer practitioner to inform the design of a system that incorporates physical objects into the animation workflow to scaffold novices into diverse movement exploration while manipulating digital puppets.
[ { "version": "v1", "created": "Fri, 22 Jul 2022 22:22:16 GMT" } ]
2022-07-26T00:00:00
[ [ "Nicholas", "Molly Jane", "" ], [ "Paulos", "Eric", "" ] ]
new_dataset
0.998946
2207.11384
Xinyu Zhang
Xinyu Zhang, Jiangeng Huang, Yuanhao Huang, Kangyao Huang, Lei Yang, Yan Han, Li Wang, Huaping Liu, Jianxi Luo and Jun Li
Intelligent Amphibious Ground-Aerial Vehicles: State of the Art Technology for Future Transportation
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Amphibious ground-aerial vehicles fuse flying and driving modes to enable more flexible air-land mobility and have received growing attention recently. By analyzing the existing amphibious vehicles, we highlight the autonomous fly-driving functionality for the effective uses of amphibious vehicles in complex three-dimensional urban transportation systems. We review and summarize the key enabling technologies for intelligent flying-driving in existing amphibious vehicle designs, identify major technological barriers and propose potential solutions for future research and innovation. This paper aims to serve as a guide for research and development of intelligent amphibious vehicles for urban transportation toward the future.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 00:57:34 GMT" } ]
2022-07-26T00:00:00
[ [ "Zhang", "Xinyu", "" ], [ "Huang", "Jiangeng", "" ], [ "Huang", "Yuanhao", "" ], [ "Huang", "Kangyao", "" ], [ "Yang", "Lei", "" ], [ "Han", "Yan", "" ], [ "Wang", "Li", "" ], [ "Liu", "Huaping", "" ], [ "Luo", "Jianxi", "" ], [ "Li", "Jun", "" ] ]
new_dataset
0.9989
2207.11432
Christopher Mutschler
Sebastian Rietsch, Shih-Yuan Huang, Georgios Kontes, Axel Plinge, Christopher Mutschler
Driver Dojo: A Benchmark for Generalizable Reinforcement Learning for Autonomous Driving
19 pages, 8 figures
null
null
null
cs.LG cs.AI cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reinforcement learning (RL) has been shown to reach superhuman performance across a wide range of tasks. However, unlike supervised machine learning, learning strategies that generalize well to a wide range of situations remains one of the most challenging problems for real-world RL. Autonomous driving (AD) provides a multi-faceted experimental field, as it is necessary to learn the correct behavior over many variations of road layouts and large distributions of possible traffic situations, including individual driver personalities and hard-to-predict traffic events. In this paper we propose a challenging benchmark for generalizable RL for AD based on a configurable, flexible, and performant code base. Our benchmark uses a catalog of randomized scenario generators, including multiple mechanisms for road layout and traffic variations, different numerical and visual observation types, distinct action spaces, and diverse vehicle models, and allows for use under static scenario definitions. In addition to purely algorithmic insights, our application-oriented benchmark also enables a better understanding of the impact of design decisions, such as the action and observation space, on the generalizability of policies. Our benchmark aims to encourage researchers to propose solutions that are able to successfully generalize across scenarios, a task in which current RL methods fail. The code for the benchmark is available at https://github.com/seawee1/driver-dojo.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 06:29:43 GMT" } ]
2022-07-26T00:00:00
[ [ "Rietsch", "Sebastian", "" ], [ "Huang", "Shih-Yuan", "" ], [ "Kontes", "Georgios", "" ], [ "Plinge", "Axel", "" ], [ "Mutschler", "Christopher", "" ] ]
new_dataset
0.999607
2207.11455
Zhiheng Wu
Zhiheng Wu, Yue Lu, Xingyu Chen, Zhengxing Wu, Liwen Kang, and Junzhi Yu
UC-OWOD: Unknown-Classified Open World Object Detection
Accepted to ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Open World Object Detection (OWOD) is a challenging computer vision problem that requires detecting unknown objects and gradually learning the identified unknown classes. However, OWOD cannot distinguish unknown instances as belonging to multiple distinct unknown classes. In this work, we propose a novel OWOD problem called Unknown-Classified Open World Object Detection (UC-OWOD). UC-OWOD aims to detect unknown instances and classify them into different unknown classes. To this end, we formulate the problem and devise a two-stage object detector to solve UC-OWOD. First, an unknown label-aware proposal module and an unknown-discriminative classification head are used to detect known and unknown objects. Then, similarity-based unknown classification and unknown clustering refinement modules are constructed to distinguish multiple unknown classes. Moreover, two novel evaluation protocols are designed to evaluate unknown-class detection. Extensive experiments and visualizations demonstrate the effectiveness of the proposed method. Code is available at https://github.com/JohnWuzh/UC-OWOD.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 08:15:30 GMT" } ]
2022-07-26T00:00:00
[ [ "Wu", "Zhiheng", "" ], [ "Lu", "Yue", "" ], [ "Chen", "Xingyu", "" ], [ "Wu", "Zhengxing", "" ], [ "Kang", "Liwen", "" ], [ "Yu", "Junzhi", "" ] ]
new_dataset
0.999343
2207.11466
Eran Kaufman Dr.
Eran Kaufman and Andrey Iaremenko
Anomaly Detection for Fraud in Cryptocurrency Time Series
null
null
null
null
cs.LG cs.CR
http://creativecommons.org/licenses/by/4.0/
Since the inception of Bitcoin in 2009, the cryptocurrency market has grown beyond initial expectations, with daily trades exceeding $10 billion. As industries become automated, the need for an automated fraud detector becomes very apparent. Detecting anomalies in real time prevents potential accidents and economic losses. Anomaly detection in multivariate time series data poses a particular challenge because it requires simultaneous consideration of temporal dependencies and relationships between variables. Identifying an anomaly in real time is difficult precisely because of the variety of anomalous behaviors that points can exhibit: some points present pointwise global or local anomalies, while others are anomalous due to their frequency or seasonal behavior, or due to a change in the trend. In this paper, we work with real time series of Ethereum trades from specific accounts and survey a large variety of algorithms, both traditional and new. We categorize them according to their strategy and the anomalous behavior they target, and show that when bundled together into different groups, they can prove to be a good real-time detector with an alarm time of no longer than a few seconds and very high confidence.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 08:58:57 GMT" } ]
2022-07-26T00:00:00
[ [ "Kaufman", "Eran", "" ], [ "Iaremenko", "Andrey", "" ] ]
new_dataset
0.990508
2207.11467
Zuoyue Li
Zuoyue Li, Tianxing Fan, Zhenqiang Li, Zhaopeng Cui, Yoichi Sato, Marc Pollefeys, Martin R. Oswald
CompNVS: Novel View Synthesis with Scene Completion
ECCV 2022
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
We introduce a scalable framework for novel view synthesis from RGB-D images with largely incomplete scene coverage. While generative neural approaches have demonstrated spectacular results on 2D images, they have not yet achieved similar photorealistic results in combination with scene completion, where spatial 3D scene understanding is essential. To this end, we propose a generative pipeline performing on a sparse grid-based neural scene representation to complete unobserved scene parts via a learned distribution of scenes in a 2.5D-3D-2.5D manner. We process encoded image features in 3D space with a geometry completion network and a subsequent texture inpainting network to extrapolate the missing area. Photorealistic image sequences can finally be obtained via consistency-relevant differentiable rendering. Comprehensive experiments show that the graphical outputs of our method outperform the state of the art, especially within unobserved scene parts.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 09:03:13 GMT" } ]
2022-07-26T00:00:00
[ [ "Li", "Zuoyue", "" ], [ "Fan", "Tianxing", "" ], [ "Li", "Zhenqiang", "" ], [ "Cui", "Zhaopeng", "" ], [ "Sato", "Yoichi", "" ], [ "Pollefeys", "Marc", "" ], [ "Oswald", "Martin R.", "" ] ]
new_dataset
0.987249
2207.11521
Gabriel Lindel\"of
Gabriel Lindel\"of, Talayeh Aledavood, Barbara Keller
Vaccine Discourse on Twitter During the COVID-19 Pandemic
17 pages, 7 figures
null
10.2196/41319
null
cs.CY cs.CL cs.SI
http://creativecommons.org/licenses/by/4.0/
Since the onset of the COVID-19 pandemic, vaccines have been an important topic in public discourse. The discussions around vaccines are polarized, as some see them as an important measure to end the pandemic, while others are hesitant or find them harmful. This study investigates posts related to COVID-19 vaccines on Twitter and focuses on those that have a negative stance toward vaccines. A dataset of 16,713,238 English tweets related to COVID-19 vaccines was collected, covering the period from March 1, 2020, to July 31, 2021. We used the Scikit-learn Python library to apply a support vector machine (SVM) classifier to identify the tweets with a negative stance toward the COVID-19 vaccines. A total of 5,163 tweets were used to train the classifier, out of which a subset of 2,484 tweets were manually annotated by us and made publicly available. We used the BERTopic model to extract and investigate the topics discussed within the negative tweets and how they changed over time. We show that the negativity with respect to COVID-19 vaccines has decreased over time along with the vaccine roll-outs. We identify 37 topics of discussion and present their respective importance over time. We show that popular topics consist of conspiratorial discussions such as 5G towers and microchips, but also contain legitimate concerns around vaccination safety and side effects as well as concerns about policies. Our study shows that even unpopular opinions or conspiracy theories can become widespread when paired with a widely popular discussion topic such as COVID-19 vaccines. Understanding the concerns and the discussed topics, and how they change over time, is essential for policymakers and public health authorities to provide better and more timely information and policies and to facilitate vaccination of the population in similar future crises.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 13:50:51 GMT" } ]
2022-07-26T00:00:00
[ [ "Lindelöf", "Gabriel", "" ], [ "Aledavood", "Talayeh", "" ], [ "Keller", "Barbara", "" ] ]
new_dataset
0.999809
2207.11528
Miguel Arana-Catania
M. Arana-Catania, F.A. Van Lier, Rob Procter
Supporting peace negotiations in the Yemen war through machine learning
28 pages, 16 figures, 2 tables. An earlier version of this paper was presented at the Data for Policy Conference, September 2021. Current version to appear in the Data & Policy journal
null
null
null
cs.CL cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Today's conflicts are becoming increasingly complex, fluid and fragmented, often involving a host of national and international actors with multiple and often divergent interests. This development poses significant challenges for conflict mediation, as mediators struggle to make sense of conflict dynamics, such as the range of conflict parties and the evolution of their political positions, the distinction between relevant and less relevant actors in peace-making, or the identification of key conflict issues and their interdependence. International peace efforts appear ill-equipped to successfully address these challenges. While technology is already being experimented with and used in a range of conflict-related fields, such as conflict prediction or information gathering, less attention has been given to how technology can contribute to conflict mediation. This case study contributes to emerging research on the use of state-of-the-art machine learning technologies and techniques in conflict mediation processes. Using dialogue transcripts from peace negotiations in Yemen, this study shows how machine learning can effectively support mediation teams by providing them with tools for knowledge management, extraction and conflict analysis. Apart from illustrating the potential of machine learning tools in conflict mediation, the paper also emphasises the importance of an interdisciplinary and participatory co-creation methodology for the development of context-sensitive and targeted tools and for ensuring meaningful and responsible implementation.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 14:24:38 GMT" } ]
2022-07-26T00:00:00
[ [ "Arana-Catania", "M.", "" ], [ "Van Lier", "F. A.", "" ], [ "Procter", "Rob", "" ] ]
new_dataset
0.999445
2207.11537
Aryslan Malik
Jared Herron, Daniel Lopez, Jarred Jordan, Jillian Rudy, Aryslan Malik, Daniel Posada, Mehran Andalibi, Troy Henderson
RGB-D Robotic Pose Estimation For a Servicing Robotic Arm
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
A large number of robotic and human-assisted missions to the Moon and Mars are forecast. NASA's efforts to learn about the geology and makeup of these celestial bodies rely heavily on the use of robotic arms. Safety and redundancy will be crucial when humans are working alongside the robotic explorers. Additionally, robotic arms are crucial to satellite servicing and planned orbital debris mitigation missions. The goal of this work is to create a custom Computer Vision (CV) based Artificial Neural Network (ANN) that can rapidly identify the posture of a 7 Degree of Freedom (DoF) robotic arm from a single RGB-D image - just as humans can easily identify if an arm is pointing in some general direction. The Sawyer robotic arm is used for developing and training this intelligent algorithm. Since Sawyer's joint space spans 7 dimensions, it is an insurmountable task to cover the entire joint configuration space. In this work, orthogonal arrays are used, similar to the Taguchi method, to efficiently span the joint space with a minimal number of training images. This ``optimally'' generated database is used to train the custom ANN, and its degree of accuracy is on average equal to twice the smallest joint displacement step used for database generation. A pre-trained ANN will be useful for estimating the postures of robotic manipulators used on space stations, spacecraft, and rovers as an auxiliary tool or for contingency plans.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 15:03:16 GMT" } ]
2022-07-26T00:00:00
[ [ "Herron", "Jared", "" ], [ "Lopez", "Daniel", "" ], [ "Jordan", "Jarred", "" ], [ "Rudy", "Jillian", "" ], [ "Malik", "Aryslan", "" ], [ "Posada", "Daniel", "" ], [ "Andalibi", "Mehran", "" ], [ "Henderson", "Troy", "" ] ]
new_dataset
0.996653
2207.11541
Jingwei Wang
Tianle Ni, Jingwei Wang, Yunlong Ma, Shuang Wang, Min Liu, and Weiming Shen
FastATDC: Fast Anomalous Trajectory Detection and Classification
6 pages, 4 figures, accepted by 2022 IEEE 18th International Conference on Automation Science and Engineering (CASE)
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Automated detection of anomalous trajectories is an important problem with considerable applications in intelligent transportation systems. Many existing studies have focused on distinguishing anomalous trajectories from normal trajectories, ignoring the large differences between anomalous trajectories. A recent study has made great progress in identifying abnormal trajectory patterns and proposed a two-stage algorithm for anomalous trajectory detection and classification (ATDC). This algorithm has excellent performance but suffers from a few limitations, such as high time complexity and poor interpretability. Here, we present a careful theoretical and empirical analysis of the ATDC algorithm, showing that the calculation of anomaly scores in both stages can be simplified, and that the second stage of the algorithm is much more important than the first stage. Hence, we develop the FastATDC algorithm, which introduces a random sampling strategy in both stages. Experimental results show that FastATDC is 10 to 20 times faster than ATDC on real datasets. Moreover, FastATDC outperforms the baseline algorithms and is comparable to the ATDC algorithm.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 15:32:33 GMT" } ]
2022-07-26T00:00:00
[ [ "Ni", "Tianle", "" ], [ "Wang", "Jingwei", "" ], [ "Ma", "Yunlong", "" ], [ "Wang", "Shuang", "" ], [ "Liu", "Min", "" ], [ "Shen", "Weiming", "" ] ]
new_dataset
0.987626
2207.11545
Hanzhao Wang
Hanzhao Wang, Xiaocheng Li, Kalyan Talluri
Learning to Sell a Focal-ancillary Combination
null
null
null
null
cs.LG cs.IR econ.GN q-fin.EC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A number of products are sold in the following sequence: first a focal product is shown, and if the customer purchases it, one or more ancillary products are displayed for purchase. A prominent example is the sale of an airline ticket, where first the flight is shown and, once it is chosen, a number of ancillaries such as cabin or hold bag options, seat selection, insurance, etc., are presented. The firm has to decide on a sale format -- whether to sell them in sequence unbundled, or together as a bundle -- and how to price the focal and ancillary products, separately or as a bundle. Since the ancillary is considered by the customer only after the purchase of the focal product, the sale strategy chosen by the firm creates an information and learning dependency between the products: for instance, offering only a bundle would preclude learning customers' valuations for the focal and ancillary products individually. In this paper we study learning strategies for such focal and ancillary item combinations under the following scenarios: (a) pure unbundling to all customers, (b) a personalized mechanism, where, depending on some observed features of the customers, the two products are presented and priced as a bundle or in sequence, and (c) initially unbundling (for all customers) and switching to bundling (if more profitable) permanently once during the horizon. We design pricing and decision algorithms for all three scenarios, with regret upper bounded by $O(d \sqrt{T} \log T)$, and an optimal switching time for the third scenario.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 15:55:16 GMT" } ]
2022-07-26T00:00:00
[ [ "Wang", "Hanzhao", "" ], [ "Li", "Xiaocheng", "" ], [ "Talluri", "Kalyan", "" ] ]
new_dataset
0.990597
2207.11565
Marcin Pietron
Michal Karwatowski and Marcin Pietron
Context based lemmatizer for Polish language
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Lemmatization is the process of grouping together the inflected forms of a word so they can be analyzed as a single item, identified by the word's lemma, or dictionary form. In computational linguistics, lemmatization is the algorithmic process of determining the lemma of a word based on its intended meaning. Unlike stemming, lemmatization depends on correctly identifying the intended part of speech and meaning of a word in a sentence, as well as within the larger context surrounding that sentence. As a result, developing an efficient lemmatization algorithm is a complex task. In recent years, deep learning models used for this task have been observed to outperform other methods, including classical machine learning algorithms. In this paper, a Polish lemmatizer based on the Google T5 model is presented. The training was run with different context lengths. The model achieves the best results for the Polish lemmatization task.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 18:02:16 GMT" } ]
2022-07-26T00:00:00
[ [ "Karwatowski", "Michal", "" ], [ "Pietron", "Marcin", "" ] ]
new_dataset
0.964026
2207.11577
Mostafa Shabani
Mostafa Shabani, Dat Thanh Tran, Juho Kanniainen, Alexandros Iosifidis
Augmented Bilinear Network for Incremental Multi-Stock Time-Series Classification
null
null
null
null
cs.LG q-fin.ST
http://creativecommons.org/licenses/by/4.0/
Deep Learning models have become dominant in tackling financial time-series analysis problems, displacing conventional machine learning and statistical methods. Most often, a model trained for one market or security cannot be directly applied to another market or security due to differences inherent in the market conditions. In addition, as the market evolves through time, it is necessary to update existing models or train new ones when new data is made available. This scenario, which is inherent in most financial forecasting applications, naturally raises the following research question: how can a pre-trained model be efficiently adapted to a new set of data while retaining performance on the old data, especially when the old data is not accessible? In this paper, we propose a method to efficiently retain the knowledge available in a neural network pre-trained on a set of securities and adapt it to achieve high performance on new ones. In our method, the prior knowledge encoded in a pre-trained neural network is maintained by keeping existing connections fixed, and this knowledge is adjusted for the new securities by a set of augmented connections, which are optimized using the new data. The auxiliary connections are constrained to be of low rank. This not only allows us to rapidly optimize for the new task but also reduces the storage and run-time complexity during the deployment phase. The efficiency of our approach is empirically validated in the stock mid-price movement prediction problem using a large-scale limit order book dataset. Experimental results show that our approach enhances prediction performance as well as reduces the overall number of network parameters.
[ { "version": "v1", "created": "Sat, 23 Jul 2022 18:54:10 GMT" } ]
2022-07-26T00:00:00
[ [ "Shabani", "Mostafa", "" ], [ "Tran", "Dat Thanh", "" ], [ "Kanniainen", "Juho", "" ], [ "Iosifidis", "Alexandros", "" ] ]
new_dataset
0.955063