Dataset schema (column / type / observed range or values):

id              stringlengths   9-10
submitter       stringlengths   2-52
authors         stringlengths   4-6.51k
title           stringlengths   4-246
comments        stringlengths   1-523
journal-ref     stringlengths   4-345
doi             stringlengths   11-120
report-no       stringlengths   2-243
categories      stringlengths   5-98
license         stringclasses   9 values
abstract        stringlengths   33-3.33k
versions        list
update_date     timestamp[s]
authors_parsed  list
prediction      stringclasses   1 value
probability     float64         0.95-1
2207.05466
Bruno Veloso
Bruno Veloso, Jo\~ao Gama, Rita P. Ribeiro, Pedro M. Pereira
A Benchmark dataset for predictive maintenance
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper describes the MetroPT data set, an outcome of an eXplainable Predictive Maintenance (XPM) project with an urban metro public transportation service in Porto, Portugal. The data was collected in 2022 with the aim of evaluating machine learning methods for online anomaly detection and failure prediction. By capturing several analog sensor signals (pressure, temperature, current consumption), digital signals (control signals, discrete signals), and GPS information (latitude, longitude, and speed), we provide a dataset that can be easily used to evaluate online machine learning methods. This dataset contains some interesting characteristics and can be a good benchmark for predictive maintenance models.
[ { "version": "v1", "created": "Tue, 12 Jul 2022 11:25:53 GMT" }, { "version": "v2", "created": "Fri, 15 Jul 2022 15:36:03 GMT" }, { "version": "v3", "created": "Mon, 18 Jul 2022 09:34:24 GMT" } ]
2022-07-19T00:00:00
[ [ "Veloso", "Bruno", "" ], [ "Gama", "João", "" ], [ "Ribeiro", "Rita P.", "" ], [ "Pereira", "Pedro M.", "" ] ]
new_dataset
0.999716
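The MetroPT record above is pitched at online anomaly detection over streaming sensor signals. As a rough illustration of the kind of online method the dataset is meant to benchmark (not the XPM project's own approach), the sketch below keeps exponentially weighted running statistics for a single sensor and flags readings whose z-score exceeds a threshold; the smoothing factor, threshold, and sample values are all illustrative assumptions.

```python
# Minimal online anomaly detector: exponentially weighted mean/variance for
# one sensor stream, flagging readings far from the running estimate.
# Illustrative baseline only, not the MetroPT paper's method.

class StreamingZScore:
    def __init__(self, alpha=0.01, threshold=3.0):
        self.alpha = alpha          # smoothing factor for the running stats
        self.threshold = threshold  # z-score above which we flag an anomaly
        self.mean = None
        self.var = 1.0              # generous initial variance avoids early false alarms

    def update(self, x):
        """Consume one reading; return (is_anomaly, z_score)."""
        if self.mean is None:       # first observation initializes the mean
            self.mean = x
            return False, 0.0
        z = abs(x - self.mean) / (self.var ** 0.5 + 1e-12)
        delta = x - self.mean       # score first, then update (test-then-train)
        self.mean += self.alpha * delta
        self.var = (1 - self.alpha) * self.var + self.alpha * delta ** 2
        return z > self.threshold, z

# Usage on a hypothetical pressure stream:
detector = StreamingZScore()
for reading in [7.9, 8.0, 8.1, 8.0, 11.5, 8.0]:
    flagged, z = detector.update(reading)
    if flagged:
        print(f"anomalous reading {reading} (z={z:.1f})")
```

Scoring before updating the statistics mirrors the test-then-train protocol commonly used to evaluate streaming detectors.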
2207.06823
Harinath Krishnamoorthy
Nandhinee PR, Harinath Krishnamoorthy, Koushik Srivatsan, Anil Goyal, Sudarsun Santhiappan
DEXTER: An end-to-end system to extract table contents from electronic medical health documents
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we propose DEXTER, an end-to-end system to extract information from tables present in medical health documents, such as electronic health records (EHR) and explanation of benefits (EOB). DEXTER consists of four sub-system stages: i) table detection, ii) table type classification, iii) cell detection, and iv) cell content extraction. We propose a two-stage transfer learning-based approach using the CDeC-Net architecture along with Non-Maximal Suppression for table detection. We design a conventional computer vision-based approach for table type classification and cell detection using parameterized kernels based on image size for detecting rows and columns. Finally, we extract the text from the detected cells using the pre-existing OCR engine Tesseract. To evaluate our system, we manually annotated a sample of a real-world medical dataset (referred to as Meddata) consisting of wide variations of documents (in terms of appearance) covering different table structures, such as bordered, partially bordered, borderless, or coloured tables. We experimentally show that DEXTER outperforms the commercially available Amazon Textract and Microsoft Azure Form Recognizer systems on the annotated real-world medical dataset.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 11:27:02 GMT" }, { "version": "v2", "created": "Mon, 18 Jul 2022 06:52:21 GMT" } ]
2022-07-19T00:00:00
[ [ "PR", "Nandhinee", "" ], [ "Krishnamoorthy", "Harinath", "" ], [ "Srivatsan", "Koushik", "" ], [ "Goyal", "Anil", "" ], [ "Santhiappan", "Sudarsun", "" ] ]
new_dataset
0.999502
2207.07115
Ho Kei Cheng
Ho Kei Cheng and Alexander G. Schwing
XMem: Long-Term Video Object Segmentation with an Atkinson-Shiffrin Memory Model
Accepted to ECCV 2022. Project page: https://hkchengrex.github.io/XMem
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We present XMem, a video object segmentation architecture for long videos with unified feature memory stores inspired by the Atkinson-Shiffrin memory model. Prior work on video object segmentation typically only uses one type of feature memory. For videos longer than a minute, a single feature memory model tightly links memory consumption and accuracy. In contrast, following the Atkinson-Shiffrin model, we develop an architecture that incorporates multiple independent yet deeply-connected feature memory stores: a rapidly updated sensory memory, a high-resolution working memory, and a compact and thus sustained long-term memory. Crucially, we develop a memory potentiation algorithm that routinely consolidates actively used working memory elements into the long-term memory, which avoids memory explosion and minimizes performance decay for long-term prediction. Combined with a new memory reading mechanism, XMem greatly exceeds state-of-the-art performance on long-video datasets while being on par with state-of-the-art methods (that do not work on long videos) on short-video datasets. Code is available at https://hkchengrex.github.io/XMem
[ { "version": "v1", "created": "Thu, 14 Jul 2022 17:59:37 GMT" }, { "version": "v2", "created": "Mon, 18 Jul 2022 17:56:53 GMT" } ]
2022-07-19T00:00:00
[ [ "Cheng", "Ho Kei", "" ], [ "Schwing", "Alexander G.", "" ] ]
new_dataset
0.999707
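The XMem abstract hinges on a consolidation step that moves actively used working-memory elements into a compact long-term store to bound memory growth. The following NumPy sketch imitates that bookkeeping with simple usage counters and top-k promotion; the shapes, budgets, and selection rule are assumptions for illustration, not the paper's algorithm.

```python
import numpy as np

def consolidate(working, usage, long_term, max_working=64, keep=16):
    """working: (N, D) working-memory features; usage: (N,) read counts;
    long_term: (M, D) compact long-term store. Returns updated stores."""
    if len(working) <= max_working:
        return working, usage, long_term
    order = np.argsort(-usage)                    # most-used entries first
    promoted = working[order[:keep]]              # consolidate the heavy hitters
    long_term = np.concatenate([long_term, promoted], axis=0)
    survivors = order[keep:keep + max_working]    # bound the working store
    return working[survivors], usage[survivors], long_term

# Usage with made-up features and read counts:
work = np.random.randn(100, 8)
use = np.random.randint(0, 50, size=100)
lt = np.empty((0, 8))
work, use, lt = consolidate(work, use, lt)
print(work.shape, lt.shape)   # (64, 8) (16, 8)
```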
2207.07490
Jane (Xue) Tan
Jane (Xue) Tan, Yong Tan
Crypto Rewards in Fundraising: Evidence from Crypto Donations to Ukraine
null
null
null
null
cs.CY econ.GN q-fin.EC
http://creativecommons.org/licenses/by/4.0/
Extrinsic incentives such as a conditional thank-you gift have shown both positive and negative impacts on charitable fundraising. Leveraging the crypto donations to a Ukrainian fundraising plea that accepts Ether (i.e., the currency of the Ethereum blockchain) and Bitcoin (i.e., the currency of the Bitcoin blockchain) over a seven-day period, we analyze the impact of crypto rewards that lasted for more than 24 hours. Crypto rewards are newly minted tokens that are usually valueless initially and grow in value if the corresponding cause is well received. Separately, we find that crypto rewards have a positive impact on the donation count but a negative impact on the average donation size for donations from both blockchains. Comparatively, we further find that the crypto rewards lead to an 812.48% stronger donation count increase for Ethereum than Bitcoin, given that the crypto rewards are more likely to be issued on the Ethereum blockchain, which has higher programmability to support smart contracts. We also find a 30.1% stronger decrease in average donation amount from Ethereum for small donations ($\leq \$250$); the rewards pose similar impacts on the average donation size for the two blockchains for large donations ($>\$250$). Our study is the first work to look into crypto rewards as incentives for fundraising. Our findings indicate that the positive effect of crypto rewards is more likely to manifest in donation count, and the negative effect of crypto rewards is more likely to manifest in donation size.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 14:22:00 GMT" }, { "version": "v2", "created": "Mon, 18 Jul 2022 14:16:51 GMT" } ]
2022-07-19T00:00:00
[ [ "Jane", "", "", "Xue" ], [ "Tan", "", "" ], [ "Tan", "Yong", "" ] ]
new_dataset
0.987711
2207.07712
Jason Wu
Jason Wu and Titus Barik and Xiaoyi Zhang and Colin Lea and Jeffrey Nichols and Jeffrey P. Bigham
Reflow: Automatically Improving Touch Interactions in Mobile Applications through Pixel-based Refinements
null
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Touch is the primary way that users interact with smartphones. However, building mobile user interfaces where touch interactions work well for all users is a difficult problem, because users have different abilities and preferences. We propose a system, Reflow, which automatically applies small, personalized UI adaptations, called refinements, to mobile app screens to improve touch efficiency. Reflow uses a pixel-based strategy to work with existing applications, and improves touch efficiency while minimally disrupting the design intent of the original application. Our system optimizes a UI by (i) extracting its layout from its screenshot, (ii) refining its layout, and (iii) re-rendering the UI to reflect these modifications. We conducted a user study with 10 participants and a heuristic evaluation with 6 experts and found that applications optimized by Reflow led to, on average, 9% faster selection time with minimal layout disruption. The results demonstrate that Reflow's refinements are useful UI adaptations that improve touch interactions.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 19:11:49 GMT" } ]
2022-07-19T00:00:00
[ [ "Wu", "Jason", "" ], [ "Barik", "Titus", "" ], [ "Zhang", "Xiaoyi", "" ], [ "Lea", "Colin", "" ], [ "Nichols", "Jeffrey", "" ], [ "Bigham", "Jeffrey P.", "" ] ]
new_dataset
0.999418
2207.07729
Markus Nemitz
Savita V. Kendre, Gus. T. Teran, Lauryn Whiteside, Tyler Looney, Ryley Wheelock, Surya Ghai, and Markus P. Nemitz
Printable Flexible Robots for Remote Learning
9 pages, 4 figures, peer reviewed and presented paper at American Society of Engineering Education, April 22-23rd, 2022 - Wentworth Institute of Technology
null
null
null
cs.RO cs.CY
http://creativecommons.org/licenses/by/4.0/
The COVID-19 pandemic has revealed the importance of digital fabrication to enable online learning, which remains a challenge for robotics courses. We introduce a teaching methodology that allows students to participate remotely in a hands-on robotics course involving the design and fabrication of robots. Our methodology employs 3D printing techniques with flexible filaments to create innovative soft robots; robots are made from flexible, as opposed to rigid, materials. Students design flexible robotic components such as actuators, sensors, and controllers using CAD software, upload their designs to a remote 3D printing station, monitor the print with a web camera, and inspect the components with lab staff before being mailed for testing and assembly. At the end of the course, students will have iterated through several designs and created fluidically-driven soft robots. Our remote teaching methodology enables educators to utilize 3D printing resources to teach soft robotics and cultivate creativity among students to design novel and innovative robots. Our methodology seeks to democratize robotics engineering by decoupling hands-on learning experiences from expensive equipment in the learning environment.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 19:51:54 GMT" } ]
2022-07-19T00:00:00
[ [ "Kendre", "Savita V.", "" ], [ "Teran", "Gus. T.", "" ], [ "Whiteside", "Lauryn", "" ], [ "Looney", "Tyler", "" ], [ "Wheelock", "Ryley", "" ], [ "Ghai", "Surya", "" ], [ "Nemitz", "Markus P.", "" ] ]
new_dataset
0.998596
2207.07739
Chen Liu
Chen Liu, Xiaomeng Dong, Michael Potter, Hsi-Ming Chang, Ravi Soni
Adversarial Focal Loss: Asking Your Discriminator for Hard Examples
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Focal Loss has reached incredible popularity as it uses a simple technique to identify and utilize hard examples to achieve better performance on classification. However, this method does not easily generalize outside of classification tasks, such as in keypoint detection. In this paper, we propose a novel adaptation of Focal Loss for keypoint detection tasks, called Adversarial Focal Loss (AFL). AFL is not only semantically analogous to Focal Loss, but also works as a plug-and-chug upgrade for arbitrary loss functions. While Focal Loss requires output from a classifier, AFL leverages a separate adversarial network to produce a difficulty score for each input. This difficulty score can then be used to dynamically prioritize learning on hard examples, even in the absence of a classifier. In this work, we show AFL's effectiveness in enhancing existing methods in keypoint detection and verify its capability to re-weigh examples based on difficulty.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 20:26:32 GMT" } ]
2022-07-19T00:00:00
[ [ "Liu", "Chen", "" ], [ "Dong", "Xiaomeng", "" ], [ "Potter", "Michael", "" ], [ "Chang", "Hsi-Ming", "" ], [ "Soni", "Ravi", "" ] ]
new_dataset
0.988122
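The AFL abstract describes swapping the classifier confidence in Focal Loss's modulating factor for a difficulty score produced by an adversarial network. A minimal PyTorch sketch of that re-weighting, assuming the score lies in [0, 1] with 1 meaning hard; the exponent gamma and the detached scorer are illustrative choices, not the paper's exact formulation:

```python
import torch

def adversarial_focal_loss(base_loss, difficulty, gamma=2.0):
    """base_loss: (N,) per-example losses from any task loss.
    difficulty: (N,) scores in [0, 1] from an adversarial network,
    with 1 = hard example. Harder examples are up-weighted, echoing
    Focal Loss's (1 - p)^gamma modulating factor."""
    weights = difficulty.detach() ** gamma   # stop gradients into the scorer
    return (weights * base_loss).mean()

# Example: per-keypoint regression losses re-weighted by difficulty.
losses = torch.tensor([0.2, 0.9, 0.1])
difficulty = torch.tensor([0.1, 0.95, 0.3])
print(adversarial_focal_loss(losses, difficulty))
```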
2207.07771
Lei Zhang
Lei Zhang, Tianying Chen, Olivia Seow, Tim Chong, Sven Kratz, Yu Jiang Tham, Andr\'es Monroy-Hern\'andez, Rajan Vaish, Fannie Liu
Auggie: Encouraging Effortful Communication through Handcrafted Digital Experiences
To appear at the 25th ACM Conference On Computer-Supported Cooperative Work And Social Computing (CSCW '22). 25 pages
null
null
null
cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
Digital communication is often brisk and automated. From auto-completed messages to "likes," research has shown that such lightweight interactions can affect perceptions of authenticity and closeness. On the other hand, effort in relationships can forge emotional bonds by conveying a sense of caring and is essential in building and maintaining relationships. To explore effortful communication, we designed and evaluated Auggie, an iOS app that encourages partners to create digitally handcrafted Augmented Reality (AR) experiences for each other. Auggie is centered around crafting a 3D character with photos, animated movements, drawings, and audio for someone else. We conducted a two-week-long field study with 30 participants (15 pairs), who used Auggie with their partners remotely. Our qualitative findings show that Auggie participants engaged in meaningful effort through the handcrafting process, and felt closer to their partners, although the tool may not be appropriate in all situations. We discuss design implications and future directions for systems that encourage effortful communication.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 22:31:44 GMT" } ]
2022-07-19T00:00:00
[ [ "Zhang", "Lei", "" ], [ "Chen", "Tianying", "" ], [ "Seow", "Olivia", "" ], [ "Chong", "Tim", "" ], [ "Kratz", "Sven", "" ], [ "Tham", "Yu Jiang", "" ], [ "Monroy-Hernández", "Andrés", "" ], [ "Vaish", "Rajan", "" ], [ "Liu", "Fannie", "" ] ]
new_dataset
0.995177
2207.07790
Fanglin Chen
Fanglin Chen, Xiao Liu, Bo Tang, Feiyu Xiong, Serim Hwang, and Guomian Zhuang
BCRLSP: An Offline Reinforcement Learning Framework for Sequential Targeted Promotion
8 pages, DRL4IR@SIGIR
null
null
null
cs.LG cs.IR
http://creativecommons.org/licenses/by/4.0/
We utilize an offline reinforcement learning (RL) model for sequential targeted promotion in the presence of budget constraints in a real-world business environment. In our application, the mobile app aims to boost customer retention by sending cash bonuses to customers while controlling the costs of such cash bonuses during each time period. To achieve the multi-task goal, we propose the Budget Constrained Reinforcement Learning for Sequential Promotion (BCRLSP) framework to determine the value of cash bonuses to be sent to users. We first determine the target policy and the associated Q-values that maximize the user retention rate using an RL model. A linear programming (LP) model is then added to satisfy the constraints of promotion costs. We solve the LP problem by maximizing the Q-values of actions learned from the RL model given the budget constraints. During deployment, we combine the offline RL model with the LP model to generate a robust policy under the budget constraints. Using both online and offline experiments, we demonstrate the efficacy of our approach by showing that BCRLSP achieves a higher long-term customer retention rate and a lower cost than various baselines. Taking advantage of the near real-time cost control method, the proposed framework can easily adapt to data with a noisy behavioral policy and/or meet flexible budget constraints.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 00:10:12 GMT" } ]
2022-07-19T00:00:00
[ [ "Chen", "Fanglin", "" ], [ "Liu", "Xiao", "" ], [ "Tang", "Bo", "" ], [ "Xiong", "Feiyu", "" ], [ "Hwang", "Serim", "" ], [ "Zhuang", "Guomian", "" ] ]
new_dataset
0.993019
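The BCRLSP abstract combines Q-values learned offline with a linear program that enforces the promotion budget. A toy sketch of that decision layer with scipy: pick one bonus level per user to maximize total Q subject to a spend cap, using an LP relaxation of the assignment variables. The Q-values, bonus levels, and budget below are made up, and the paper's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import linprog

Q = np.array([[0.2, 0.5, 0.7],      # Q[u, a]: 2 users x 3 bonus levels
              [0.1, 0.4, 0.9]])
bonus = np.array([0.0, 1.0, 2.0])   # cost of each bonus level
budget = 2.0
n_users, n_actions = Q.shape

c = -Q.ravel()                                       # maximize => minimize -Q
A_ub = np.tile(bonus, n_users)[None, :]              # total spend <= budget
b_ub = [budget]
A_eq = np.kron(np.eye(n_users), np.ones(n_actions))  # one action per user
b_eq = np.ones(n_users)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=[(0, 1)] * (n_users * n_actions), method="highs")
policy = res.x.reshape(n_users, n_actions)
print(policy.round(2))   # per-user action weights under the budget
```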
2207.07792
Lin Sok
Lin Sok
Hulls of special typed linear codes and constructions of new EAQECCs
13 pages
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study Euclidean and Hermitian hulls of generalized Reed-Solomon codes and twisted generalized Reed-Solomon codes, as well as the Hermitian hulls of Roth-Lempel typed codes. We present explicit constructions of MDS and AMDS linear codes whose hull dimensions are fully determined. As an application, we provide several classes of entanglement-assisted quantum error correcting codes with new parameters.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 00:27:31 GMT" } ]
2022-07-19T00:00:00
[ [ "Sok", "Lin", "" ] ]
new_dataset
0.992131
2207.07797
Lei Hsiung
Lei Hsiung, Yun-Yun Tsai, Pin-Yu Chen, Tsung-Yi Ho
CARBEN: Composite Adversarial Robustness Benchmark
IJCAI 2022 Demo Track; The demonstration is at https://hsiung.cc/CARBEN/
null
null
null
cs.CV cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
Prior literature on adversarial attack methods has mainly focused on attacking with and defending against a single threat model, e.g., perturbations bounded in an Lp ball. However, multiple threat models can be combined into composite perturbations. One such approach, composite adversarial attack (CAA), not only expands the perturbable space of the image, but also may be overlooked by current modes of robustness evaluation. This paper demonstrates how CAA's attack order affects the resulting image, and provides real-time inferences of different models, which will facilitate users' configuration of attack-level parameters and their rapid evaluation of model predictions. A leaderboard to benchmark adversarial robustness against CAA is also introduced.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 01:08:44 GMT" } ]
2022-07-19T00:00:00
[ [ "Hsiung", "Lei", "" ], [ "Tsai", "Yun-Yun", "" ], [ "Chen", "Pin-Yu", "" ], [ "Ho", "Tsung-Yi", "" ] ]
new_dataset
0.970528
2207.07835
Kevin Green
Fangzhou Yu, Ryan Batke, Jeremy Dao, Jonathan Hurst, Kevin Green, Alan Fern
Dynamic Bipedal Maneuvers through Sim-to-Real Reinforcement Learning
In review for the 2022 IEEE-RAS International Conference on Humanoid Robots. 8 pages, 8 figures, 3 tables
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For legged robots to match the athletic capabilities of humans and animals, they must not only produce robust periodic walking and running, but also seamlessly switch between nominal locomotion gaits and more specialized transient maneuvers. Despite recent advancements in the control of bipedal robots, there has been little focus on producing highly dynamic behaviors. Recent work utilizing reinforcement learning to produce policies for the control of legged robots has demonstrated success in producing robust walking behaviors. However, these learned policies have difficulty expressing a multitude of different behaviors on a single network. Inspired by conventional optimization-based control techniques for legged robots, this work applies a recurrent policy to execute four-step, 90 degree turns trained using reference data generated from optimized single rigid body model trajectories. We present a novel training framework using epilogue terminal rewards for learning specific behaviors from pre-computed trajectory data and demonstrate a successful transfer to hardware on the bipedal robot Cassie.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 04:57:59 GMT" } ]
2022-07-19T00:00:00
[ [ "Yu", "Fangzhou", "" ], [ "Batke", "Ryan", "" ], [ "Dao", "Jeremy", "" ], [ "Hurst", "Jonathan", "" ], [ "Green", "Kevin", "" ], [ "Fern", "Alan", "" ] ]
new_dataset
0.997623
2207.07836
Suman Banerjee
Mayank Singhal and Suman Banerjee
Envy-free Trip Planning in Group Trip Planning Query Problem
Accepted as a Full Paper @ 25th International Conference on Network-Based Information Systems (NBiS-2022). 12 Pages. 6 Figures
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
In recent times, the Group Trip Planning Query (henceforth referred to as GTP Query) has been one of the well-studied problems in Spatial Databases. The inputs to the problem are a road network in which the vertices represent Points-of-Interest (mentioned as POIs henceforth) grouped into different categories, the edges represent road segments, and the edge weights represent distances, along with a group of users and their source and destination locations. This problem asks to return one POI from every category such that the aggregated distance traveled by the group is minimized. As the objective is to minimize the aggregated distance, the existing solution methodologies do not consider the individual distances traveled by the group members. To address this issue, we introduce and study the \textsc{Envy Free Group Trip Planning Query} Problem. Along with the inputs of the GTP Query Problem, in this variant, we also have a threshold distance $D$ such that the aggregated distance traveled by the group is minimized and, for any pair of members, the difference between their individual distances traveled is less than or equal to $D$. However, it may happen that for a given $D$ value no such set of POIs is found. To tackle this issue, we introduce the surrogate problem, the \textsc{Envy Free Group Trip Planning Query with Minimum Additional Distance} Problem, which asks for the minimum distance to be added to $D$ to obtain at least one solution. For these problems, we design efficient solution approaches and experiment with real-world datasets. From the experiments, we observe that the proposed solution approaches lead to less aggregated distance compared to baseline methods with reasonable computational overhead.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 04:59:55 GMT" } ]
2022-07-19T00:00:00
[ [ "Singhal", "Mayank", "" ], [ "Banerjee", "Suman", "" ] ]
new_dataset
0.9909
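As a reference point for the envy-free GTP query defined above, a brute-force solver makes the two constraints concrete: enumerate one POI per category, keep only choices where every pair of members differs by at most D in individual distance, and minimize the aggregated distance. `dist` is an assumed helper returning a member's route length through the chosen POIs; the paper's solution approaches are of course far more efficient than this enumeration.

```python
from itertools import product

def envy_free_gtp(categories, members, dist, D):
    """categories: list of lists of POIs; members: list of group members;
    dist(member, choice): assumed helper giving that member's route length
    through the chosen POIs (from their source to their destination)."""
    best, best_cost = None, float("inf")
    for choice in product(*categories):          # one POI per category
        per_member = [dist(m, choice) for m in members]
        spread = max(per_member) - min(per_member)
        if spread <= D:                          # envy-freeness constraint
            cost = sum(per_member)               # aggregated group distance
            if cost < best_cost:
                best, best_cost = choice, cost
    return best, best_cost                       # best is None if D is too small
```

Returning `None` when no combination satisfies the threshold is exactly the situation that motivates the surrogate "minimum additional distance" problem in the abstract.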
2207.07852
Yuqi Liu
Yuqi Liu, Pengfei Xiong, Luhui Xu, Shengming Cao and Qin Jin
TS2-Net: Token Shift and Selection Transformer for Text-Video Retrieval
Accepted by ECCV2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Text-Video retrieval is a task of great practical value and has received increasing attention, among which learning spatial-temporal video representation is one of the research hotspots. The video encoders in state-of-the-art video retrieval models usually directly adopt pre-trained vision backbones with the network structure fixed; they therefore cannot be further improved to produce fine-grained spatial-temporal video representations. In this paper, we propose Token Shift and Selection Network (TS2-Net), a novel token shift and selection transformer architecture, which dynamically adjusts the token sequence and selects informative tokens in both temporal and spatial dimensions from input video samples. The token shift module temporally shifts the whole token features back-and-forth across adjacent frames, to preserve the complete token representation and capture subtle movements. Then the token selection module selects tokens that contribute most to local spatial semantics. Based on thorough experiments, the proposed TS2-Net achieves state-of-the-art performance on major text-video retrieval benchmarks, including new records on MSRVTT, VATEX, LSMDC, ActivityNet, and DiDeMo.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 06:50:27 GMT" } ]
2022-07-19T00:00:00
[ [ "Liu", "Yuqi", "" ], [ "Xiong", "Pengfei", "" ], [ "Xu", "Luhui", "" ], [ "Cao", "Shengming", "" ], [ "Jin", "Qin", "" ] ]
new_dataset
0.974353
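The token shift module described above moves whole token features back and forth across adjacent frames. A PyTorch sketch in the spirit of that operation; the shifted channel fraction and its placement are assumptions, not TS2-Net's exact configuration:

```python
import torch

def token_shift(x, fold_div=8):
    """x: (batch, frames, tokens, dim) video token features.
    Shifts one channel slice forward in time and another backward, so each
    token mixes features from adjacent frames; remaining channels, and the
    vacated slots at the temporal boundaries, keep their original values."""
    b, t, n, d = x.shape
    fold = d // fold_div
    out = x.clone()
    out[:, 1:, :, :fold] = x[:, :-1, :, :fold]                  # shift forward
    out[:, :-1, :, fold:2 * fold] = x[:, 1:, :, fold:2 * fold]  # shift backward
    return out

x = torch.randn(2, 4, 5, 16)
print(token_shift(x).shape)   # torch.Size([2, 4, 5, 16])
```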
2207.07869
Shunli Wang
Shunli Wang, Shuaibing Wang, Bo Jiao, Dingkang Yang, Liuzhen Su, Peng Zhai, Chixiao Chen, Lihua Zhang
CA-SpaceNet: Counterfactual Analysis for 6D Pose Estimation in Space
8 pages, 6 figures, IROS-2022 conference paper
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reliable and stable 6D pose estimation of uncooperative space objects plays an essential role in on-orbit servicing and debris removal missions. Considering that the pose estimator is sensitive to background interference, this paper proposes a counterfactual analysis framework named CA-SpaceNet to achieve robust 6D pose estimation of spaceborne targets under complicated backgrounds. Specifically, conventional methods are adopted to extract the features of the whole image in the factual case. In the counterfactual case, a non-existent image without the target but only the background is imagined. The side effect caused by background interference is reduced by counterfactual analysis, which leads to unbiased prediction in the final results. In addition, we also carry out low-bit-width quantization for CA-SpaceNet and deploy part of the framework to a Processing-In-Memory (PIM) accelerator on an FPGA. Qualitative and quantitative results demonstrate the effectiveness and efficiency of our proposed method. To the best of our knowledge, this paper applies causal inference and network quantization to the 6D pose estimation of space-borne targets for the first time. The code is available at https://github.com/Shunli-Wang/CA-SpaceNet.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 07:48:19 GMT" } ]
2022-07-19T00:00:00
[ [ "Wang", "Shunli", "" ], [ "Wang", "Shuaibing", "" ], [ "Jiao", "Bo", "" ], [ "Yang", "Dingkang", "" ], [ "Su", "Liuzhen", "" ], [ "Zhai", "Peng", "" ], [ "Chen", "Chixiao", "" ], [ "Zhang", "Lihua", "" ] ]
new_dataset
0.997955
2207.07917
Mang Yu
Mang Yu, Sitao Huang and Deming Chen
Chimera: A Hybrid Machine Learning Driven Multi-Objective Design Space Exploration Tool for FPGA High-Level Synthesis
This is an extended version of the conference paper published in the 22nd International Conference on Intelligent Data Engineering and Automated Learning (IDEAL 2021), which won the Best Paper Award. It is supported in part by the Xilinx Center of Excellence and Xilinx Adaptive Compute Clusters (XACC) program at the University of Illinois Urbana-Champaign
null
null
null
cs.AR cs.LG cs.NE
http://creativecommons.org/licenses/by-nc-sa/4.0/
In recent years, hardware accelerators based on field-programmable gate arrays (FPGAs) have been widely adopted, thanks to FPGAs' extraordinary flexibility. However, with the high flexibility comes the difficulty in design and optimization. Conventionally, these accelerators are designed with low-level hardware description languages, which means creating large designs with complex behavior is extremely difficult. Therefore, high-level synthesis (HLS) tools were created to simplify hardware designs for FPGAs. They enable the user to create hardware designs using high-level languages and provide various optimization directives to help improve the performance of the synthesized hardware. However, applying these optimizations to achieve high performance is time-consuming and usually requires expert knowledge. To address this difficulty, we present an automated design space exploration tool for applying HLS optimization directives, called Chimera, which significantly reduces the human effort and expertise needed for creating high-performance HLS designs. It utilizes a novel multi-objective exploration method that seamlessly integrates active learning, an evolutionary algorithm, and Thompson sampling, making it capable of finding a set of optimized designs on a Pareto curve with only a small number of design points evaluated during the exploration. In the experiments, in less than 24 hours, this hybrid method explored design points that have the same or superior performance compared to highly optimized hand-tuned designs created by expert HLS users from the Rosetta benchmark suite. In addition to discovering the extreme points, it also explores a Pareto frontier, where the elbow point can potentially save up to 26\% of flip-flop resources with negligibly higher latency.
[ { "version": "v1", "created": "Sun, 3 Jul 2022 21:13:55 GMT" } ]
2022-07-19T00:00:00
[ [ "Yu", "Mang", "" ], [ "Huang", "Sitao", "" ], [ "Chen", "Deming", "" ] ]
new_dataset
0.994687
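Chimera's output is a set of designs on a Pareto curve over competing objectives. Independent of the exploration strategy itself, the bookkeeping that defines such a curve is easy to state: keep the non-dominated points when both objectives (say, latency and flip-flop usage) are minimized. A small sketch with made-up design points:

```python
def pareto_front(points):
    """points: list of (latency, resource) tuples, both to be minimized.
    Returns the non-dominated set, sorted by latency."""
    front = []
    for p in points:
        # p is dominated if some other point is at least as good in both
        # objectives (and differs from p).
        if any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points):
            continue
        front.append(p)
    return sorted(front)

designs = [(10, 500), (12, 300), (9, 800), (15, 280), (11, 290)]
print(pareto_front(designs))   # [(9, 800), (10, 500), (11, 290), (15, 280)]
```

The exploration strategies named in the abstract (active learning, evolutionary search, Thompson sampling) decide which points get evaluated; a filter like this merely reports the frontier among them.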
2207.07932
Jiazhen Liu
Jiazhen Liu, Xirong Li, Qijie Wei, Jie Xu, Dayong Ding
Semi-Supervised Keypoint Detector and Descriptor for Retinal Image Matching
Accepted to ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
For retinal image matching (RIM), we propose SuperRetina, the first end-to-end method with jointly trainable keypoint detector and descriptor. SuperRetina is trained in a novel semi-supervised manner. A small set of (nearly 100) images is incompletely labeled and used to supervise the network to detect keypoints on the vascular tree. To attack the incompleteness of manual labeling, we propose Progressive Keypoint Expansion to enrich the keypoint labels at each training epoch. By utilizing a keypoint-based improved triplet loss as its description loss, SuperRetina produces highly discriminative descriptors at full input image size. Extensive experiments on multiple real-world datasets justify the viability of SuperRetina. Even with manual labeling replaced by auto labeling, making the training process fully free of manual annotation, SuperRetina compares favorably against a number of strong baselines on two RIM tasks, i.e., image registration and identity verification. SuperRetina will be open source.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 12:55:20 GMT" } ]
2022-07-19T00:00:00
[ [ "Liu", "Jiazhen", "" ], [ "Li", "Xirong", "" ], [ "Wei", "Qijie", "" ], [ "Xu", "Jie", "" ], [ "Ding", "Dayong", "" ] ]
new_dataset
0.997261
2207.07958
Javier Duarte
Javier Duarte and Nhan Tran and Ben Hawks and Christian Herwig and Jules Muhizi and Shvetank Prakash and Vijay Janapa Reddi
FastML Science Benchmarks: Accelerating Real-Time Scientific Edge Machine Learning
9 pages, 4 figures, Contribution to 3rd Workshop on Benchmarking Machine Learning Workloads on Emerging Hardware (MLBench) at 5th Conference on Machine Learning and Systems (MLSys)
null
null
FERMILAB-CONF-22-534-PPD-SCD
cs.LG physics.comp-ph physics.ins-det
http://creativecommons.org/licenses/by/4.0/
Applications of machine learning (ML) are growing by the day for many unique and challenging scientific applications. However, a crucial challenge facing these applications is their need for ultra-low-latency and on-detector ML capabilities. Given the slowdown in Moore's law and Dennard scaling, coupled with the rapid advances in scientific instrumentation that are resulting in growing data rates, there is a need for ultra-fast ML at the extreme edge. Fast ML at the edge is essential for reducing and filtering scientific data in real-time to accelerate science experimentation and enable more profound insights. To accelerate real-time scientific edge ML hardware and software solutions, we need well-constrained benchmark tasks with enough specifications to be generically applicable and accessible. These benchmarks can guide the design of future edge ML hardware for scientific applications capable of meeting the nanosecond and microsecond level latency requirements. To this end, we present an initial set of scientific ML benchmarks, covering a variety of ML and embedded system techniques.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 14:30:15 GMT" } ]
2022-07-19T00:00:00
[ [ "Duarte", "Javier", "" ], [ "Tran", "Nhan", "" ], [ "Hawks", "Ben", "" ], [ "Herwig", "Christian", "" ], [ "Muhizi", "Jules", "" ], [ "Prakash", "Shvetank", "" ], [ "Reddi", "Vijay Janapa", "" ] ]
new_dataset
0.997612
2207.08023
Daniel T Chang
Daniel T. Chang
Distance-Geometric Graph Attention Network (DG-GAT) for 3D Molecular Geometry
arXiv admin note: substantial text overlap with arXiv:2006.01785, arXiv:2007.03513
null
null
null
cs.LG q-bio.QM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep learning for molecular science has so far mainly focused on 2D molecular graphs. Recently, however, there has been work to extend it to 3D molecular geometry, due to its scientific significance and critical importance in real-world applications. The 3D distance-geometric graph representation (DG-GR) adopts a unified scheme (distance) for representing the geometry of 3D graphs. It is invariant to rotation and translation of the graph, and it reflects pair-wise node interactions and their generally local nature, particularly relevant for 3D molecular geometry. To facilitate the incorporation of 3D molecular geometry in deep learning for molecular science, we adopt the new graph attention network with dynamic attention (GATv2) for use with DG-GR and propose the 3D distance-geometric graph attention network (DG-GAT). GATv2 is a great fit for DG-GR since the attention can vary by node and by distance between nodes. Experimental results of DG-GAT for the ESOL and FreeSolv datasets show major improvement (31% and 38%, respectively) over those of the standard graph convolution network based on 2D molecular graphs. The same is true for the QM9 dataset. Our work demonstrates the utility and value of DG-GAT for deep learning based on 3D molecular geometry.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 21:39:31 GMT" } ]
2022-07-19T00:00:00
[ [ "Chang", "Daniel T.", "" ] ]
new_dataset
0.996321
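The distance-geometric representation above encodes 3D geometry purely through pairwise node distances, which is what makes it invariant to rotation and translation. A short NumPy check of that invariance on random coordinates (the molecule size is arbitrary):

```python
import numpy as np

def distance_matrix(coords):
    """coords: (N, 3) node positions -> (N, N) pairwise distances."""
    diff = coords[:, None, :] - coords[None, :, :]   # (N, N, 3) displacements
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))                          # 5 atoms in 3D

# Apply a random rigid motion: orthogonal matrix (QR of a Gaussian matrix)
# plus a translation. Distances must be unchanged.
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
X_moved = X @ Q + rng.normal(size=(1, 3))

print(np.allclose(distance_matrix(X), distance_matrix(X_moved)))  # True
```

Feeding such a distance matrix (rather than raw coordinates) to an attention mechanism like GATv2 is the gist of what the abstract calls DG-GAT, though the network itself is not reproduced here.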
2207.08024
Sumanth Gurram
Sumanth Gurram, Andy Fang, David Chan, John Canny
LAVA: Language Audio Vision Alignment for Contrastive Video Pre-Training
Workshop Paper at ICML 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Generating representations of video data is of key importance in advancing the field of machine perception. Most current techniques rely on hand-annotated data, which can be difficult to work with, expensive to generate, and hard to scale. In this work, we propose a novel learning approach based on contrastive learning, LAVA, which is capable of learning joint language, audio, and video representations in a self-supervised manner. We pre-train LAVA on the Kinetics 700 dataset using transformer encoders to learn representations for each modality. We then demonstrate that LAVA performs competitively with the current state-of-the-art self-supervised and weakly-supervised pretraining techniques on UCF-101 and HMDB-51 video action recognition while using a fraction of the unlabeled data.
[ { "version": "v1", "created": "Sat, 16 Jul 2022 21:46:16 GMT" } ]
2022-07-19T00:00:00
[ [ "Gurram", "Sumanth", "" ], [ "Fang", "Andy", "" ], [ "Chan", "David", "" ], [ "Canny", "John", "" ] ]
new_dataset
0.99812
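Contrastive pre-training of the kind LAVA describes pulls embeddings of matching clips together across modalities and pushes mismatched pairs apart. A common way to write that objective is a symmetric InfoNCE loss between two modality batches; the temperature and the pairwise (rather than tri-modal) formulation below are assumptions, not necessarily LAVA's exact loss:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_a, z_b, tau=0.07):
    """z_a, z_b: (N, D) embeddings of the same N clips in two modalities.
    Matching pairs sit on the diagonal of the similarity matrix."""
    z_a = F.normalize(z_a, dim=-1)
    z_b = F.normalize(z_b, dim=-1)
    logits = z_a @ z_b.t() / tau              # (N, N) scaled cosine similarities
    targets = torch.arange(z_a.size(0))       # i-th clip matches i-th clip
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

video = torch.randn(8, 256)   # stand-ins for transformer-encoder outputs
audio = torch.randn(8, 256)
print(contrastive_loss(video, audio))
```

With three modalities, one would typically sum such pairwise terms over (video, audio), (video, text), and (audio, text).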
2207.08081
Chandra Shekhar
Chandra Shekhar and Sudipta Saha
Real Time Vehicle Identification
null
null
null
null
cs.DC
http://creativecommons.org/licenses/by-nc-nd/4.0/
Identification of the vehicles passing over the roads is a very important component of an Intelligent Transportation System. However, due to the presence of multiple vehicles together and their velocity, it gets hard to accurately identify and record them in real-time. Solutions based on computer vision use heavyweight equipment, making them quite inflexible, costly, and hence unsuitable for wide-area coverage. Solutions based on RFID, although lightweight and cost-effective, lack a fast and efficient communication protocol, which leads to their inability to record multiple moving vehicles at the same time. We propose an IoT-assisted solution that leverages Synchronous-Transmission based communication to bridge these gaps. Through extensive experiments we demonstrate that our strategy can consistently record up to an average of 40 vehicles running in the speed range of 30-90 km/h with at least 97.5% accuracy.
[ { "version": "v1", "created": "Sun, 17 Jul 2022 05:44:14 GMT" } ]
2022-07-19T00:00:00
[ [ "Shekhar", "Chandra", "" ], [ "Saha", "Sudipta", "" ] ]
new_dataset
0.972522
2207.08105
Marc Schmitt
Marc Schmitt
Mobile Security for the modern CEO: Attacks, Mitigations, and Future Trends
25 pages
null
null
null
cs.CR cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Today's world is digital, global, and interconnected, and mobile devices are at the heart of modern communications in business, politics, and civil society. However, cyber threats are an omnipresent reality in our hyper-connected world. The World Economic Forum consistently ranks cyber threats among the top global security risks. Attacks on mobile devices grow yearly in volume and magnitude, causing severe damage. This paper offers a comprehensive overview of modern mobile attacks categorized into malware, phishing, communication, supply chain, physical, and authentication attacks, including a section on mitigations and limitations. It also provides security design tips to secure the mobile setup and general recommendations to prevent the successful execution of an incoming attack. The last section highlights future technology trends and how those will impact and change the mobile security landscape in the future.
[ { "version": "v1", "created": "Sun, 17 Jul 2022 08:19:24 GMT" } ]
2022-07-19T00:00:00
[ [ "Schmitt", "Marc", "" ] ]
new_dataset
0.996492
2207.08112
Jonathan K\"ulz
Jonathan K\"ulz, Andreas Spitz, Ahmad Abu-Akel, Stephan G\"unnemann, Robert West
United States Politicians' Tone Became More Negative with 2016 Primary Campaigns
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a widespread belief that the tone of US political language has become more negative recently, in particular when Donald Trump entered politics. At the same time, there is disagreement as to whether Trump changed or merely continued previous trends. To date, data-driven evidence regarding these questions is scarce, partly due to the difficulty of obtaining a comprehensive, longitudinal record of politicians' utterances. Here we apply psycholinguistic tools to a novel, comprehensive corpus of 24 million quotes from online news attributed to 18,627 US politicians in order to analyze how the tone of US politicians' language evolved between 2008 and 2020. We show that, whereas the frequency of negative emotion words had decreased continuously during Obama's tenure, it suddenly and lastingly increased with the 2016 primary campaigns, by 1.6 pre-campaign standard deviations, or 8% of the pre-campaign mean, in a pattern that emerges across parties. The effect size drops by 40% when omitting Trump's quotes, and by 50% when averaging over speakers rather than quotes, implying that prominent speakers, and Trump in particular, have disproportionately, though not exclusively, contributed to the rise in negative language. This work provides the first large-scale data-driven evidence of a drastic shift toward a more negative political tone following Trump's campaign start as a catalyst, with important implications for the debate about the state of US politics.
[ { "version": "v1", "created": "Sun, 17 Jul 2022 08:41:14 GMT" } ]
2022-07-19T00:00:00
[ [ "Külz", "Jonathan", "" ], [ "Spitz", "Andreas", "" ], [ "Abu-Akel", "Ahmad", "" ], [ "Günnemann", "Stephan", "" ], [ "West", "Robert", "" ] ]
new_dataset
0.999492
2207.08150
Xiao Han
Xiao Han, Licheng Yu, Xiatian Zhu, Li Zhang, Yi-Zhe Song, Tao Xiang
FashionViL: Fashion-Focused Vision-and-Language Representation Learning
ECCV 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Large-scale Vision-and-Language (V+L) pre-training for representation learning has proven to be effective in boosting various downstream V+L tasks. However, when it comes to the fashion domain, existing V+L methods are inadequate as they overlook the unique characteristics of both the fashion V+L data and downstream tasks. In this work, we propose a novel fashion-focused V+L representation learning framework, dubbed FashionViL. It contains two novel fashion-specific pre-training tasks designed particularly to exploit two intrinsic attributes with fashion V+L data. First, in contrast to other domains where a V+L data point contains only a single image-text pair, there could be multiple images in the fashion domain. We thus propose a Multi-View Contrastive Learning task for pulling closer the visual representation of one image to the compositional multimodal representation of another image+text. Second, fashion text (e.g., product description) often contains rich fine-grained concepts (attributes/noun phrases). To exploit this, a Pseudo-Attributes Classification task is introduced to encourage the learned unimodal (visual/textual) representations of the same concept to be adjacent. Further, fashion V+L tasks uniquely include ones that do not conform to the common one-stream or two-stream architectures (e.g., text-guided image retrieval). We thus propose a flexible, versatile V+L model architecture consisting of a modality-agnostic Transformer so that it can be flexibly adapted to any downstream tasks. Extensive experiments show that our FashionViL achieves a new state of the art across five downstream tasks. Code is available at https://github.com/BrandonHanx/mmf.
[ { "version": "v1", "created": "Sun, 17 Jul 2022 12:06:27 GMT" } ]
2022-07-19T00:00:00
[ [ "Han", "Xiao", "" ], [ "Yu", "Licheng", "" ], [ "Zhu", "Xiatian", "" ], [ "Zhang", "Li", "" ], [ "Song", "Yi-Zhe", "" ], [ "Xiang", "Tao", "" ] ]
new_dataset
0.963831
2207.08178
Xinwei Liu
Xinwei Liu, Jian Liu, Yang Bai, Jindong Gu, Tao Chen, Xiaojun Jia, Xiaochun Cao
Watermark Vaccine: Adversarial Attacks to Prevent Watermark Removal
ECCV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
As a common security tool, visible watermarking has been widely applied to protect the copyrights of digital images. However, recent works have shown that visible watermarks can be removed by DNNs without damaging their host images. Such watermark-removal techniques pose a great threat to the ownership of images. Inspired by the vulnerability of DNNs to adversarial perturbations, we propose a novel defence mechanism by adversarial machine learning for good. From the perspective of the adversary, blind watermark-removal networks can be posed as our target models; we then optimize an imperceptible adversarial perturbation on the host images to proactively attack watermark-removal networks, dubbed Watermark Vaccine. Specifically, two types of vaccines are proposed. The Disrupting Watermark Vaccine (DWV) induces the network to ruin both the host image and the watermark when the image passes through watermark-removal networks. In contrast, the Inerasable Watermark Vaccine (IWV) works in the opposite fashion, trying to keep the watermark from being removed and still noticeable. Extensive experiments demonstrate the effectiveness of our DWV/IWV in preventing watermark removal, especially on various watermark-removal networks.
[ { "version": "v1", "created": "Sun, 17 Jul 2022 13:50:02 GMT" } ]
2022-07-19T00:00:00
[ [ "Liu", "Xinwei", "" ], [ "Liu", "Jian", "" ], [ "Bai", "Yang", "" ], [ "Gu", "Jindong", "" ], [ "Chen", "Tao", "" ], [ "Jia", "Xiaojun", "" ], [ "Cao", "Xiaochun", "" ] ]
new_dataset
0.996975
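The vaccine described above is an adversarial perturbation optimized on the host image against a watermark-removal network. A PGD-style sketch of the disrupting variant (DWV), which ascends a loss that degrades the network's recovery of the clean host; `removal_net` is a hypothetical stand-in for a differentiable blind removal model, and the epsilon, step size, and loss are illustrative choices:

```python
import torch

def disrupting_vaccine(removal_net, watermarked, host, eps=8/255,
                       step=2/255, iters=10):
    """Craft an L-infinity-bounded perturbation on the watermarked image so
    that the removal network's output diverges from the clean host."""
    delta = torch.zeros_like(watermarked, requires_grad=True)
    for _ in range(iters):
        out = removal_net(watermarked + delta)
        loss = torch.nn.functional.mse_loss(out, host)   # damage the recovery
        loss.backward()
        with torch.no_grad():
            delta += step * delta.grad.sign()            # gradient ascent step
            delta.clamp_(-eps, eps)                      # keep it imperceptible
            delta.grad.zero_()
    return (watermarked + delta).detach()
```

The inerasable variant (IWV) would instead descend a loss that keeps the network's output close to the watermarked input, but is otherwise structured the same way.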
2207.08191
Zongze Chen
Zongze Chen and Wenxia Yang and Xin Li
Stroke-Based Autoencoders: Self-Supervised Learners for Efficient Zero-Shot Chinese Character Recognition
10 pages, 13 figures
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Chinese characters carry a wealth of morphological and semantic information; therefore, the semantic enhancement of the morphology of Chinese characters has drawn significant attention. Previous methods were intended to directly extract information from a whole Chinese character image, which usually cannot capture both global and local information simultaneously. In this paper, we develop a stroke-based autoencoder (SAE) to model the sophisticated morphology of Chinese characters with a self-supervised method. Following its canonical writing order, we first represent a Chinese character as a series of stroke images with a fixed writing order, and then our SAE model is trained to reconstruct this stroke image sequence. This pre-trained SAE model can predict the stroke image series for unseen characters, as long as their strokes or radicals appeared in the training set. We have designed two contrasting SAE architectures on different forms of stroke images. One is fine-tuned on an existing stroke-based method for zero-shot recognition of handwritten Chinese characters, and the other is applied to enrich the Chinese word embeddings from their morphological features. The experimental results validate that, after pre-training, our SAE architecture outperforms other existing methods in zero-shot recognition and enhances the representation of Chinese characters with their abundant morphological and semantic information.
[ { "version": "v1", "created": "Sun, 17 Jul 2022 14:39:10 GMT" } ]
2022-07-19T00:00:00
[ [ "Chen", "Zongze", "" ], [ "Yang", "Wenxia", "" ], [ "Li", "Xin", "" ] ]
new_dataset
0.999015
2207.08287
Serena Kim
Serena Y. Kim, Koushik Ganesan, Crystal Soderman, Raven O'Rourke
Spatial Distribution of Solar PV Deployment: An Application of the Region-Based Convolutional Neural Network
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents a comprehensive analysis of the social and environmental determinants of solar photovoltaic (PV) deployment rates in Colorado, USA. Using 652,795 satellite images and computer vision frameworks based on a convolutional neural network, we estimated the proportion of households with solar PV systems and the roof areas covered by solar panels. At the census block group level, 7% of Coloradan households have a rooftop PV system, and 2.5% of roof areas in Colorado are covered by solar panels as of 2021. Our machine learning models predict solar PV deployment based on 43 natural and social characteristics of neighborhoods. Using four algorithms (Random Forest, CATBoost, LightGBM, XGBoost), we find that the share of Democratic party votes, hail risks, strong wind risks, median home value, and solar PV permitting timelines are the most important predictors of solar PV count per household. In addition to the size of the houses, the PV-to-roof-area ratio is highly dependent on solar PV permitting timelines, the proportion of renters and multifamily housing, and winter weather risks. We also find racial and ethnic disparities in rooftop solar deployment. The average marginal effects of median household income on solar deployment are lower in communities with a greater proportion of African American and Hispanic residents and are higher in communities with a greater proportion of White and Asian residents. In the ongoing energy transition, knowing the key predictors of solar deployment can better inform business and policy decision making for more efficient and equitable grid infrastructure investment and distributed energy resource management.
[ { "version": "v1", "created": "Sun, 17 Jul 2022 21:03:48 GMT" } ]
2022-07-19T00:00:00
[ [ "Kim", "Serena Y.", "" ], [ "Ganesan", "Koushik", "" ], [ "Soderman", "Crystal", "" ], [ "O'Rourke", "Raven", "" ] ]
new_dataset
0.957449
2207.08292
Fran\c{c}ois Portet
Ali Can Kocabiyikoglu, Fran\c{c}ois Portet, Prudence Gibert, Herv\'e Blanchon, Jean-Marc Babouchkine, Ga\"etan Gavazzi
A Spoken Drug Prescription Dataset in French for Spoken Language Understanding
Ali Can Kocabiyikoglu,Fran\c{c}ois Portet, Prudence Gibert, Herv\'e Blanchon, Jean-Marc Babouchkine, Ga\"etan Gavazzi. A Spoken Drug Prescription Dataset in French for Spoken Language Understanding. LREC2022, Marseille, France, 21-22-23 June 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Spoken medical dialogue systems are increasingly attracting interest to enhance access to healthcare services and improve the quality and traceability of patient care. In this paper, we focus on medical drug prescriptions acquired on smartphones through spoken dialogue. Such systems would facilitate the traceability of care and would free clinicians' time. However, there is a lack of speech corpora to develop such systems, since most of the related corpora are in text form and in English. To facilitate the research and development of spoken medical dialogue systems, we present, to the best of our knowledge, the first spoken medical drug prescriptions corpus, named PxSLU. It contains 4 hours of transcribed and annotated dialogues of drug prescriptions in French, acquired through an experiment with 55 participants, experts and non-experts in prescriptions. We also present some experiments that demonstrate the interest of this corpus for the evaluation and development of medical dialogue systems.
[ { "version": "v1", "created": "Sun, 17 Jul 2022 21:18:03 GMT" } ]
2022-07-19T00:00:00
[ [ "Kocabiyikoglu", "Ali Can", "" ], [ "Portet", "François", "" ], [ "Gibert", "Prudence", "" ], [ "Blanchon", "Hervé", "" ], [ "Babouchkine", "Jean-Marc", "" ], [ "Gavazzi", "Gaëtan", "" ] ]
new_dataset
0.999814
2207.08312
Duncan Calvert
Duncan Calvert, Bhavyansh Mishra, Stephen McCrory, Sylvain Bertrand, Robert Griffin, and Jerry Pratt
A Fast, Autonomous, Bipedal Walking Behavior over Rapid Regions
null
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In trying to build humanoid robots that perform useful tasks in a world built for humans, we address the problem of autonomous locomotion. Humanoid robot planning and control algorithms for walking over rough terrain are becoming increasingly capable. At the same time, commercially available depth cameras have been getting more accurate and GPU computing has become a primary tool in AI research. In this paper, we present a newly constructed behavior control system for achieving fast, autonomous, bipedal walking, without pauses or deliberation. We achieve this using a recently published rapid planar regions perception algorithm, a height map based body path planner, an A* footstep planner, and a momentum-based walking controller. We put these elements together to form a behavior control system supported by modern software development practices and simulation tools.
[ { "version": "v1", "created": "Sun, 17 Jul 2022 22:30:33 GMT" } ]
2022-07-19T00:00:00
[ [ "Calvert", "Duncan", "" ], [ "Mishra", "Bhavyansh", "" ], [ "McCrory", "Stephen", "" ], [ "Bertrand", "Sylvain", "" ], [ "Griffin", "Robert", "" ], [ "Pratt", "Jerry", "" ] ]
new_dataset
0.992548
2207.08338
Hoang Le
Hoang Le, Liang Zhang, Amir Said, Guillaume Sautiere, Yang Yang, Pranav Shrestha, Fei Yin, Reza Pourreza, Auke Wiggers
MobileCodec: Neural Inter-frame Video Compression on Mobile Devices
ACM MMSys 2022
null
null
null
cs.CV cs.MM eess.IV
http://creativecommons.org/licenses/by/4.0/
Realizing the potential of neural video codecs on mobile devices is a big technological challenge due to the computational complexity of deep networks and the power-constrained mobile hardware. We demonstrate practical feasibility by leveraging Qualcomm's technology and innovation, bridging the gap from neural network-based codec simulations running on wall-powered workstations, to real-time operation on a mobile device powered by Snapdragon technology. We show the first-ever inter-frame neural video decoder running on a commercial mobile phone, decoding high-definition videos in real-time while maintaining a low bitrate and high visual quality.
[ { "version": "v1", "created": "Mon, 18 Jul 2022 01:20:18 GMT" } ]
2022-07-19T00:00:00
[ [ "Le", "Hoang", "" ], [ "Zhang", "Liang", "" ], [ "Said", "Amir", "" ], [ "Sautiere", "Guillaume", "" ], [ "Yang", "Yang", "" ], [ "Shrestha", "Pranav", "" ], [ "Yin", "Fei", "" ], [ "Pourreza", "Reza", "" ], [ "Wiggers", "Auke", "" ] ]
new_dataset
0.997798
2207.08420
David Monniaux
David Monniaux (VERIMAG - IMAG), Alice Pain (VERIMAG - IMAG, ENS-PSL)
Formally verified 32- and 64-bit integer division using double-precision floating-point arithmetic
null
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Some recent processors are not equipped with an integer division unit. Compilers then implement division by a call to a special function supplied by the processor designers, which implements division by a loop producing one bit of quotient per iteration. This hinders compiler optimizations and results in non-constant time computation, which is a problem in some applications. We advocate instead using the processor's floating-point unit, and propose code that the compiler can easily interleave with other computations. We fully proved the correctness of our algorithm, which mixes floating-point and fixed-bitwidth integer computations, using the Coq proof assistant and successfully integrated it into the CompCert formally verified compiler.
[ { "version": "v1", "created": "Mon, 18 Jul 2022 08:01:15 GMT" } ]
2022-07-19T00:00:00
[ [ "Monniaux", "David", "", "VERIMAG - IMAG" ], [ "Pain", "Alice", "", "VERIMAG - IMAG, ENS-PSL" ] ]
new_dataset
0.996248
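The core idea of the paper above is to replace the iterative integer-division routine with floating-point hardware. Python floats are IEEE-754 doubles, so the simplest instance can be illustrated directly: for unsigned 32-bit operands, one correctly rounded double division followed by truncation already gives the exact quotient, since 32-bit integers fit exactly in a 53-bit significand and the division's rounding error (at most 2^-53 relative) is smaller than the 1/b gap to the nearest truncation boundary. The formally verified routines, especially the 64-bit case, are considerably more involved; this sketch only shows the easy case.

```python
def udiv32(a, b):
    """Unsigned 32-bit division via one double-precision FP division.
    Both operands convert to doubles exactly, and the correctly rounded
    quotient is close enough to a/b that truncation gives floor(a/b)."""
    assert 0 <= a < 2**32 and 1 <= b < 2**32
    return int(a / b)          # double division + truncation toward zero

# Randomized spot check against integer division:
import random
for _ in range(10**5):
    a = random.randrange(2**32)
    b = random.randrange(1, 2**32)
    assert udiv32(a, b) == a // b
print("ok")
```

For 64-bit operands this one-liner fails (the operands no longer fit in the significand), which is exactly where the paper's verified algorithm earns its keep.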
2207.08556
Qifan Xiao
Xudong Pan, Qifan Xiao, Mi Zhang, Min Yang
A Certifiable Security Patch for Object Tracking in Self-Driving Systems via Historical Deviation Modeling
null
null
null
null
cs.CR stat.ML
http://creativecommons.org/licenses/by-nc-nd/4.0/
Self-driving cars (SDC) commonly implement the perception pipeline to detect the surrounding obstacles and track their moving trajectories, which lays the ground for the subsequent driving decision making process. Although the security of obstacle detection in SDC is intensively studied, only very recently have attackers started to exploit the vulnerability of the tracking module. Compared with solely attacking the object detectors, this new attack strategy influences the driving decision more effectively with a smaller attack budget. However, little is known on whether the revealed vulnerability remains effective in end-to-end self-driving systems and, if so, how to mitigate the threat. In this paper, we present the first systematic research on the security of object tracking in SDC. Through a comprehensive case study on the full perception pipeline of a popular open-sourced self-driving system, Baidu's Apollo, we prove the mainstream multi-object tracker (MOT) based on the Kalman Filter (KF) is unsafe even with an enabled multi-sensor fusion mechanism. Our root cause analysis reveals that the vulnerability is innate to the design of KF-based MOT, which is expected to error-handle the prediction results from the object detectors, yet the adopted KF algorithm is prone to trust the observation more when its deviation from the prediction is larger. To address this design flaw, we propose a simple yet effective security patch for KF-based MOT, the core of which is an adaptive strategy to balance the focus of KF on observations and predictions according to the anomaly index of the observation-prediction deviation, and which has certified effectiveness against a generalized hijacking attack model. Extensive evaluation on 4 existing KF-based MOT implementations (including 2D and 3D, academic and Apollo ones) validates the defense effectiveness and the trivial performance overhead of our approach.
[ { "version": "v1", "created": "Mon, 18 Jul 2022 12:30:24 GMT" } ]
2022-07-19T00:00:00
[ [ "Pan", "Xudong", "" ], [ "Xiao", "Qifan", "" ], [ "Zhang", "Mi", "" ], [ "Yang", "Min", "" ] ]
new_dataset
0.978496
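The patch described above adaptively rebalances the Kalman filter between observation and prediction using an anomaly index of the innovation (observation minus prediction). A scalar sketch of that idea: a standard KF update whose gain is attenuated when the innovation is large relative to its running scale; the anomaly index and attenuation rule here are illustrative assumptions, not the paper's certified construction.

```python
class AdaptiveKF1D:
    def __init__(self, q=0.01, r=0.5):
        self.x, self.p = 0.0, 1.0      # state estimate and its variance
        self.q, self.r = q, r          # process / measurement noise
        self.scale = 1.0               # running scale of past innovations

    def step(self, z):
        self.p += self.q                       # predict
        innov = z - self.x                     # observation - prediction
        anomaly = abs(innov) / (self.scale + 1e-9)
        self.scale = 0.95 * self.scale + 0.05 * abs(innov)
        k = self.p / (self.p + self.r)         # standard Kalman gain
        if anomaly > 3.0:                      # deviation looks hijacked:
            k /= anomaly                       # lean toward the prediction
        self.x += k * innov                    # update
        self.p *= (1.0 - k)
        return self.x

kf = AdaptiveKF1D()
for z in [0.1, 0.2, 0.15, 5.0, 0.25]:          # 5.0 mimics a hijacked detection
    print(round(kf.step(z), 3))
```

A vanilla KF would be dragged far toward the 5.0 outlier; the attenuated gain is what keeps the estimate anchored to the prediction, which is the abstract's mitigation in miniature.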
2207.08557
Ahmad Shapiro
Ahmad Shapiro, Ayman Khalafallah, Marwan Torki
AlexU-AIC at Arabic Hate Speech 2022: Contrast to Classify
null
Proceedings of the OSACT 2022 Workshop, LREC2022, June 2022, 200-208
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Online presence on social media platforms such as Facebook and Twitter has become a daily habit for internet users. Despite the vast amount of services the platforms offer their users, users suffer from cyber-bullying, which further leads to mental abuse and may escalate to cause physical harm to individuals or targeted groups. In this paper, we present our submission to the Arabic Hate Speech 2022 Shared Task Workshop (OSACT5 2022) using the associated Arabic Twitter dataset. The shared task consists of 3 sub-tasks. Sub-task A focuses on detecting whether the tweet is offensive or not. Then, for offensive tweets, sub-task B focuses on detecting whether the tweet is hate speech or not. Finally, for hate speech tweets, sub-task C focuses on detecting the fine-grained type of hate speech among six different classes. Transformer models have proved their efficiency in classification tasks, but suffer from over-fitting when fine-tuned on a small or imbalanced dataset. We overcome this limitation by investigating multiple training paradigms such as Contrastive learning and Multi-task learning, along with Classification fine-tuning and an ensemble of our top 5 performers. Our proposed solution achieved 0.841, 0.817, and 0.476 macro-F1 average in sub-tasks A, B, and C, respectively.
[ { "version": "v1", "created": "Mon, 18 Jul 2022 12:33:51 GMT" } ]
2022-07-19T00:00:00
[ [ "Shapiro", "Ahmad", "" ], [ "Khalafallah", "Ayman", "" ], [ "Torki", "Marwan", "" ] ]
new_dataset
0.9998
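The abstract above mentions contrastive learning as one of the training paradigms used against over-fitting. A minimal sketch of one common realization, an NT-Xent loss over paired embeddings of the same tweets, is given below; the batch layout, normalization, and temperature are illustrative assumptions, not the authors' setup.

```python
# Hedged sketch: a plain NT-Xent contrastive loss over L2-normalized
# embeddings of two "views" of the same batch of tweets. One of several
# ways to realize contrastive pre-training; not the authors' exact recipe.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.1):
    """z1, z2: (B, D) embeddings of two views of the same B tweets."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2B, D)
    sim = z @ z.t() / temperature             # scaled cosine similarities
    sim.fill_diagonal_(float('-inf'))         # exclude self-similarity
    b = z1.size(0)
    # the positive of sample i is i + B (and vice versa)
    targets = torch.cat([torch.arange(b) + b, torch.arange(b)])
    return F.cross_entropy(sim, targets.to(z.device))
```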
2207.08635
Jiaan Wang
Jiaan Wang, Tingyi Zhang, Haoxiang Shi
GOAL: Towards Benchmarking Few-Shot Sports Game Summarization
work in progress
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sports game summarization aims to generate sports news based on real-time commentaries. The task has attracted wide research attention but remains under-explored, probably due to the lack of corresponding English datasets. Therefore, in this paper, we release GOAL, the first English sports game summarization dataset. Specifically, GOAL contains 103 commentary-news pairs, where the average lengths of commentaries and news are 2724.9 and 476.3 words, respectively. Moreover, to support research in the semi-supervised setting, GOAL additionally provides 2,160 unlabeled commentary documents. Based on GOAL, we build and evaluate several baselines, including extractive and abstractive ones. The experimental results show that the challenges of this task remain. We hope our work can promote research on sports game summarization. The dataset has been released at https://github.com/krystalan/goal.
[ { "version": "v1", "created": "Mon, 18 Jul 2022 14:29:18 GMT" } ]
2022-07-19T00:00:00
[ [ "Wang", "Jiaan", "" ], [ "Zhang", "Tingyi", "" ], [ "Shi", "Haoxiang", "" ] ]
new_dataset
0.998143
2207.08766
David Noever
Samantha E. Miller Noever, David Noever
Word Play for Playing Othello (Reverses)
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Language models like OpenAI's Generative Pre-Trained Transformers (GPT-2/3) capture the long-term correlations needed to generate text in a variety of domains (such as language translation) and, more recently, in gameplay (chess, Go, and checkers). The present research applies both the larger (GPT-3) and smaller (GPT-2) language models to explore the complex strategies of the game of Othello (or Reversi). Given the game's rules of rapid reversals of fortune, the language model not only serves as a candidate predictor of the next move based on previous game moves but also avoids sparse rewards in gameplay. The language model automatically captures or emulates championship-level strategies. The fine-tuned GPT-2 model generates Othello games ranging from 13-71% completion, while the larger GPT-3 model reaches 41% of a complete game. Like previous work with chess and Go, these language models offer a novel way to generate plausible game archives, particularly for comparing opening moves across a larger sample than is humanly possible to explore. A primary contribution of these models is a two-fold enlargement of the previous record for player archives (120,000 human games over 45 years, 1977-2022), thus supplying the research community with more diverse and original strategies for sampling with other reinforcement learning techniques.
[ { "version": "v1", "created": "Mon, 18 Jul 2022 17:13:32 GMT" } ]
2022-07-19T00:00:00
[ [ "Noever", "Samantha E. Miller", "" ], [ "Noever", "David", "" ] ]
new_dataset
0.955801
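The abstract above treats move generation as language modeling. The sketch below shows how such sampling could look with Hugging Face transformers; the base checkpoint (standing in for a fine-tuned model) and the move notation in the prompt are assumptions for illustration, not the authors' pipeline.

```python
# Hedged sketch: sampling continuation moves from a (hypothetically)
# fine-tuned GPT-2 with Hugging Face transformers. The checkpoint name
# and the move notation are illustrative assumptions.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")  # stand-in for a fine-tuned model

prompt = "f5 d6 c3 d3 c4"  # opening moves in standard Othello notation
inputs = tokenizer(prompt, return_tensors="pt")
out = model.generate(
    **inputs,
    max_length=60,
    do_sample=True,       # sampling, as in open-ended game generation
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```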
1702.06455
Tim Alderson
Tim L. Alderson
3-Dimensional Optical Orthogonal Codes with Ideal Autocorrelation-Bounds and Optimal Constructions
null
IEEE Trans. Inform. Theory 64 (2018), no. 6, 4392-4398
10.1109/TIT.2017.2717538
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Several new constructions of 3-dimensional optical orthogonal codes are presented here. In each case the codes have ideal autocorrelation $\lambda_a = 0$, and in all but one case a cross-correlation of $\lambda_c = 1$. All codes produced are optimal with respect to the applicable Johnson bound, either presented or developed here. Thus, on one hand the codes are as large as possible, and on the other, the bound(s) are shown to be tight. All codes are constructed by using a particular automorphism (a Singer cycle) of $PG(k,q)$, the finite projective geometry of dimension $k$ over the field of order $q$, or by using an affine analogue in $AG(k,q)$.
[ { "version": "v1", "created": "Tue, 21 Feb 2017 15:56:20 GMT" } ]
2022-07-18T00:00:00
[ [ "Alderson", "Tim L.", "" ] ]
new_dataset
0.989123
1803.04020
Tim Alderson
Tim L. Alderson and Alessandro Neri
Maximum Weight Spectrum Codes
19 pages
Adv. Math. Commun. 13 (2019), no. 1, 101-119
10.3934/amc.2019006
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the recent work \cite{shi18}, a combinatorial problem concerning linear codes over a finite field $\F_q$ was introduced. In that work the authors studied the weight set of an $[n,k]_q$ linear code, that is, the set of non-zero distinct Hamming weights, showing that its cardinality is upper bounded by $\frac{q^k-1}{q-1}$. They showed that this bound is sharp in the case $q=2$ and in the case $k=2$, and conjectured that it is sharp for every prime power $q$ and every positive integer $k$. In this work we quickly establish the truth of this conjecture. We provide two proofs, each employing different construction techniques. The first relies on the geometric view of linear codes as systems of projective points. The second approach is purely algebraic. We establish some lower bounds on the length of codes that satisfy the conjecture, and the lengths of the new codes constructed here are discussed.
[ { "version": "v1", "created": "Sun, 11 Mar 2018 19:20:21 GMT" }, { "version": "v2", "created": "Tue, 20 Mar 2018 17:30:47 GMT" }, { "version": "v3", "created": "Tue, 17 Apr 2018 18:18:46 GMT" } ]
2022-07-18T00:00:00
[ [ "Alderson", "Tim L.", "" ], [ "Neri", "Alessandro", "" ] ]
new_dataset
0.956149
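The central quantity in the abstract above is the weight set of an $[n,k]_q$ code, whose size is capped by $(q^k-1)/(q-1)$. A brute-force sketch for a toy binary code is shown below; the generator matrix is an arbitrary example, not one of the paper's constructions.

```python
# Hedged sketch: enumerate the weight set of a small binary [n, k] code,
# the quantity bounded by (q^k - 1)/(q - 1) in the abstract above.
# The generator matrix is a toy example, not from the paper.
from itertools import product

G = [
    [1, 0, 0, 1, 1, 0, 0],
    [0, 1, 0, 1, 0, 1, 1],
    [0, 0, 1, 0, 1, 1, 1],
]  # toy [7, 3]_2 generator matrix

k = len(G)
weights = set()
for msg in product((0, 1), repeat=k):
    # codeword = msg * G over GF(2), computed column by column
    codeword = [sum(m * g for m, g in zip(msg, col)) % 2 for col in zip(*G)]
    w = sum(codeword)
    if w:
        weights.add(w)

print(sorted(weights))               # distinct nonzero weights
print(len(weights), "<=", 2**k - 1)  # (q^k - 1)/(q - 1) = 7 for q = 2, k = 3
```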
1807.11798
Tim Alderson
Tim L. Alderson
A note on full weight spectrum codes
null
Trans. Comb. 8 (2019), no. 3, 15-22
10.22108/toc.2019.112621.1584
null
cs.IT math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A linear $ [n,k]_q $ code $ C $ is said to be a full weight spectrum (FWS) code if there exist codewords of each nonzero weight less than or equal to $ n $. In this brief communication we determine necessary and sufficient conditions for the existence of linear $ [n,k]_q $ full weight spectrum (FWS) codes. Central to our approach is the geometric view of linear codes, whereby columns of a generator matrix correspond to points in $ PG(k-1,q) $.
[ { "version": "v1", "created": "Tue, 31 Jul 2018 13:05:20 GMT" }, { "version": "v2", "created": "Mon, 20 Aug 2018 18:01:52 GMT" }, { "version": "v3", "created": "Thu, 8 Nov 2018 19:17:44 GMT" }, { "version": "v4", "created": "Wed, 3 Apr 2019 16:12:24 GMT" } ]
2022-07-18T00:00:00
[ [ "Alderson", "Tim L.", "" ] ]
new_dataset
0.993724
2003.07311
Johannes C. Paetzold
Suprosanna Shit, Johannes C. Paetzold, Anjany Sekuboyina, Ivan Ezhov, Alexander Unger, Andrey Zhylka, Josien P. W. Pluim, Ulrich Bauer, Bjoern H. Menze
clDice -- A Novel Topology-Preserving Loss Function for Tubular Structure Segmentation
* The authors Suprosanna Shit and Johannes C. Paetzold contributed equally to the work
null
10.1109/CVPR46437.2021.01629
CVPR 2021
cs.CV cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate segmentation of tubular, network-like structures, such as vessels, neurons, or roads, is relevant to many fields of research. For such structures, the topology is their most important characteristic, particularly preserving connectedness: in the case of vascular networks, missing a connected vessel entirely alters the blood-flow dynamics. We introduce a novel similarity measure termed centerlineDice (clDice for short), which is calculated on the intersection of the segmentation masks and their (morphological) skeleta. We theoretically prove that clDice guarantees topology preservation up to homotopy equivalence for binary 2D and 3D segmentation. Extending this, we propose a computationally efficient, differentiable loss function (soft-clDice) for training arbitrary neural segmentation networks. We benchmark the soft-clDice loss on five public datasets, including vessels, roads, and neurons (2D and 3D). Training on soft-clDice leads to segmentation with more accurate connectivity information, higher graph similarity, and better volumetric scores.
[ { "version": "v1", "created": "Mon, 16 Mar 2020 16:27:49 GMT" }, { "version": "v2", "created": "Mon, 23 Mar 2020 20:45:16 GMT" }, { "version": "v3", "created": "Sun, 29 Mar 2020 22:46:43 GMT" }, { "version": "v4", "created": "Thu, 3 Dec 2020 19:53:43 GMT" }, { "version": "v5", "created": "Mon, 29 Mar 2021 13:36:28 GMT" }, { "version": "v6", "created": "Tue, 30 Mar 2021 11:51:21 GMT" }, { "version": "v7", "created": "Fri, 15 Jul 2022 10:39:38 GMT" } ]
2022-07-18T00:00:00
[ [ "Shit", "Suprosanna", "" ], [ "Paetzold", "Johannes C.", "" ], [ "Sekuboyina", "Anjany", "" ], [ "Ezhov", "Ivan", "" ], [ "Unger", "Alexander", "" ], [ "Zhylka", "Andrey", "" ], [ "Pluim", "Josien P. W.", "" ], [ "Bauer", "Ulrich", "" ], [ "Menze", "Bjoern H.", "" ] ]
new_dataset
0.99927
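A direct reading of the clDice definition summarized above: topology precision and sensitivity are computed between each mask and the skeleton of the other, then combined harmonically. The sketch below follows that formula using scikit-image's skeletonize; treat it as an illustration of the measure, not the authors' released implementation.

```python
# Hedged sketch of clDice on binary 2D masks, following the definition
# summarized above; not the authors' released implementation.
import numpy as np
from skimage.morphology import skeletonize

def cl_dice(pred, true, eps=1e-8):
    """pred, true: binary numpy arrays of the same shape."""
    pred, true = pred.astype(bool), true.astype(bool)
    s_pred, s_true = skeletonize(pred), skeletonize(true)
    tprec = (s_pred & true).sum() / (s_pred.sum() + eps)  # topology precision
    tsens = (s_true & pred).sum() / (s_true.sum() + eps)  # topology sensitivity
    return 2 * tprec * tsens / (tprec + tsens + eps)      # harmonic mean
```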
2004.10100
Shohei Hisada
Shohei Hisada, Taichi Murayama, Kota Tsubouchi, Sumio Fujita, Shuntaro Yada, Shoko Wakamiya, and Eiji Aramaki
Syndromic surveillance using search query logs and user location information from smartphones against COVID-19 clusters in Japan
null
null
10.1038/s41598-020-75771-6
null
cs.IR cs.CY
http://creativecommons.org/licenses/by/4.0/
[Background] Two clusters of coronavirus disease 2019 (COVID-19) were confirmed in Hokkaido, Japan, in February 2020. To capture these clusters, this study employs Web search query logs and user location information from smartphones. [Material and Methods] First, we anonymously identified smartphone users who used a Web search engine (Yahoo! JAPAN Search) to query COVID-19 or its symptoms via its companion smartphone application (Yahoo Japan App). We regard these searchers as Web searchers who are suspicious of their own COVID-19 infection (WSSCI). Second, we extracted the locations of the WSSCI via the smartphone application. The spatio-temporal distribution of the number of WSSCI is compared with the actual locations of the two known clusters. [Result and Discussion] Before the early stage of cluster development, we could confirm several WSSCI, which demonstrates the basic feasibility of our WSSCI-based approach. However, the approach is accurate only in the early stage, and it was biased after the public announcement of the cluster development. For cases where other cluster-related resources, such as fine-grained population statistics, are not available, the proposed metric can help catch hints of emerging clusters.
[ { "version": "v1", "created": "Tue, 21 Apr 2020 15:21:30 GMT" } ]
2022-07-18T00:00:00
[ [ "Hisada", "Shohei", "" ], [ "Murayama", "Taichi", "" ], [ "Tsubouchi", "Kota", "" ], [ "Fujita", "Sumio", "" ], [ "Yada", "Shuntaro", "" ], [ "Wakamiya", "Shoko", "" ], [ "Aramaki", "Eiji", "" ] ]
new_dataset
0.994084
2012.04708
Yusuf H. Sahin
Yusuf H. Sahin, Alican Mertan, Gozde Unal
ODFNet: Using orientation distribution functions to characterize 3D point clouds
The paper is under consideration at Computer Vision and Image Understanding
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Learning new representations of 3D point clouds is an active research area in 3D vision, as the order-invariant point cloud structure still presents challenges for the design of neural network architectures. Recent works explored learning global features, local features, or both for point clouds; however, none of the earlier methods focused on capturing contextual shape information by analysing the local orientation distribution of points. In this paper, we leverage point orientation distributions around a point in order to obtain an expressive local neighborhood representation for point clouds. We achieve this by dividing the spherical neighborhood of a given point into predefined cone volumes, and statistics inside each volume are used as point features. In this way, a local patch can be represented not only by the selected point's nearest neighbors, but also by a point density distribution defined along multiple orientations around the point. We are then able to construct an orientation distribution function (ODF) neural network that involves an ODFBlock, which relies on MLP (multi-layer perceptron) layers. The new ODFNet model achieves state-of-the-art accuracy for object classification on the ModelNet40 and ScanObjectNN datasets, and for segmentation on the ShapeNet and S3DIS datasets.
[ { "version": "v1", "created": "Tue, 8 Dec 2020 19:54:20 GMT" }, { "version": "v2", "created": "Fri, 15 Jul 2022 13:02:08 GMT" } ]
2022-07-18T00:00:00
[ [ "Sahin", "Yusuf H.", "" ], [ "Mertan", "Alican", "" ], [ "Unal", "Gozde", "" ] ]
new_dataset
0.978407
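The core operation described above is binning a point's neighbors into predefined cone volumes and using per-cone statistics as features. A minimal sketch follows; the cone axes, aperture, and the choice of normalized counts as the statistic are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch of the cone-volume idea: assign each neighbor of a query
# point to the cone(s) it falls into and use per-cone statistics (here,
# normalized counts) as the local feature. Axes and aperture are assumptions.
import numpy as np

def odf_features(center, neighbors, cone_axes, half_angle_deg=30.0):
    """center: (3,); neighbors: (N, 3); cone_axes: (C, 3) unit vectors."""
    offsets = neighbors - center                     # (N, 3)
    dirs = offsets / (np.linalg.norm(offsets, axis=1, keepdims=True) + 1e-9)
    cos_sim = dirs @ cone_axes.T                     # (N, C)
    cos_thresh = np.cos(np.deg2rad(half_angle_deg))
    inside = cos_sim >= cos_thresh                   # is neighbor i inside cone c?
    counts = inside.sum(axis=0).astype(float)        # points per cone
    return counts / max(len(neighbors), 1)           # density along each orientation
```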
2012.06326
Alex B\"auerle
Alex B\"auerle, Patrick Albus, Raphael St\"ork, Tina Seufert, and Timo Ropinski
exploRNN: Understanding Recurrent Neural Networks through Visual Exploration
null
null
10.1007/s00371-022-02593-0
null
cs.LG cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
Due to the success of deep learning (DL) and its growing job market, students and researchers from many areas are interested in learning about DL technologies. Visualization has proven to be of great help during this learning process. While most current educational visualizations are targeted towards one specific architecture or use case, recurrent neural networks (RNNs), which are capable of processing sequential data, are not covered yet. This is despite the fact that tasks on sequential data, such as text and function analysis, are at the forefront of DL research. Therefore, we propose exploRNN, the first interactively explorable educational visualization for RNNs. On the basis of making learning easier and more fun, we define educational objectives targeted towards understanding RNNs. We use these objectives to form guidelines for the visual design process. By means of exploRNN, which is accessible online, we provide an overview of the training process of RNNs at a coarse level, while also allowing a detailed inspection of the data flow within LSTM cells. In an empirical study, we assessed 37 subjects in a between-subjects design to investigate the learning outcomes and cognitive load of exploRNN compared to a classic text-based learning environment. While learners in the text group are ahead in superficial knowledge acquisition, exploRNN is particularly helpful for deeper understanding of the learning content. In addition, the complex content in exploRNN is perceived as significantly easier and causes less extraneous load than in the text group. The study shows that for difficult learning material such as recurrent networks, where deep understanding is important, interactive visualizations such as exploRNN can be helpful.
[ { "version": "v1", "created": "Wed, 9 Dec 2020 15:06:01 GMT" }, { "version": "v2", "created": "Wed, 5 Jan 2022 10:24:36 GMT" }, { "version": "v3", "created": "Wed, 22 Jun 2022 10:52:45 GMT" } ]
2022-07-18T00:00:00
[ [ "Bäuerle", "Alex", "" ], [ "Albus", "Patrick", "" ], [ "Störk", "Raphael", "" ], [ "Seufert", "Tina", "" ], [ "Ropinski", "Timo", "" ] ]
new_dataset
0.987792
2202.06257
Ling Chen
Pengyue Jia, Ling Chen, Dandan Lyu
Fine-Grained Population Mobility Data-Based Community-Level COVID-19 Prediction Model
Accepted by Cybernetics and Systems
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Predicting the number of infections during the anti-epidemic process is extremely beneficial to the government in developing anti-epidemic strategies, especially in fine-grained geographic units. Previous works focus on low-spatial-resolution prediction, e.g., county-level, and preprocess data to the same geographic level, which loses some useful information. In this paper, we propose a fine-grained population mobility data-based model (FGC-COVID) utilizing data at two geographic levels for community-level COVID-19 prediction. We use the population mobility data between Census Block Groups (CBGs), a finer-grained geographic level than the community, to build the graph and capture the dependencies between CBGs using graph neural networks (GNNs). To mine patterns as fine-grained as possible for prediction, a spatial weighted aggregation module is introduced to aggregate the embeddings of CBGs to the community level based on their geographic affiliation and spatial autocorrelation. Extensive experiments on 300 days of LA City COVID-19 data indicate that our model outperforms existing forecasting models on community-level COVID-19 prediction.
[ { "version": "v1", "created": "Sun, 13 Feb 2022 08:40:47 GMT" }, { "version": "v2", "created": "Thu, 14 Apr 2022 02:00:27 GMT" }, { "version": "v3", "created": "Fri, 15 Jul 2022 07:12:56 GMT" } ]
2022-07-18T00:00:00
[ [ "Jia", "Pengyue", "" ], [ "Chen", "Ling", "" ], [ "Lyu", "Dandan", "" ] ]
new_dataset
0.993937
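The spatial weighted aggregation step described above pools CBG-level embeddings up to the community level. The sketch below shows one plausible weighting based on geographic affiliation; the inverse-distance weights are an assumption for illustration, not the paper's learned scheme.

```python
# Hedged sketch: pool CBG embeddings into one community embedding using
# affiliation-masked, inverse-distance weights. The weighting rule is an
# illustrative assumption, not the paper's aggregation module.
import torch

def aggregate_to_community(cbg_emb, cbg_coords, centroid, member_mask):
    """cbg_emb: (N, D); cbg_coords: (N, 2); centroid: (2,); member_mask: (N,) bool.

    Assumes at least one CBG belongs to the community.
    """
    d = torch.norm(cbg_coords - centroid, dim=1)            # distance to centroid
    w = torch.where(member_mask, 1.0 / (d + 1e-6), torch.zeros_like(d))
    w = w / w.sum()                                         # normalize over members
    return (w.unsqueeze(1) * cbg_emb).sum(dim=0)            # (D,) community embedding
```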
2203.06585
Jiaqi Gu
Jiaqi Gu, Zhiyu Xiang, Pan Zhao, Tingming Bai, Lingxuan Wang, Xijun Zhao, Zhiyuan Zhang
CVFNet: Real-time 3D Object Detection by Learning Cross View Features
7 pages, 5 figures, accepted by IROS 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, 3D object detection from LiDAR point clouds has made great progress thanks to the development of deep learning technologies. Although voxel- or point-based methods are popular in 3D object detection, they usually involve time-consuming operations, such as 3D convolutions on voxels or ball queries among points, making the resulting networks inappropriate for time-critical applications. On the other hand, 2D view-based methods feature high computing efficiency but usually obtain inferior performance compared with voxel- or point-based methods. In this work, we present a real-time, view-based, single-stage 3D object detector, namely CVFNet, to fulfill this task. To strengthen cross-view feature learning under demanding efficiency constraints, our framework extracts the features of different views and fuses them in an efficient progressive way. We first propose a novel Point-Range feature fusion module that deeply integrates point and range view features in multiple stages. Then, a special Slice Pillar is designed to preserve the 3D geometry when transforming the obtained deep point-view features into bird's eye view. To better balance the ratio of samples, a sparse pillar detection head is presented to focus detection on non-empty grids. We conduct experiments on the popular KITTI and NuScenes benchmarks, and state-of-the-art performance is achieved in terms of both accuracy and speed.
[ { "version": "v1", "created": "Sun, 13 Mar 2022 06:23:18 GMT" }, { "version": "v2", "created": "Fri, 15 Jul 2022 03:10:58 GMT" } ]
2022-07-18T00:00:00
[ [ "Gu", "Jiaqi", "" ], [ "Xiang", "Zhiyu", "" ], [ "Zhao", "Pan", "" ], [ "Bai", "Tingming", "" ], [ "Wang", "Lingxuan", "" ], [ "Zhao", "Xijun", "" ], [ "Zhang", "Zhiyuan", "" ] ]
new_dataset
0.997882
2203.09091
Inkyu Sa
Inkyu Sa, JongYoon Lim, Ho Seok Ahn, Bruce MacDonald
deepNIR: Datasets for generating synthetic NIR images and improved fruit detection system using deep learning techniques
35 pages, 27 figures, published in MDPI Remote Sensing journal
null
10.3390/s22134721
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
This paper presents datasets utilised for synthetic near-infrared (NIR) image generation and bounding-box-level fruit detection systems. It is undeniable that high-calibre machine learning frameworks such as TensorFlow or PyTorch, and large-scale ImageNet or COCO datasets, with the aid of accelerated GPU hardware, have pushed the limits of machine learning techniques for more than a decade. Among these breakthroughs, a high-quality dataset is one of the essential building blocks that can lead to success in model generalisation and the deployment of data-driven deep neural networks. In particular, synthetic data generation tasks often require more training samples than other supervised approaches. Therefore, in this paper, we share NIR+RGB datasets that are re-processed from two public datasets (i.e., nirscene and SEN12MS) and our novel NIR+RGB sweet pepper (capsicum) dataset. We quantitatively and qualitatively demonstrate that these NIR+RGB datasets are sufficient for synthetic NIR image generation. We achieved Frechet Inception Distances (FIDs) of 11.36, 26.53, and 40.15 for the nirscene1, SEN12MS, and sweet pepper datasets, respectively. In addition, we release manual annotations of 11 fruit bounding-box datasets that can be exported in various formats using a cloud service. Four newly added fruits [blueberry, cherry, kiwi, and wheat] compound 11 novel bounding-box datasets on top of our previous work presented in the deepFruits project [apple, avocado, capsicum, mango, orange, rockmelon, strawberry]. The total number of bounding-box instances in the dataset is 162k, and it is ready to use from the cloud service. For the evaluation of the dataset, the YOLOv5 single-stage detector is exploited and reports an impressive mean average precision, mAP[0.5:0.95], of [min: 0.49, max: 0.812]. We hope these datasets are useful and serve as a baseline for future studies.
[ { "version": "v1", "created": "Thu, 17 Mar 2022 05:25:36 GMT" }, { "version": "v2", "created": "Fri, 15 Jul 2022 04:41:31 GMT" } ]
2022-07-18T00:00:00
[ [ "Sa", "Inkyu", "" ], [ "Lim", "JongYoon", "" ], [ "Ahn", "Ho Seok", "" ], [ "MacDonald", "Bruce", "" ] ]
new_dataset
0.999779
2203.12184
Ke Shen
Henrique Santos, Ke Shen, Alice M. Mulvehill, Yasaman Razeghi, Deborah L. McGuinness, Mayank Kejriwal
A Theoretically Grounded Benchmark for Evaluating Machine Commonsense
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Programming machines with commonsense reasoning (CSR) abilities is a longstanding challenge in the Artificial Intelligence community. Current CSR benchmarks use multiple-choice (and, in relatively fewer cases, generative) question-answering instances to evaluate machine commonsense. Recent progress in transformer-based language representation models suggests that considerable progress has been made on existing benchmarks. However, although tens of CSR benchmarks currently exist, and the number is growing, it is not evident that the full suite of commonsense capabilities has been systematically evaluated. Furthermore, there are doubts about whether language models are 'fitting' to a benchmark dataset's training partition by picking up on subtle, but normatively irrelevant (at least for CSR), statistical features to achieve good performance on the testing partition. To address these challenges, we propose a benchmark called Theoretically-Grounded Commonsense Reasoning (TG-CSR) that is also based on discriminative question answering, but with questions designed to evaluate diverse aspects of commonsense, such as space, time, and world states. TG-CSR is based on a subset of commonsense categories first proposed as a viable theory of commonsense by Gordon and Hobbs. The benchmark is also designed to be few-shot (and, in the future, zero-shot), with only a few training and validation examples provided. This report discusses the structure and construction of the benchmark. Preliminary results suggest that the benchmark is challenging even for advanced language representation models designed for discriminative CSR question answering tasks. Benchmark access and leaderboard: https://codalab.lisn.upsaclay.fr/competitions/3080 Benchmark website: https://usc-isi-i2.github.io/TGCSR/
[ { "version": "v1", "created": "Wed, 23 Mar 2022 04:06:01 GMT" }, { "version": "v2", "created": "Thu, 14 Jul 2022 23:27:43 GMT" } ]
2022-07-18T00:00:00
[ [ "Santos", "Henrique", "" ], [ "Shen", "Ke", "" ], [ "Mulvehill", "Alice M.", "" ], [ "Razeghi", "Yasaman", "" ], [ "McGuinness", "Deborah L.", "" ], [ "Kejriwal", "Mayank", "" ] ]
new_dataset
0.99439
2203.12268
Yinxiao Feng
Yinxiao Feng, Kaisheng Ma
Chiplet Actuary: A Quantitative Cost Model and Multi-Chiplet Architecture Exploration
Accepted by and presented at DAC 2022
null
null
null
cs.AR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-chip integration is widely recognized as an extension of Moore's Law. Cost saving is a frequently mentioned advantage, but previous works rarely present quantitative demonstrations of the cost superiority of multi-chip integration over monolithic SoCs. In this paper, we build a quantitative cost model and put forward an analytical method for multi-chip systems based on three typical multi-chip integration technologies to analyze the cost benefits of yield improvement, chiplet and package reuse, and heterogeneity. We re-examine the actual cost of multi-chip systems from various perspectives and show how to reduce the total cost of a VLSI system through an appropriate multi-chiplet architecture.
[ { "version": "v1", "created": "Wed, 23 Mar 2022 08:30:30 GMT" }, { "version": "v2", "created": "Wed, 30 Mar 2022 08:13:57 GMT" }, { "version": "v3", "created": "Fri, 15 Jul 2022 11:30:38 GMT" } ]
2022-07-18T00:00:00
[ [ "Feng", "Yinxiao", "" ], [ "Ma", "Kaisheng", "" ] ]
new_dataset
0.972421
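The cost argument sketched above rests on the fact that smaller dies yield better. The toy comparison below uses the classic negative-binomial die-yield model $Y = (1 + A D_0/\alpha)^{-\alpha}$; all numbers and the flat packaging overhead are illustrative assumptions, not the paper's cost model.

```python
# Hedged sketch: compare monolithic vs. chiplet-based cost with the classic
# negative-binomial die-yield model. All parameters are illustrative.
def die_yield(area_cm2, d0=0.1, alpha=3.0):
    """Yield under defect density d0 (defects/cm^2) and clustering alpha."""
    return (1 + area_cm2 * d0 / alpha) ** (-alpha)

def cost_per_good_die(area_cm2, wafer_cost=5000.0, wafer_area=706.9):
    dies = wafer_area / area_cm2            # 300 mm wafer; edge loss ignored
    return wafer_cost / (dies * die_yield(area_cm2))

soc_area = 8.0                               # one 8 cm^2 monolithic die
chiplets = 4                                 # vs. four 2 cm^2 chiplets
mono = cost_per_good_die(soc_area)
multi = chiplets * cost_per_good_die(soc_area / chiplets)
multi *= 1.10                                # assumed 10% packaging/bonding overhead
print(f"monolithic: {mono:.1f}, chiplet-based: {multi:.1f}")
```

With these toy numbers the four smaller dies end up cheaper despite the packaging overhead, which is the yield-driven effect the abstract quantifies.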
2204.03117
Shuo Liang
Shuo Liang, Wei Wei, Xian-Ling Mao, Fei Wang and Zhiyong He
BiSyn-GAT+: Bi-Syntax Aware Graph Attention Network for Aspect-based Sentiment Analysis
Findings of ACL 2022
null
10.18653/v1/2022.findings-acl.144
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Aspect-based sentiment analysis (ABSA) is a fine-grained sentiment analysis task that aims to align aspects and corresponding sentiments for aspect-specific sentiment polarity inference. It is challenging because a sentence may contain multiple aspects or complicated (e.g., conditional, coordinating, or adversative) relations. Recently, exploiting dependency syntax information with graph neural networks has been the most popular trend. Despite its success, methods that heavily rely on the dependency tree pose challenges in accurately modeling the alignment of the aspects and their words indicative of sentiment, since the dependency tree may provide noisy signals of unrelated associations (e.g., the "conj" relation between "great" and "dreadful" in Figure 2). In this paper, to alleviate this problem, we propose a Bi-Syntax aware Graph Attention Network (BiSyn-GAT+). Specifically, BiSyn-GAT+ fully exploits the syntax information (e.g., phrase segmentation and hierarchical structure) of the constituent tree of a sentence to model the sentiment-aware context of every single aspect (called intra-context) and the sentiment relations across aspects (called inter-context) for learning. Experiments on four benchmark datasets demonstrate that BiSyn-GAT+ outperforms the state-of-the-art methods consistently.
[ { "version": "v1", "created": "Wed, 6 Apr 2022 22:18:12 GMT" }, { "version": "v2", "created": "Fri, 15 Jul 2022 08:51:00 GMT" } ]
2022-07-18T00:00:00
[ [ "Liang", "Shuo", "" ], [ "Wei", "Wei", "" ], [ "Mao", "Xian-Ling", "" ], [ "Wang", "Fei", "" ], [ "He", "Zhiyong", "" ] ]
new_dataset
0.968853
2204.07827
Daniel Reichman
Hermish Mehta and Daniel Reichman
Local treewidth of random and noisy graphs with applications to stopping contagion in networks
Accepted to RANDOM 2022
null
null
null
cs.DS cs.DM
http://creativecommons.org/licenses/by/4.0/
We study the notion of local treewidth in sparse random graphs: the maximum treewidth over all $k$-vertex subgraphs of an $n$-vertex graph. When $k$ is not too large, we give nearly tight bounds for this local treewidth parameter; we also derive tight bounds for the local treewidth of noisy trees, trees where every non-edge is added independently with small probability. We apply our upper bounds on the local treewidth to obtain fixed parameter tractable algorithms (on random graphs and noisy trees) for edge-removal problems centered around containing a contagious process evolving over a network. In these problems, our main parameter of study is $k$, the number of initially ``infected'' vertices in the network. For the random graph models we consider and a certain range of parameters the running time of our algorithms on $n$-vertex graphs is $2^{o(k)}\textrm{poly}(n)$, improving upon the $2^{\Omega(k)}\textrm{poly}(n)$ performance of the best-known algorithms designed for worst-case instances of these edge deletion problems.
[ { "version": "v1", "created": "Sat, 16 Apr 2022 15:53:11 GMT" }, { "version": "v2", "created": "Fri, 15 Jul 2022 16:54:45 GMT" } ]
2022-07-18T00:00:00
[ [ "Mehta", "Hermish", "" ], [ "Reichman", "Daniel", "" ] ]
new_dataset
0.997355
2205.01202
Jingxing Qian
Jingxing Qian, Veronica Chatrath, Jun Yang, James Servos, Angela P. Schoellig, and Steven L. Waslander
POCD: Probabilistic Object-Level Change Detection and Volumetric Mapping in Semi-Static Scenes
Published in Robotics: Science and Systems (RSS) 2022
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Maintaining an up-to-date map to reflect recent changes in the scene is very important, particularly in situations involving repeated traversals by a robot operating in an environment over an extended period. Undetected changes may cause a deterioration in map quality, leading to poor localization, inefficient operations, and lost robots. Volumetric methods, such as truncated signed distance functions (TSDFs), have quickly gained traction due to their real-time production of a dense and detailed map, though map updating in scenes that change over time remains a challenge. We propose a framework that introduces a novel probabilistic object state representation to track object pose changes in semi-static scenes. The representation jointly models a stationarity score and a TSDF change measure for each object. A Bayesian update rule that incorporates both geometric and semantic information is derived to achieve consistent online map maintenance. To extensively evaluate our approach alongside the state-of-the-art, we release a novel real-world dataset in a warehouse environment. We also evaluate on the public ToyCar dataset. Our method outperforms state-of-the-art methods on the reconstruction quality of semi-static environments.
[ { "version": "v1", "created": "Mon, 2 May 2022 20:33:11 GMT" }, { "version": "v2", "created": "Fri, 15 Jul 2022 13:40:04 GMT" } ]
2022-07-18T00:00:00
[ [ "Qian", "Jingxing", "" ], [ "Chatrath", "Veronica", "" ], [ "Yang", "Jun", "" ], [ "Servos", "James", "" ], [ "Schoellig", "Angela P.", "" ], [ "Waslander", "Steven L.", "" ] ]
new_dataset
0.999281
2206.00204
Shuhao Zeng
Shuhao Zeng, Hongliang Zhang, Boya Di, Yuanwei Liu, Marco Di Renzo, Zhu Han, H. Vincent Poor, Lingyang Song
Intelligent Omni-Surfaces: Reflection-Refraction Circuit Model, Full-Dimensional Beamforming, and System Implementation
33 pages, 20 figures
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The intelligent omni-surface (IOS) is a dynamic metasurface that has recently been proposed to achieve full-dimensional communications by realizing the dual function of anomalous reflection and anomalous refraction. Existing research provides only simplified models for the reflection and refraction responses of the IOS, which do not explicitly depend on the physical structure of the IOS or on the angle of incidence of the electromagnetic (EM) wave. Therefore, the available reflection-refraction models are insufficient to characterize the performance of full-dimensional communications. In this paper, we propose a complete and detailed circuit-based reflection-refraction model for the IOS, which is formulated in terms of the physical structure and equivalent circuits of the IOS elements, and we validate it against full-wave EM simulations. Based on the proposed circuit-based model for the IOS, we analyze the asymmetry between the reflection and transmission coefficients. Moreover, the proposed circuit-based model is utilized for optimizing the hybrid beamforming of IOS-assisted networks and hence improving the system performance. To verify the circuit-based model and the theoretical findings, and to evaluate the performance of full-dimensional beamforming, we implement a prototype of the IOS and deploy an IOS-assisted wireless communication testbed to experimentally measure the beam patterns and quantify the achievable rate. The obtained experimental results validate the theoretical findings and the accuracy of the proposed circuit-based reflection-refraction model for IOSs.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 03:01:23 GMT" }, { "version": "v2", "created": "Fri, 15 Jul 2022 01:23:04 GMT" } ]
2022-07-18T00:00:00
[ [ "Zeng", "Shuhao", "" ], [ "Zhang", "Hongliang", "" ], [ "Di", "Boya", "" ], [ "Liu", "Yuanwei", "" ], [ "Di Renzo", "Marco", "" ], [ "Han", "Zhu", "" ], [ "Poor", "H. Vincent", "" ], [ "Song", "Lingyang", "" ] ]
new_dataset
0.991088
2207.00748
Pedro Henrique Luz de Araujo
Pedro H. Luz de Araujo, Ana Paula G. S. de Almeida, Fabricio A. Braz, Nilton C. da Silva, Flavio de Barros Vidal, Teofilo E. de Campos
Sequence-aware multimodal page classification of Brazilian legal documents
11 pages, 6 figures. This preprint, which was originally written on 8 April 2021, has not undergone peer review or any post-submission improvements or corrections. The Version of Record of this article is published in the International Journal on Document Analysis and Recognition, and is available online at https://doi.org/10.1007/s10032-022-00406-7 and https://rdcu.be/cRvvV
International Journal on Document Analysis and Recognition.2022
10.1007/s10032-022-00406-7
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
The Brazilian Supreme Court receives tens of thousands of cases each semester. Court employees spend thousands of hours executing the initial analysis and classification of those cases -- which takes effort away from later, more complex stages of the case management workflow. In this paper, we explore multimodal classification of documents from Brazil's Supreme Court. We train and evaluate our methods on a novel multimodal dataset of 6,510 lawsuits (339,478 pages) with manual annotation assigning each page to one of six classes. Each lawsuit is an ordered sequence of pages, which are stored both as an image and as the corresponding text extracted through optical character recognition. We first train two unimodal classifiers: a ResNet pre-trained on ImageNet is fine-tuned on the images, and a convolutional network with filters of multiple kernel sizes is trained from scratch on the document texts. We use them as extractors of visual and textual features, which are then combined through our proposed Fusion Module. Our Fusion Module can handle missing textual or visual input by using learned embeddings for missing data. Moreover, we experiment with bi-directional Long Short-Term Memory (biLSTM) networks and linear-chain conditional random fields to model the sequential nature of the pages. The multimodal approaches outperform both textual and visual classifiers, especially when leveraging the sequential nature of the pages.
[ { "version": "v1", "created": "Sat, 2 Jul 2022 06:23:25 GMT" }, { "version": "v2", "created": "Fri, 15 Jul 2022 07:02:55 GMT" } ]
2022-07-18T00:00:00
[ [ "de Araujo", "Pedro H. Luz", "" ], [ "de Almeida", "Ana Paula G. S.", "" ], [ "Braz", "Fabricio A.", "" ], [ "da Silva", "Nilton C.", "" ], [ "Vidal", "Flavio de Barros", "" ], [ "de Campos", "Teofilo E.", "" ] ]
new_dataset
0.989641
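The Fusion Module described above concatenates visual and textual features and substitutes learned embeddings when a modality is missing. A minimal PyTorch sketch of that behavior follows; the dimensions and the small classification head are illustrative assumptions, not the paper's architecture.

```python
# Hedged sketch: fuse visual and textual features, substituting a learned
# embedding for any missing modality. Dimensions and the head are assumptions.
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    def __init__(self, vis_dim=512, txt_dim=300, hidden=256, n_classes=6):
        super().__init__()
        self.missing_vis = nn.Parameter(torch.zeros(vis_dim))  # learned stand-in
        self.missing_txt = nn.Parameter(torch.zeros(txt_dim))  # learned stand-in
        self.classifier = nn.Sequential(
            nn.Linear(vis_dim + txt_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, vis, txt):
        """vis: (B, vis_dim) or None; txt: (B, txt_dim) or None."""
        b = (vis if vis is not None else txt).size(0)
        if vis is None:
            vis = self.missing_vis.expand(b, -1)   # replace missing visual input
        if txt is None:
            txt = self.missing_txt.expand(b, -1)   # replace missing textual input
        return self.classifier(torch.cat([vis, txt], dim=1))
```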
2207.07120
Ryo Eguchi
Ryo Eguchi, David Vacek, Cole Godzinski, Silvia Curry, Max Evans, Allison M. Okamura
Between-Tactor Display Using Dynamic Tactile Stimuli
null
EuroHaptics 2022
null
null
cs.HC cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Display of illusory vibration locations between physical vibrotactile motors (tactors) placed on the skin has the potential to reduce the number of tactors in distributed tactile displays. This paper presents a between-tactor display method that uses dynamic tactile stimuli to generate illusory vibration locations. A belt with only 6 vibration motors displays 24 targets consisting of on-tactor and between-tactor locations. On-tactor locations are represented by simply vibrating the relevant single tactor. Between-tactor locations are displayed by adjusting the relative vibration amplitudes of two adjacent motors, with either (1) constant vibration amplitudes or (2) perturbed vibration amplitudes (creating local illusory motion). User testing showed that perturbations improve recognition accuracy for in-between tactor localization.
[ { "version": "v1", "created": "Wed, 13 Jul 2022 06:25:29 GMT" } ]
2022-07-18T00:00:00
[ [ "Eguchi", "Ryo", "" ], [ "Vacek", "David", "" ], [ "Godzinski", "Cole", "" ], [ "Curry", "Silvia", "" ], [ "Evans", "Max", "" ], [ "Okamura", "Allison M.", "" ] ]
new_dataset
0.998366
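The between-tactor rendering described above splits vibration amplitude across two adjacent motors, optionally perturbing it to create local illusory motion. The sketch below uses a constant-sum linear panning law with a sinusoidal perturbation; both are illustrative assumptions, not the study's exact stimulus design.

```python
# Hedged sketch: render a between-tactor target by splitting amplitude
# across two adjacent motors, with an optional perturbation for illusory
# motion. Panning law and perturbation shape are assumptions.
import numpy as np

def tactor_amplitudes(alpha, perturb=False, t=0.0, depth=0.2, freq=2.0):
    """alpha in [0, 1]: target position between motor A (0) and motor B (1)."""
    a, b = 1.0 - alpha, alpha                      # constant-sum linear panning
    if perturb:
        wobble = depth * np.sin(2 * np.pi * freq * t)
        a, b = np.clip(a + wobble, 0, 1), np.clip(b - wobble, 0, 1)
    return a, b

print(tactor_amplitudes(0.25))                        # static between-tactor target
print(tactor_amplitudes(0.25, perturb=True, t=0.1))   # dynamic stimulus
```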
2207.07262
Xiaoqiang Wang
Xiaoqiang Wang, Chunming Tang, Cunsheng Ding
Infinite families of cyclic and negacyclic codes supporting 3-designs
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interplay between coding theory and combinatorial $t$-designs has been a hot topic for many years for combinatorialists and coding theorists. Some infinite families of cyclic codes supporting infinite families of $3$-designs have been constructed in the past 50 years. However, no infinite family of negacyclic codes supporting an infinite family of $3$-designs has been reported in the literature. This is the main motivation of this paper. Let $q=p^m$, where $p$ is an odd prime and $m \geq 2$ is an integer. The objective of this paper is to present an infinite family of cyclic codes over $\mathrm{GF}(q)$ supporting an infinite family of $3$-designs and two infinite families of negacyclic codes over $\mathrm{GF}(q^2)$ supporting two infinite families of $3$-designs. The parameters and the weight distributions of these codes are determined. The subfield subcodes of these negacyclic codes over $\mathrm{GF}(q)$ are studied. Three infinite families of almost MDS codes are also presented. A constacyclic code over $\mathrm{GF}(4)$ supporting a $4$-design and six open problems are also presented in this paper.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 02:47:57 GMT" } ]
2022-07-18T00:00:00
[ [ "Wang", "Xiaoqiang", "" ], [ "Tang", "Chunming", "" ], [ "Ding", "Cunsheng", "" ] ]
new_dataset
0.997074
2207.07386
Ho Yin Au
Ho Yin Au, Jie Chen, Junkun Jiang, Yike Guo
ChoreoGraph: Music-conditioned Automatic Dance Choreography over a Style and Tempo Consistent Dynamic Graph
null
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Generating dance that temporally and aesthetically matches the music is a challenging problem, as the following factors need to be considered. First, the aesthetic styles and messages conveyed by the motion and music should be consistent. Second, the beats of the generated motion should be locally aligned to the musical features. Finally, basic choreomusical rules should be observed, and the generated motion should be diverse. To address these challenges, we propose ChoreoGraph, which choreographs high-quality dance motion for a given piece of music over a Dynamic Graph. A data-driven learning strategy is proposed to evaluate the aesthetic style and rhythmic connections between music and motion in a progressively learned cross-modality embedding space. The motion sequences are beat-aligned based on the music segments and then incorporated as nodes of a Dynamic Motion Graph. Compatibility factors such as style and tempo consistency, motion context connection, action completeness, and transition smoothness are comprehensively evaluated to determine the node transitions in the graph. We demonstrate that our repertoire-based framework can generate motions with aesthetic consistency that are robustly extensible in diversity. Both quantitative and qualitative experimental results show that our proposed model outperforms other baseline models.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 10:24:26 GMT" } ]
2022-07-18T00:00:00
[ [ "Au", "Ho Yin", "" ], [ "Chen", "Jie", "" ], [ "Jiang", "Junkun", "" ], [ "Guo", "Yike", "" ] ]
new_dataset
0.973488
2207.07403
Jordi Pons
Nicol\'as Schmidt, Jordi Pons, Marius Miron
PodcastMix: A dataset for separating music and speech in podcasts
In proceedings of INTERSPEECH2022. Project webpage: http://www.jordipons.me/apps/podcastmix/
null
null
null
cs.SD cs.DB eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce PodcastMix, a dataset formalizing the task of separating background music and foreground speech in podcasts. We aim at defining a benchmark suitable for training and evaluating (deep learning) source separation models. To that end, we release a large and diverse training dataset based on programmatically generated podcasts. However, current (deep learning) models can run into generalization issues, especially when trained on synthetic data. To target potential generalization issues, we release an evaluation set based on real podcasts, for which we design objective and subjective tests. From our experiments with real podcasts, we find that current (deep learning) models may have generalization issues. Yet, they can perform competently; e.g., our best baseline separates speech with a mean opinion score of 3.84 (rating "overall separation quality" from 1 to 5). The dataset and baselines are accessible online.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 11:12:21 GMT" } ]
2022-07-18T00:00:00
[ [ "Schmidt", "Nicolás", "" ], [ "Pons", "Jordi", "" ], [ "Miron", "Marius", "" ] ]
new_dataset
0.9988
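The training set described above is built by programmatically mixing speech and music. The sketch below shows one plausible recipe: scale the music to a target speech-to-music ratio and sum; the SNR-style gain rule is an assumption for illustration, not the dataset's exact generation code.

```python
# Hedged sketch: synthesize a podcast-style mix by scaling background music
# to a target speech-to-music ratio (in dB) and adding it to the speech.
# The gain rule is an illustrative assumption, not the dataset's recipe.
import numpy as np

def make_mix(speech, music, target_snr_db=6.0):
    """speech, music: mono float arrays of equal length; returns (mix, scaled music)."""
    p_speech = np.mean(speech ** 2)
    p_music = np.mean(music ** 2) + 1e-12
    gain = np.sqrt(p_speech / (p_music * 10 ** (target_snr_db / 10)))
    music_scaled = gain * music           # music now target_snr_db below speech
    return speech + music_scaled, music_scaled
```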
2207.07413
Mordechai Guri
Mordechai Guri
SATAn: Air-Gap Exfiltration Attack via Radio Signals From SATA Cables
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper introduces a new type of attack on isolated, air-gapped workstations. Although air-gapped computers have no wireless connectivity, we show that attackers can use the SATA cable as a wireless antenna to transfer radio signals in the 6 GHz frequency band. Serial ATA (SATA) is a bus interface widely used in modern computers that connects the host bus to mass storage devices such as hard disk drives, optical drives, and solid-state drives. The prevalence of the SATA interface makes this attack highly available to attackers in a wide range of computer systems and IT environments. We discuss related work on this topic and provide technical background. We show the design of the transmitter and receiver and present the implementation of these components. We also demonstrate the attack on different computers and provide an evaluation. The results show that attackers can use the SATA cable to wirelessly transfer a small amount of sensitive information from highly secured, air-gapped computers to a nearby receiver. Furthermore, we show that the attack can operate from user mode, is effective even from inside a Virtual Machine (VM), and can successfully work alongside other running workloads in the background. Finally, we discuss defense and mitigation techniques for this new air-gap attack.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 11:45:57 GMT" } ]
2022-07-18T00:00:00
[ [ "Guri", "Mordechai", "" ] ]
new_dataset
0.999486
2207.07423
Kiran Gopinathan
Kiran Gopinathan
GopCaml: A Structural Editor for OCaml
Presented at OCaml workshop at ICFP 2021
null
null
null
cs.PL
http://creativecommons.org/publicdomain/zero/1.0/
This talk presents Gopcaml-mode, the first structural editing plugin for OCaml. We will give a tour of the main plugin features, discussing the plugin's internal design and its integration with existing OCaml and GNU Emacs toolchains.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 12:02:47 GMT" } ]
2022-07-18T00:00:00
[ [ "Gopinathan", "Kiran", "" ] ]
new_dataset
0.998582
2207.07482
Axel Schaffland
Axel Schaffland
The Mechanical Neural Network(MNN) -- A physical implementation of a multilayer perceptron for education and hands-on experimentation
short video (30sec): https://youtu.be/zMxh3Io3hFE, full presentation video: https://youtu.be/cEzk8JKDzy4; 8 pages, 6 figures
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
In this paper the Mechanical Neural Network (MNN) is introduced, a physical implementation of a multilayer perceptron (MLP) with ReLU activation functions, two input neurons, four hidden neurons, and two output neurons. This physical model of an MLP is used in education to give a hands-on experience and allow students to experience the effect of changing the parameters of the network on the output. Neurons are small wooden levers connected by threads. Students can adapt the weights between the neurons by moving the clamps that connect a neuron to the next via a thread. The MNN can model real-valued functions and logical operators, including XOR.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 14:05:44 GMT" } ]
2022-07-18T00:00:00
[ [ "Schaffland", "Axel", "" ] ]
new_dataset
0.997731
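The abstract above describes a 2-4-2 ReLU multilayer perceptron that can realize logical operators including XOR. The sketch below reproduces that network in software with hand-set weights so the two outputs compute XOR and AND of binary inputs; the weight values are one illustrative configuration, not ones prescribed by the paper.

```python
# Hedged sketch: the same 2-4-2 ReLU network in software, with hand-set
# weights so the outputs compute XOR and AND of binary inputs. The weight
# values are one illustrative configuration, not the paper's.
import numpy as np

relu = lambda v: np.maximum(v, 0)

W1 = np.array([[ 1, -1,  1, 0],    # rows: inputs x1, x2
               [-1,  1,  1, 0]])   # cols: 4 hidden neurons
b1 = np.array([0, 0, -1, 0])       # hidden-layer thresholds
W2 = np.array([[1, 0],             # rows: hidden; cols: outputs (XOR, AND)
               [1, 0],
               [0, 1],
               [0, 0]])            # fourth hidden neuron left unused

for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    h = relu(np.array(x) @ W1 + b1)
    y = h @ W2
    print(x, "-> XOR:", int(y[0]), "AND:", int(y[1]))
```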
2207.07586
Micha{\l} Kajstura
Joanna Baran, Micha{\l} Kajstura, Maciej Zi\'o{\l}kowski, Krzysztof Rajda
Does Twitter know your political views? POLiTweets dataset and semi-automatic method for political leaning discovery
null
null
null
null
cs.CL cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
Every day, the world is flooded by millions of messages and statements posted on Twitter or Facebook. Social media platforms try to protect users' personal data, but there still is a real risk of misuse, including election manipulation. Did you know that only 13 posts addressing important or controversial topics for society are enough to predict one's political affiliation with a 0.85 F1-score? To examine this phenomenon, we created a novel universal method of semi-automated political leaning discovery. It relies on a heuristic data annotation procedure, which was evaluated to achieve 0.95 agreement with human annotators (counted as an accuracy metric). We also present POLiTweets - the first publicly open Polish dataset for political affiliation discovery in a multi-party setup, consisting of over 147k tweets from almost 10k Polish-writing users annotated heuristically, and almost 40k tweets from 166 users annotated manually as a test set. We used our data to study aspects of domain shift in the context of topics and the type of content writers - ordinary citizens vs. professional politicians.
[ { "version": "v1", "created": "Tue, 14 Jun 2022 10:28:23 GMT" } ]
2022-07-18T00:00:00
[ [ "Baran", "Joanna", "" ], [ "Kajstura", "Michał", "" ], [ "Ziółkowski", "Maciej", "" ], [ "Rajda", "Krzysztof", "" ] ]
new_dataset
0.993333
2207.07629
Zhiruo Zhou
Zhiruo Zhou, Hongyu Fu, Suya You, C.-C. Jay Kuo
GUSOT: Green and Unsupervised Single Object Tracking for Long Video Sequences
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Supervised and unsupervised deep trackers that rely on deep learning technologies have become popular in recent years. Yet, they demand high computational complexity and memory cost. A green unsupervised single-object tracker, called GUSOT, that aims at object tracking for long videos under a resource-constrained environment is proposed in this work. Built upon a baseline tracker, UHP-SOT++, which works well for short-term tracking, GUSOT contains two additional new modules: 1) lost object recovery, and 2) color-saliency-based shape proposal. They help resolve the tracking loss problem and offer a more flexible object proposal, respectively. Thus, they enable GUSOT to achieve higher tracking accuracy in the long run. We conduct experiments on the large-scale dataset LaSOT with long video sequences, and show that GUSOT offers a lightweight, high-performance tracking solution that finds applications in mobile and edge computing platforms.
[ { "version": "v1", "created": "Fri, 15 Jul 2022 17:42:49 GMT" } ]
2022-07-18T00:00:00
[ [ "Zhou", "Zhiruo", "" ], [ "Fu", "Hongyu", "" ], [ "You", "Suya", "" ], [ "Kuo", "C. -C. Jay", "" ] ]
new_dataset
0.998307
1712.10222
Simina Br\^anzei
Simina Br\^anzei and Erel Segal-Halevi and Aviv Zohar
How to Charge Lightning: The Economics of Bitcoin Transaction Channels
An earlier version of the paper was presented at Scaling Bitcoin 2017
null
null
null
cs.CR cs.DC cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Off-chain transaction channels represent one of the leading techniques to scale the transaction throughput in cryptocurrencies. However, the economic effect of transaction channels on the system has not been explored much until now. We study the economics of Bitcoin transaction channels, and present a framework for an economic analysis of the lightning network and its effect on transaction fees on the blockchain. Our framework allows us to reason about different patterns of demand for transactions and different topologies of the lightning network, and to derive the resulting fees for transacting both on and off the blockchain. Our initial results indicate that while the lightning network does allow for a substantially higher number of transactions to pass through the system, it does not necessarily provide higher fees to miners, and as a result may in fact lead to lower participation in mining within the system.
[ { "version": "v1", "created": "Fri, 29 Dec 2017 13:33:46 GMT" }, { "version": "v2", "created": "Wed, 13 Jul 2022 21:07:52 GMT" } ]
2022-07-15T00:00:00
[ [ "Brânzei", "Simina", "" ], [ "Segal-Halevi", "Erel", "" ], [ "Zohar", "Aviv", "" ] ]
new_dataset
0.992381
2005.02155
AKM Shahariar Azad Rabby
Jannatul Ferdous, Suvrajit Karmaker, A K M Shahariar Azad Rabby, Syed Akhter Hossain
MatriVasha: A Multipurpose Comprehensive Database for Bangla Handwritten Compound Characters
19 fig, 2 table
null
10.1007/978-981-15-9774-9_74
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recognition of Bangla handwritten compound characters has been an essential problem for many years. In recent years, applied research in machine learning and deep learning has gained interest, most notably in handwriting recognition, because of its tremendous applications such as Bangla OCR. MatriVasha is a project that can recognize several Bangla handwritten compound characters. Compound character recognition is currently an important topic due to its varied applications, and it supports reliable digitization of old forms and records. Unfortunately, there is a lack of a comprehensive dataset that covers all types of Bangla compound characters. MatriVasha is an attempt to catalog compound characters, which is challenging because each person has a unique style of writing shapes. MatriVasha proposes a dataset intended for recognizing 120 (one hundred twenty) Bangla compound characters, consisting of 2552 (two thousand five hundred fifty-two) isolated handwritten characters written by unique writers and collected from within Bangladesh. The samples cover a variety of districts and age groups and include an equal number of males and females, which supports district-, age-, and gender-based research. As of now, our proposed dataset is the most extensive dataset for Bangla compound characters. It is intended to support the development of recognition techniques for handwritten Bangla compound characters. In the future, this dataset will be made publicly available to help widen the research.
[ { "version": "v1", "created": "Wed, 29 Apr 2020 06:38:12 GMT" }, { "version": "v2", "created": "Wed, 6 May 2020 07:59:45 GMT" } ]
2022-07-15T00:00:00
[ [ "Ferdous", "Jannatul", "" ], [ "Karmaker", "Suvrajit", "" ], [ "Rabby", "A K M Shahariar Azad", "" ], [ "Hossain", "Syed Akhter", "" ] ]
new_dataset
0.999845
2101.11569
Marco Tarini
Marco Tarini
Closed-form Quadrangulation of N-Sided Patches
null
Computers & Graphics, Volume 107, Pages 60-65, ISSN 0097-8493, 2022
10.1016/j.cag.2022.06.015
null
cs.GR cs.CG
http://creativecommons.org/licenses/by-sa/4.0/
We analyze the problem of quadrangulating a $n$-sided patch, each side at its boundary subdivided into a given number of edges, using a single irregular vertex (or none, when $n = 4$) that breaks the otherwise fully regular lattice. We derive, in an analytical closed-form, (1) the necessary and sufficient conditions that a patch must meet to admit this quadrangulation, and (2) a full description of the resulting tessellation(s).
[ { "version": "v1", "created": "Wed, 27 Jan 2021 17:54:11 GMT" }, { "version": "v2", "created": "Mon, 8 Feb 2021 13:42:27 GMT" } ]
2022-07-15T00:00:00
[ [ "Tarini", "Marco", "" ] ]
new_dataset
0.951485
2104.01821
Li Zhang
Li Zhang, Wei Lu, Jinqing Yang
LAGOS-AND: A Large Gold Standard Dataset for Scholarly Author Name Disambiguation
33 pages, 7 tables, 7 figures
null
null
null
cs.DL
http://creativecommons.org/licenses/by/4.0/
In this paper, we present a method to automatically build large labeled datasets for the author ambiguity problem in the academic world by leveraging the authoritative academic resources ORCID and DOI. Using this method, we built LAGOS-AND, two large gold-standard datasets for author name disambiguation (AND): LAGOS-AND-BLOCK, created for clustering-based AND research, and LAGOS-AND-PAIRWISE, created for classification-based AND research. Our LAGOS-AND datasets are substantially different from the existing ones. The initial versions of the datasets (v1.0, released in February 2021) include 7.5M citations authored by 798K unique authors (LAGOS-AND-BLOCK) and close to 1M instances (LAGOS-AND-PAIRWISE). Both datasets show close similarities to the whole Microsoft Academic Graph (MAG) across validations of six facets. In building the datasets, we reveal the degree of variation of last names in three literature databases, PubMed, MAG, and Semantic Scholar, by comparing the author names they host with the authors' official last names shown on their ORCID pages. Furthermore, we evaluate several baseline disambiguation methods as well as MAG's author ID system on our datasets, and the evaluation helps identify several interesting findings. We hope the datasets and findings will bring new insights for future studies. The code and datasets are publicly available.
[ { "version": "v1", "created": "Mon, 5 Apr 2021 09:32:29 GMT" }, { "version": "v2", "created": "Thu, 14 Jul 2022 12:50:41 GMT" } ]
2022-07-15T00:00:00
[ [ "Zhang", "Li", "" ], [ "Lu", "Wei", "" ], [ "Yang", "Jinqing", "" ] ]
new_dataset
0.999656
2107.08336
Behnam Dezfouli
Puneet Kumar and Behnam Dezfouli
QuicSDN: Transitioning from TCP to QUIC for Southbound Communication in SDNs
null
null
null
SIOTLAB-REV-QUICSDN-2022
cs.NI
http://creativecommons.org/licenses/by/4.0/
In Software-Defined Networks (SDNs), the control plane and data plane communicate for various purposes, such as applying configurations and collecting statistical data. While various methods have been proposed to reduce the overhead and enhance the scalability of SDNs, the impact of the transport layer protocol used for southbound communication has not been investigated. Existing SDNs rely on TCP (and TLS) to enforce reliability and security. In this paper, we show that the use of TCP imposes a considerable overhead on southbound communication, identify the causes of this overhead, and demonstrate how replacing TCP with QUIC can enhance the performance of this communication. We introduce the quicSDN architecture, enabling southbound communication in SDNs via the QUIC protocol. We present a reference architecture based on the standard, most widely used protocols by the SDN community and show how the controller and switch are revamped to facilitate this transition. We compare, both analytically and empirically, the performance of quicSDN versus the traditional SDN architecture and confirm the superior performance of quicSDN.
[ { "version": "v1", "created": "Sun, 18 Jul 2021 01:09:05 GMT" }, { "version": "v2", "created": "Thu, 14 Jul 2022 02:10:14 GMT" } ]
2022-07-15T00:00:00
[ [ "Kumar", "Puneet", "" ], [ "Dezfouli", "Behnam", "" ] ]
new_dataset
0.999252
2107.08865
Siwei Chen
Siwei Chen, Xiao Ma, Yunfan Lu and David Hsu
Ab Initio Particle-based Object Manipulation
Robotics: Science and Systems (RSS) 2021
null
10.15607/RSS.2021.XVII.071
null
cs.RO cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper presents Particle-based Object Manipulation (Prompt), a new approach to robot manipulation of novel objects ab initio, without prior object models or pre-training on a large object dataset. The key element of Prompt is a particle-based object representation, in which each particle represents a point in the object; the local geometric, physical, and other features of the point; and its relation with other particles. Like the model-based analytic approaches to manipulation, the particle representation enables the robot to reason about the object's geometry and dynamics in order to choose suitable manipulation actions. Like the data-driven approaches, the particle representation is learned online in real time from visual sensor input, specifically, multi-view RGB images. The particle representation thus connects visual perception with robot control. Prompt combines the benefits of both model-based reasoning and data-driven learning. We show empirically that Prompt successfully handles a variety of everyday objects, some of which are transparent, and various manipulation tasks, including grasping and pushing. Our experiments also show that Prompt outperforms a state-of-the-art data-driven grasping method on daily objects, even though it does not use any offline training data.
[ { "version": "v1", "created": "Mon, 19 Jul 2021 13:27:00 GMT" } ]
2022-07-15T00:00:00
[ [ "Chen", "Siwei", "" ], [ "Ma", "Xiao", "" ], [ "Lu", "Yunfan", "" ], [ "Hsu", "David", "" ] ]
new_dataset
0.994398
2108.09416
Amir Karami
Amir Karami, Spring B. Clark, Anderson Mackenzie, Dorathea Lee, Michael Zhu, Hannah R. Boyajieff, Bailey Goldschmidt
2020 U.S. presidential election in swing states: Gender differences in Twitter conversations
null
null
null
null
cs.SI cs.CL
http://creativecommons.org/licenses/by/4.0/
Social media is commonly used by the public during election campaigns to express their opinions regarding different issues. Among various social media channels, Twitter provides an efficient platform for researchers and politicians to explore public opinion regarding a wide range of topics such as the economy and foreign policy. Current literature mainly focuses on analyzing the content of tweets without considering the gender of users. This research collects and analyzes a large number of tweets and uses computational, human coding, and statistical analyses to identify topics in more than 300,000 tweets posted during the 2020 U.S. presidential election and to compare female and male users regarding the average weight of the discussed topics. Our findings cover a wide range of topics, such as tax, climate change, and the COVID-19 pandemic. Among these topics, there is a significant difference between female and male users for more than 70% of them.
[ { "version": "v1", "created": "Sat, 21 Aug 2021 01:31:03 GMT" }, { "version": "v2", "created": "Thu, 14 Jul 2022 03:28:40 GMT" } ]
2022-07-15T00:00:00
[ [ "Karami", "Amir", "" ], [ "Clark", "Spring B.", "" ], [ "Mackenzie", "Anderson", "" ], [ "Lee", "Dorathea", "" ], [ "Zhu", "Michael", "" ], [ "Boyajieff", "Hannah R.", "" ], [ "Goldschmidt", "Bailey", "" ] ]
new_dataset
0.999733
2112.03030
Yinyu Nie
Yinyu Nie, Angela Dai, Xiaoguang Han, Matthias Nie{\ss}ner
Pose2Room: Understanding 3D Scenes from Human Activities
Accepted by ECCV'2022; Project page: https://yinyunie.github.io/pose2room-page/ Video: https://www.youtube.com/watch?v=MFfKTcvbM5o
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With wearable IMU sensors, one can estimate human poses from wearable devices without requiring visual input~\cite{von2017sparse}. In this work, we pose the question: Can we reason about object structure in real-world environments solely from human trajectory information? Crucially, we observe that human motion and interactions tend to give strong information about the objects in a scene -- for instance a person sitting indicates the likely presence of a chair or sofa. To this end, we propose P2R-Net to learn a probabilistic 3D model of the objects in a scene characterized by their class categories and oriented 3D bounding boxes, based on an input observed human trajectory in the environment. P2R-Net models the probability distribution of object class as well as a deep Gaussian mixture model for object boxes, enabling sampling of multiple, diverse, likely modes of object configurations from an observed human trajectory. In our experiments we show that P2R-Net can effectively learn multi-modal distributions of likely objects for human motions, and produce a variety of plausible object structures of the environment, even without any visual information. The results demonstrate that P2R-Net consistently outperforms the baselines on the PROX dataset and the VirtualHome platform.
[ { "version": "v1", "created": "Wed, 1 Dec 2021 20:54:36 GMT" }, { "version": "v2", "created": "Thu, 14 Jul 2022 16:20:50 GMT" } ]
2022-07-15T00:00:00
[ [ "Nie", "Yinyu", "" ], [ "Dai", "Angela", "" ], [ "Han", "Xiaoguang", "" ], [ "Nießner", "Matthias", "" ] ]
new_dataset
0.989928
2201.02179
Ruslan Nikolaev
Ruslan Nikolaev, Binoy Ravindran
wCQ: A Fast Wait-Free Queue with Bounded Memory Usage
null
Proceedings of the 34th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2022)
10.1145/3490148.3538572
null
cs.DC
http://creativecommons.org/licenses/by/4.0/
The concurrency literature presents a number of approaches for building non-blocking, FIFO, multiple-producer and multiple-consumer (MPMC) queues. However, only a fraction of them have high performance. In addition, many queue designs, such as LCRQ, trade memory usage for better performance. The recently proposed SCQ design achieves both memory efficiency as well as excellent performance. Unfortunately, both LCRQ and SCQ are only lock-free. On the other hand, existing wait-free queues are either not very performant or suffer from potentially unbounded memory usage. Strictly speaking, the latter queues, such as Yang & Mellor-Crummey's (YMC) queue, forfeit wait-freedom as they are blocking when memory is exhausted. We present a wait-free queue, called wCQ. wCQ is based on SCQ and uses its own variation of fast-path-slow-path methodology to attain wait-freedom and bound memory usage. Our experimental studies on x86 and PowerPC architectures validate wCQ's great performance and memory efficiency. They also show that wCQ's performance is often on par with the best known concurrent queue designs.
[ { "version": "v1", "created": "Thu, 6 Jan 2022 18:46:53 GMT" }, { "version": "v2", "created": "Thu, 14 Jul 2022 17:58:51 GMT" } ]
2022-07-15T00:00:00
[ [ "Nikolaev", "Ruslan", "" ], [ "Ravindran", "Binoy", "" ] ]
new_dataset
0.999283
2203.09509
Thomas Hartvigsen
Thomas Hartvigsen, Saadia Gabriel, Hamid Palangi, Maarten Sap, Dipankar Ray, Ece Kamar
ToxiGen: A Large-Scale Machine-Generated Dataset for Adversarial and Implicit Hate Speech Detection
Published as a long paper at ACL 2022. Code: https://github.com/microsoft/TOXIGEN
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Toxic language detection systems often falsely flag text that contains minority group mentions as toxic, as those groups are often the targets of online hate. Such over-reliance on spurious correlations also causes systems to struggle with detecting implicitly toxic language. To help mitigate these issues, we create ToxiGen, a new large-scale and machine-generated dataset of 274k toxic and benign statements about 13 minority groups. We develop a demonstration-based prompting framework and an adversarial classifier-in-the-loop decoding method to generate subtly toxic and benign text with a massive pretrained language model. Controlling machine generation in this way allows ToxiGen to cover implicitly toxic text at a larger scale, and about more demographic groups, than previous resources of human-written text. We conduct a human evaluation on a challenging subset of ToxiGen and find that annotators struggle to distinguish machine-generated text from human-written language. We also find that 94.5% of toxic examples are labeled as hate speech by human annotators. Using three publicly-available datasets, we show that finetuning a toxicity classifier on our data improves its performance on human-written data substantially. We also demonstrate that ToxiGen can be used to fight machine-generated toxicity as finetuning improves the classifier significantly on our evaluation subset. Our code and data can be found at https://github.com/microsoft/ToxiGen.
[ { "version": "v1", "created": "Thu, 17 Mar 2022 17:57:56 GMT" }, { "version": "v2", "created": "Tue, 3 May 2022 11:54:40 GMT" }, { "version": "v3", "created": "Tue, 10 May 2022 10:50:46 GMT" }, { "version": "v4", "created": "Thu, 14 Jul 2022 13:04:29 GMT" } ]
2022-07-15T00:00:00
[ [ "Hartvigsen", "Thomas", "" ], [ "Gabriel", "Saadia", "" ], [ "Palangi", "Hamid", "" ], [ "Sap", "Maarten", "" ], [ "Ray", "Dipankar", "" ], [ "Kamar", "Ece", "" ] ]
new_dataset
0.999817
2203.12705
Sagar Parekh
Sagar Parekh, Soheil Habibian, and Dylan P. Losey
RILI: Robustly Influencing Latent Intent
null
null
null
null
cs.RO cs.AI
http://creativecommons.org/licenses/by/4.0/
When robots interact with human partners, often these partners change their behavior in response to the robot. On the one hand this is challenging because the robot must learn to coordinate with a dynamic partner. But on the other hand -- if the robot understands these dynamics -- it can harness its own behavior, influence the human, and guide the team towards effective collaboration. Prior research enables robots to learn to influence other robots or simulated agents. In this paper we extend these learning approaches to now influence humans. What makes humans especially hard to influence is that -- not only do humans react to the robot -- but the way a single user reacts to the robot may change over time, and different humans will respond to the same robot behavior in different ways. We therefore propose a robust approach that learns to influence changing partner dynamics. Our method first trains with a set of partners across repeated interactions, and learns to predict the current partner's behavior based on the previous states, actions, and rewards. Next, we rapidly adapt to new partners by sampling trajectories the robot learned with the original partners, and then leveraging those existing behaviors to influence the new partner dynamics. We compare our resulting algorithm to state-of-the-art baselines across simulated environments and a user study where the robot and participants collaborate to build towers. We find that our approach outperforms the alternatives, even when the partner follows new or unexpected dynamics. Videos of the user study are available here: https://youtu.be/lYsWM8An18g
[ { "version": "v1", "created": "Wed, 23 Mar 2022 19:55:49 GMT" }, { "version": "v2", "created": "Thu, 14 Jul 2022 15:44:40 GMT" } ]
2022-07-15T00:00:00
[ [ "Parekh", "Sagar", "" ], [ "Habibian", "Soheil", "" ], [ "Losey", "Dylan P.", "" ] ]
new_dataset
0.95205
2205.04047
Animesh Basak Chowdhury
Mukta Debnath, Animesh Basak Chowdhury, Debasri Saha, Susmita Sur-Kolay
GreyConE: Greybox fuzzing+Concolic execution guided test generation for high level design
5 pages, 5 figures, 2 tables, 2 algorithms. Accepted in International Test Conference (ITC 2022)
null
null
null
cs.SE cs.CR
http://creativecommons.org/licenses/by/4.0/
Exhaustive testing of high-level designs poses an arduous challenge due to complex branching conditions, loop structures, and the inherent concurrency of hardware designs. Test engineers aim to generate quality test cases satisfying various code coverage metrics to ensure minimal presence of bugs in a design. Prior work on testing SystemC designs is time-inefficient, which obstructs achieving the desired coverage within a short time span. We interleave greybox fuzzing and concolic execution in a systematic manner to generate quality test cases that accelerate test coverage metrics. Our results outperform state-of-the-art methods in terms of the number of test cases and branch coverage for some of the benchmarks, and in runtime for most of them.
[ { "version": "v1", "created": "Mon, 9 May 2022 05:34:09 GMT" }, { "version": "v2", "created": "Tue, 10 May 2022 09:36:02 GMT" }, { "version": "v3", "created": "Wed, 13 Jul 2022 23:28:01 GMT" } ]
2022-07-15T00:00:00
[ [ "Debnath", "Mukta", "" ], [ "Chowdhury", "Animesh Basak", "" ], [ "Saha", "Debasri", "" ], [ "Sur-Kolay", "Susmita", "" ] ]
new_dataset
0.97802
2207.05675
Laszlo Kish
Laszlo B. Kish
Time synchronization protocol for the KLJN secure key exchange scheme
In press at Fluctuation and Noise Letters. Coming out in the October 2022 issue
null
null
null
cs.CR quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The information-theoretically secure Kirchhoff-law-Johnson-noise (KLJN) key exchange scheme, similarly to quantum key distribution (QKD), is potentially vulnerable to clock attacks, where Eve takes over the control of clock synchronization in the channel. This short note introduces a time synchronization protocol for Alice and Bob that is resistant to arbitrary time delay attacks, both symmetric and asymmetric. We explore various ways of clock synchronization for the KLJN system and propose an ultimate protocol that preserves time and hardware integrity under arbitrary attacks.
[ { "version": "v1", "created": "Mon, 4 Jul 2022 00:33:07 GMT" }, { "version": "v2", "created": "Thu, 14 Jul 2022 01:02:38 GMT" } ]
2022-07-15T00:00:00
[ [ "Kish", "Laszlo B.", "" ] ]
new_dataset
0.997126
2207.06410
Arijit Nandi
Arijit Nandi, Fatos Xhafa, Laia Subirats, Santi Fort
MDEAW: A Multimodal Dataset for Emotion Analysis through EDA and PPG signals from wireless wearable low-cost off-the-shelf Devices
null
null
null
null
cs.HC cs.AI cs.LG
http://creativecommons.org/licenses/by/4.0/
We present MDEAW, a multimodal database consisting of Electrodermal Activity (EDA) and Photoplethysmography (PPG) signals recorded during exams for a course taught at Eurecat Academy, Sabadell, Barcelona, in order to elicit the students' emotional reactions in a classroom scenario. Signals from 10 students were recorded along with the students' self-assessment of their affective state after each stimulus, in terms of 6 basic emotion states. All the signals were captured using portable, wearable, wireless, low-cost, and off-the-shelf equipment that has the potential to allow the use of affective computing methods in everyday applications. A baseline for student-wise affect recognition using EDA and PPG-based features, as well as their fusion, was established through ReMECS, Fed-ReMECS, and Fed-ReMECS-U. These results indicate the prospects of using low-cost devices for affective state recognition applications. The proposed database will be made publicly available in order to allow researchers to achieve a more thorough evaluation of the suitability of these capturing devices for emotion state recognition applications.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 07:04:29 GMT" } ]
2022-07-15T00:00:00
[ [ "Nandi", "Arijit", "" ], [ "Xhafa", "Fatos", "" ], [ "Subirats", "Laia", "" ], [ "Fort", "Santi", "" ] ]
new_dataset
0.999796
2207.06440
Jhony Heriberto Giraldo Zuluaga
Jhony H. Giraldo, Sajid Javed, Naoufel Werghi, Thierry Bouwmans
Graph CNN for Moving Object Detection in Complex Environments from Unseen Videos
null
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2021, pp. 225-233
10.1109/ICCVW54120.2021.00030
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Moving Object Detection (MOD) is a fundamental step for many computer vision applications. MOD becomes very challenging when a video sequence captured from a static or moving camera suffers from challenges such as camouflage, shadows, dynamic backgrounds, and lighting variations. Deep learning methods have been successfully applied to address MOD with competitive performance. However, in order to handle the overfitting problem, deep learning methods require a large amount of labeled data, whose collection is a laborious task as exhaustive annotations are not always available. Moreover, some MOD deep learning methods show performance degradation in the presence of unseen video sequences because the testing and training splits of the same sequences are involved during the network learning process. In this work, we pose the problem of MOD as a node classification problem using Graph Convolutional Neural Networks (GCNNs). Our algorithm, dubbed GraphMOD-Net, encompasses instance segmentation, background initialization, feature extraction, and graph construction. GraphMOD-Net is tested on unseen videos and outperforms state-of-the-art methods in unsupervised, semi-supervised, and supervised learning in several challenges of the Change Detection 2014 (CDNet2014) and UCSD background subtraction datasets.
[ { "version": "v1", "created": "Wed, 13 Jul 2022 18:00:12 GMT" } ]
2022-07-15T00:00:00
[ [ "Giraldo", "Jhony H.", "" ], [ "Javed", "Sajid", "" ], [ "Werghi", "Naoufel", "" ], [ "Bouwmans", "Thierry", "" ] ]
new_dataset
0.977302
2207.06464
Philipe Melo
Clara Andrade Pimentel, Joana Ziller, Philipe Melo
Whose Game Is It? Narrative Disputes in the World of Warcraft Fandom (De Quem \'e o Jogo? Disputas Narrativas no Fandom de World of Warcraft)
in Portuguese language
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Digital games are increasingly part of a cyberculture engendered by digital platforms. With this in mind, we consider World of Warcraft players as fans and content producers and examine the narrative disputes that emerge about the game on fan-work publishing platforms (Archive of Our Own and DeviantArt). We analyzed a vast set of fanfics and fanarts collected on these platforms, showing a textuality that involves not only the digital game, but a whole network of fan production that expands beyond the act of playing. Our observations show that, despite the popular perception that the World of Warcraft fandom is mostly male and heteronormative, women and LGBTQI+ people are a large participatory audience and produce a great deal of content, especially in the fanfic universe. The works created are also strongly marked by narratives of dissident bodies and sexualities. However, despite the presence of these subjects and narratives in the fandom, this content is made invisible on DeviantArt, which privileges male artists and heteronormative fanarts of a commercial nature.
[ { "version": "v1", "created": "Wed, 13 Jul 2022 18:29:58 GMT" } ]
2022-07-15T00:00:00
[ [ "Pimentel", "Clara Andrade", "" ], [ "Ziller", "Joana", "" ], [ "Melo", "Philipe", "" ] ]
new_dataset
0.998758
2207.06495
Enrico Paolini
Enrico Paolini, Lorenzo Valentini, Velio Tralli, Marco Chiani
Irregular Repetition Slotted ALOHA in an Information-Theoretic Setting
6 pages, 2 figures
2022 IEEE International Symposium on Information Theory
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
An information-theoretic approach to irregular repetition slotted ALOHA (IRSA) is proposed. In contrast with previous works, in which IRSA analysis is conducted only based on quantities that are typical of collision models such as the traffic, the new approach also captures more fundamental quantities. Specifically, a suitable codebook construction for the adder channel model is adopted to establish a link with successive interference cancellation over the multi-packet reception channel. This perspective allows proving achievability and converse results for the average sum rate of IRSA multiple access schemes.
[ { "version": "v1", "created": "Wed, 13 Jul 2022 19:37:08 GMT" } ]
2022-07-15T00:00:00
[ [ "Paolini", "Enrico", "" ], [ "Valentini", "Lorenzo", "" ], [ "Tralli", "Velio", "" ], [ "Chiani", "Marco", "" ] ]
new_dataset
0.994274
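The IRSA abstract above rests on iterative successive interference cancellation (SIC): once a slot contains a single replica, the packet is decoded and all its replicas are subtracted, possibly turning further slots into singletons. The following Python sketch simulates that decoding process under an idealized collision model; the degree distribution, frame size, and load are illustrative assumptions, and the paper's information-theoretic codebook construction is not modeled.

import random

def simulate_irsa(num_users=50, num_slots=100, degrees=(2, 3), trials=200):
    resolved_total = 0
    for _ in range(trials):
        # Each user transmits replicas of its packet in d randomly chosen slots.
        slots = [[] for _ in range(num_slots)]
        user_slots = []
        for u in range(num_users):
            chosen = random.sample(range(num_slots), random.choice(degrees))
            user_slots.append(chosen)
            for s in chosen:
                slots[s].append(u)
        # Iterative SIC: decode any singleton slot, then cancel all replicas
        # of the decoded user, possibly creating new singleton slots.
        resolved, progress = set(), True
        while progress:
            progress = False
            for s in range(num_slots):
                if len(slots[s]) == 1:
                    u = slots[s][0]
                    resolved.add(u)
                    for s2 in user_slots[u]:
                        slots[s2].remove(u)
                    progress = True
        resolved_total += len(resolved)
    return resolved_total / (trials * num_users)

print(f"fraction of users resolved: {simulate_irsa():.3f}")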
2207.06553
Xiaodong Yang
Tong Su, Xishun Wang, Xiaodong Yang
QML for Argoverse 2 Motion Forecasting Challenge
null
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To safely navigate in various complex traffic scenarios, autonomous driving systems are generally equipped with a motion forecasting module to provide vital information for the downstream planning module. For the real-world onboard applications, both accuracy and latency of a motion forecasting model are essential. In this report, we present an effective and efficient solution, which ranks the 3rd place in the Argoverse 2 Motion Forecasting Challenge 2022.
[ { "version": "v1", "created": "Wed, 13 Jul 2022 23:25:30 GMT" } ]
2022-07-15T00:00:00
[ [ "Su", "Tong", "" ], [ "Wang", "Xishun", "" ], [ "Yang", "Xiaodong", "" ] ]
new_dataset
0.997683
2207.06626
Tae Bok Lee
Tae Bok Lee, Sujy Han, Yong Seok Heo
Continuous Facial Motion Deblurring
null
IEEE Access (Early Access), 12 July 2022
10.1109/ACCESS.2022.3190089
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
We introduce a novel framework for continuous facial motion deblurring that restores the continuous sharp moments latent in a single motion-blurred face image via a moment control factor. Although a motion-blurred image is the accumulated signal of continuous sharp moments during the exposure time, most existing single-image deblurring approaches aim to restore a fixed number of frames using multiple networks and training stages. To address this problem, we propose a continuous facial motion deblurring network based on GAN (CFMD-GAN), a novel framework for restoring the continuous moments latent in a single motion-blurred face image with a single network and a single training stage. To stabilize the network training, we train the generator to restore continuous moments in the order determined by our facial motion-based reordering process (FMR), which utilizes domain-specific knowledge of the face. Moreover, we propose an auxiliary regressor that helps our generator produce more accurate images by estimating continuous sharp moments. Furthermore, we introduce a control-adaptive (ContAda) block that performs spatially deformable convolution and channel-wise attention as a function of the control factor. Extensive experiments on the 300VW dataset demonstrate that the proposed framework generates varying numbers of continuous output frames by changing the moment control factor. Compared with recent single-to-single image deblurring networks trained on the same 300VW training set, the proposed method shows superior performance in restoring the central sharp frame in terms of perceptual metrics, including LPIPS, FID, and Arcface identity distance. The proposed method also outperforms the existing single-to-video deblurring method in both qualitative and quantitative comparisons.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 02:53:37 GMT" } ]
2022-07-15T00:00:00
[ [ "Lee", "Tae Bok", "" ], [ "Han", "Sujy", "" ], [ "Heo", "Yong Seok", "" ] ]
new_dataset
0.989904
2207.06673
Pappu Yadav
Pappu Kumar Yadav, J. Alex Thomasson, Robert Hardin, Stephen W. Searcy, Ulisses Braga-Neto, Sorin C. Popescu, Daniel E. Martin, Roberto Rodriguez, Karem Meza, Juan Enciso, Jorge Solorzano Diaz, Tianyi Wang
Detecting Volunteer Cotton Plants in a Corn Field with Deep Learning on UAV Remote-Sensing Imagery
38 Pages
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The cotton boll weevil, Anthonomus grandis Boheman, is a serious pest to the U.S. cotton industry that has cost more than 16 billion USD in damages since it entered the United States from Mexico in the late 1800s. This pest has been nearly eradicated; however, the southern part of Texas still faces this issue and is prone to pest reinfestation each year due to its sub-tropical climate where cotton plants can grow year-round. Volunteer cotton (VC) plants growing in the fields of inter-seasonal crops, like corn, can serve as hosts to these pests once they reach the pin-head square stage (5-6 leaf stage) and therefore need to be detected, located, and destroyed or sprayed. In this paper, we present a study to detect VC plants in a corn field using YOLOv3 on three-band aerial images collected by an unmanned aircraft system (UAS). The two-fold objectives of this paper were: (i) to determine whether YOLOv3 can be used for VC detection in a corn field using RGB (red, green, and blue) aerial images collected by UAS and (ii) to investigate the behavior of YOLOv3 on images at three different scales (320 x 320, S1; 416 x 416, S2; and 512 x 512, S3 pixels) based on average precision (AP), mean average precision (mAP) and F1-score at the 95% confidence level. No significant differences existed for mAP among the three scales, while a significant difference was found for AP between S1 and S3 (p = 0.04) and S2 and S3 (p = 0.02). A significant difference was also found for F1-score between S2 and S3 (p = 0.02). The lack of significant differences in mAP at all three scales indicated that the trained YOLOv3 model can be used on a computer vision-based remotely piloted aerial application system (RPAAS) for VC detection and spray application in near real-time.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 05:59:54 GMT" } ]
2022-07-15T00:00:00
[ [ "Yadav", "Pappu Kumar", "" ], [ "Thomasson", "J. Alex", "" ], [ "Hardin", "Robert", "" ], [ "Searcy", "Stephen W.", "" ], [ "Braga-Neto", "Ulisses", "" ], [ "Popescu", "Sorin C.", "" ], [ "Martin", "Daniel E.", "" ], [ "Rodriguez", "Roberto", "" ], [ "Meza", "Karem", "" ], [ "Enciso", "Juan", "" ], [ "Diaz", "Jorge Solorzano", "" ], [ "Wang", "Tianyi", "" ] ]
new_dataset
0.996476
2207.06681
Mart\'in Ceresa
Mart\'in Ceresa and C\'esar S\'anchez
Multi: a Formal Playground for Multi-Smart Contract Interaction
null
null
null
null
cs.LO cs.PL cs.SC
http://creativecommons.org/licenses/by/4.0/
Blockchains are maintained by a network of participants that run algorithms designed to collectively maintain a distributed machine tolerant to Byzantine attacks. From the point of view of users, blockchains provide the illusion of centralized computers that perform trustable verifiable computations, where all computations are deterministic and the results cannot be manipulated or undone. Smart contracts are written in a special-purpose programming language with deterministic semantics. Each transaction begins with an invocation from an external user to a smart contract. Contracts have local storage and can call other contracts, and more importantly, they store, send and receive cryptocurrency. It is very important to guarantee that contracts are correct before deployment since their code cannot be modified after deployment. However, the resulting ecosystem makes it very difficult to reason about program correctness, since contracts can be executed by malicious users, or malicious contracts can be designed to exploit other contracts that call them. Many attacks and bugs are caused by unexpected interactions between multiple contracts: the attacked contract and unknown code that performs the exploit. Moreover, there is very aggressive competition between different blockchains to expand their user base. Ideas are implemented fast and blockchains compete to offer and adopt new features quickly. In this paper, we propose a formal extensible playground that allows reasoning about multi-contract interactions to ultimately prove properties before features are incorporated into the real blockchain. We implement a model of computation that models the execution platform, abstracts the internal code of each individual contract, and focuses on contract interactions. Moreover, we show how many features, existing or proposed, can be used to reason about multi-contract interactions.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 06:19:39 GMT" } ]
2022-07-15T00:00:00
[ [ "Ceresa", "Martán", "" ], [ "Sánchez", "César", "" ] ]
new_dataset
0.999086
2207.06695
Zhanzhan Cheng
Liang Qiao, Hui Jiang, Ying Chen, Can Li, Pengfei Li, Zaisheng Li, Baorui Zou, Dashan Guo, Yingda Xu, Yunlu Xu, Zhanzhan Cheng and Yi Niu
DavarOCR: A Toolbox for OCR and Multi-Modal Document Understanding
Short paper, Accept by ACM MM2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
This paper presents DavarOCR, an open-source toolbox for OCR and document understanding tasks. DavarOCR currently implements 19 advanced algorithms, covering 9 different task forms. DavarOCR provides detailed usage instructions and trained models for each algorithm. Compared with previous open-source OCR toolboxes, DavarOCR has more complete support for the sub-tasks of cutting-edge document understanding. To promote the development and application of OCR technology in academia and industry, we pay particular attention to modules that different sub-domains of the technology can share. DavarOCR is publicly released at https://github.com/hikopensource/Davar-Lab-OCR.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 06:54:47 GMT" } ]
2022-07-15T00:00:00
[ [ "Qiao", "Liang", "" ], [ "Jiang", "Hui", "" ], [ "Chen", "Ying", "" ], [ "Li", "Can", "" ], [ "Li", "Pengfei", "" ], [ "Li", "Zaisheng", "" ], [ "Zou", "Baorui", "" ], [ "Guo", "Dashan", "" ], [ "Xu", "Yingda", "" ], [ "Xu", "Yunlu", "" ], [ "Cheng", "Zhanzhan", "" ], [ "Niu", "Yi", "" ] ]
new_dataset
0.967315
2207.06717
Bowen Yu
Zhenyu Zhang, Bowen Yu, Haiyang Yu, Tingwen Liu, Cheng Fu, Jingyang Li, Chengguang Tang, Jian Sun, Yongbin Li
Layout-Aware Information Extraction for Document-Grounded Dialogue: Dataset, Method and Demonstration
Accepted to ACM Multimedia (MM) Industry Track 2022
null
null
null
cs.CL cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Building document-grounded dialogue systems has received growing interest as documents convey a wealth of human knowledge and commonly exist in enterprises. In such systems, how to comprehend and retrieve information from documents is a challenging research problem. Previous work ignores the visual property of documents and treats them as plain text, resulting in incomplete modality. In this paper, we propose a Layout-aware document-level Information Extraction dataset, LIE, to facilitate the study of extracting both structural and semantic knowledge from visually rich documents (VRDs), so as to generate accurate responses in dialogue systems. LIE contains 62k annotations of three extraction tasks from 4,061 pages in product and official documents, becoming the largest VRD-based information extraction dataset to the best of our knowledge. We also develop benchmark methods that extend the token-based language model to consider layout features like humans do. Empirical results show that layout is critical for VRD-based extraction, and the system demonstration also verifies that the extracted knowledge can help locate the answers that users care about.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 07:59:45 GMT" } ]
2022-07-15T00:00:00
[ [ "Zhang", "Zhenyu", "" ], [ "Yu", "Bowen", "" ], [ "Yu", "Haiyang", "" ], [ "Liu", "Tingwen", "" ], [ "Fu", "Cheng", "" ], [ "Li", "Jingyang", "" ], [ "Tang", "Chengguang", "" ], [ "Sun", "Jian", "" ], [ "Li", "Yongbin", "" ] ]
new_dataset
0.992271
2207.06828
Haozheng Zhang
Haozheng Zhang, Edmond S.L. Ho, Xiatian Zhang and Hubert P.H. Shum
Pose-based Tremor Classification for Parkinson's Disease Diagnosis from Video
MICCAI 2022
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Parkinson's disease (PD) is a progressive neurodegenerative disorder that results in a variety of motor dysfunction symptoms, including tremors, bradykinesia, rigidity and postural instability. The diagnosis of PD mainly relies on clinical experience rather than a definite medical test, and the diagnostic accuracy is only about 73-84% since it is challenged by the subjective opinions or experiences of different medical experts. Therefore, an efficient and interpretable automatic PD diagnosis system is valuable for supporting clinicians with more robust diagnostic decision-making. To this end, we propose to classify Parkinson's tremor since it is one of the most predominant symptoms of PD with strong generalizability. Unlike other computer-aided Parkinson's Tremor (PT) classification systems, which are time- and resource-consuming and rely on wearable sensors, we propose SPAPNet, which only requires consumer-grade, non-intrusive video recording of camera-facing human movements as input to provide undiagnosed patients with low-cost PT classification results as a PD warning sign. For the first time, we propose to use a novel attention module with a lightweight pyramidal channel-squeezing-fusion architecture to extract relevant PT information and filter the noise efficiently. This design aids in improving both classification performance and system interpretability. Experimental results show that our system outperforms the state of the art by achieving a balanced accuracy of 90.9% and an F1-score of 90.6% in classifying PT against the non-PT class.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 11:32:42 GMT" } ]
2022-07-15T00:00:00
[ [ "Zhang", "Haozheng", "" ], [ "Ho", "Edmond S. L.", "" ], [ "Zhang", "Xiatian", "" ], [ "Shum", "Hubert P. H.", "" ] ]
new_dataset
0.997322
2207.06985
Mohsen Zand
Mohsen Zand, Ali Etemad, Michael Greenspan
ObjectBox: From Centers to Boxes for Anchor-Free Object Detection
ECCV 2022 Oral
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present ObjectBox, a novel single-stage anchor-free and highly generalizable object detection approach. As opposed to both existing anchor-based and anchor-free detectors, which are more biased toward specific object scales in their label assignments, we use only object center locations as positive samples and treat all objects equally in different feature levels regardless of the objects' sizes or shapes. Specifically, our label assignment strategy considers the object center locations as shape- and size-agnostic anchors in an anchor-free fashion, and allows learning to occur at all scales for every object. To support this, we define new regression targets as the distances from two corners of the center cell location to the four sides of the bounding box. Moreover, to handle scale-variant objects, we propose a tailored IoU loss to deal with boxes with different sizes. As a result, our proposed object detector does not need any dataset-dependent hyperparameters to be tuned across datasets. We evaluate our method on MS-COCO 2017 and PASCAL VOC 2012 datasets, and compare our results to state-of-the-art methods. We observe that ObjectBox performs favorably in comparison to prior works. Furthermore, we perform rigorous ablation experiments to evaluate different components of our method. Our code is available at: https://github.com/MohsenZand/ObjectBox.
[ { "version": "v1", "created": "Thu, 14 Jul 2022 15:10:29 GMT" } ]
2022-07-15T00:00:00
[ [ "Zand", "Mohsen", "" ], [ "Etemad", "Ali", "" ], [ "Greenspan", "Michael", "" ] ]
new_dataset
0.999703
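The ObjectBox abstract above defines the regression targets as distances from two corners of the object's center cell to the four sides of the bounding box. The small Python sketch below computes such targets for one box; the choice of top-left/bottom-right corners and the sign convention are assumptions for illustration, not the paper's exact parameterization.

def objectbox_targets(box, stride):
    """box = (x1, y1, x2, y2) in pixels; stride = feature-map cell size."""
    x1, y1, x2, y2 = box
    cx, cy = (x1 + x2) / 2.0, (y1 + y2) / 2.0
    # Cell containing the object center, and its top-left/bottom-right corners.
    col, row = int(cx // stride), int(cy // stride)
    tl = (col * stride, row * stride)
    br = ((col + 1) * stride, (row + 1) * stride)
    # Distances from the two cell corners to the four box sides.
    return {
        "tl_to_left":  tl[0] - x1, "tl_to_top":    tl[1] - y1,
        "br_to_right": x2 - br[0], "br_to_bottom": y2 - br[1],
    }

print(objectbox_targets((30, 40, 110, 160), stride=16))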
2207.07098
Martin Karp
Martin Karp, Daniele Massaro, Niclas Jansson, Alistair Hart, Jacob Wahlgren, Philipp Schlatter, and Stefano Markidis
Large-Scale Direct Numerical Simulations of Turbulence Using GPUs and Modern Fortran
13 pages, 7 figures
null
null
null
cs.MS cs.CE cs.DC physics.flu-dyn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present our approach to making direct numerical simulations of turbulence with applications in sustainable shipping. We use modern Fortran and the spectral element method to leverage and scale on supercomputers powered by the Nvidia A100 and the recent AMD Instinct MI250X GPUs, while still providing support for user software developed in Fortran. We demonstrate the efficiency of our approach by performing the world's first direct numerical simulation of the flow around a Flettner rotor at Re=30'000 and its interaction with a turbulent boundary layer. We present one of the first performance comparisons between the AMD Instinct MI250X and Nvidia A100 GPUs for scalable computational fluid dynamics. Our results show that one MI250X offers performance on par with two A100 GPUs and has a similar power efficiency.
[ { "version": "v1", "created": "Thu, 23 Jun 2022 12:41:19 GMT" } ]
2022-07-15T00:00:00
[ [ "Karp", "Martin", "" ], [ "Massaro", "Daniele", "" ], [ "Jansson", "Niclas", "" ], [ "Hart", "Alistair", "" ], [ "Wahlgren", "Jacob", "" ], [ "Schlatter", "Philipp", "" ], [ "Markidis", "Stefano", "" ] ]
new_dataset
0.999452
2103.09704
Jiaye Li
Shichao Zhang, Jiaye Li and Yangding Li
Reachable Distance Function for KNN Classification
null
IEEE Transactions on Knowledge and Data Engineering, 2022
10.1109/TKDE.2022.3185149
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
A distance function is a key metric for measuring the affinity between two data points in machine learning. Extant distance functions often provide unreachable distance values in real applications. This can lead to an incorrect measure of the affinity between data points. This paper proposes a reachable distance function for KNN classification. The reachable distance function is not a geometric direct-line distance between two data points. It takes the class attribute of a training dataset into consideration when measuring the affinity between data points. Concretely speaking, the reachable distance between data points includes their class center distance and real distance. Its shape looks like "Z", and we also call it a Z distance function. In this way, the affinity between data points in the same class is always stronger than that in different classes. Or, the intraclass data points are always closer than those interclass data points. We evaluated the reachable distance with experiments, and demonstrated that the proposed distance function achieves better performance in KNN classification.
[ { "version": "v1", "created": "Wed, 17 Mar 2021 15:01:17 GMT" }, { "version": "v2", "created": "Wed, 29 Jun 2022 06:02:07 GMT" } ]
2022-07-14T00:00:00
[ [ "Zhang", "Shichao", "" ], [ "Li", "Jiaye", "" ], [ "Li", "Yangding", "" ] ]
new_dataset
0.95695
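The abstract above says the reachable distance combines a class-center distance with the real distance, so intra-class points always look closer. The following Python sketch is a minimal reading of that idea, assuming reachable distance = distance from the query to the training point's class center plus the center-to-point distance; this decomposition is our assumption, not the paper's exact formula.

import numpy as np

def class_centers(X, y):
    return {c: X[y == c].mean(axis=0) for c in np.unique(y)}

def reachable_distance(q, x, center):
    # Query -> class center -> training point, a "Z"-shaped path.
    return np.linalg.norm(q - center) + np.linalg.norm(center - x)

def knn_predict(q, X, y, k=3):
    centers = class_centers(X, y)
    d = np.array([reachable_distance(q, x, centers[c]) for x, c in zip(X, y)])
    nearest = y[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

X = np.array([[0.0, 0.0], [0.5, 0.2], [4.0, 4.0], [4.2, 3.8]])
y = np.array([0, 0, 1, 1])
print(knn_predict(np.array([0.4, 0.1]), X, y, k=3))  # expected: 0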
2104.02527
Yangzheng Wu
Yangzheng Wu, Mohsen Zand, Ali Etemad, Michael Greenspan
Vote from the Center: 6 DoF Pose Estimation in RGB-D Images by Radial Keypoint Voting
ECCV 2022 Oral
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel keypoint voting scheme based on intersecting spheres, that is more accurate than existing schemes and allows for fewer, more disperse keypoints. The scheme is based upon the distance between points, which as a 1D quantity can be regressed more accurately than the 2D and 3D vector and offset quantities regressed in previous work, yielding more accurate keypoint localization. The scheme forms the basis of the proposed RCVPose method for 6 DoF pose estimation of 3D objects in RGB-D data, which is particularly effective at handling occlusions. A CNN is trained to estimate the distance between the 3D point corresponding to the depth mode of each RGB pixel, and a set of 3 disperse keypoints defined in the object frame. At inference, a sphere centered at each 3D point is generated, of radius equal to this estimated distance. The surfaces of these spheres vote to increment a 3D accumulator space, the peaks of which indicate keypoint locations. The proposed radial voting scheme is more accurate than previous vector or offset schemes, and is robust to disperse keypoints. Experiments demonstrate RCVPose to be highly accurate and competitive, achieving state-of-the-art results on the LINEMOD (99.7%) and YCB-Video (97.2%) datasets, notably scoring +4.9% higher (71.1%) than previous methods on the challenging Occlusion LINEMOD dataset, and on average outperforming all other published results from the BOP benchmark for these 3 datasets. Our code is available at http://www.github.com/aaronwool/rcvpose.
[ { "version": "v1", "created": "Tue, 6 Apr 2021 14:06:08 GMT" }, { "version": "v2", "created": "Wed, 7 Apr 2021 21:29:19 GMT" }, { "version": "v3", "created": "Tue, 30 Nov 2021 14:03:54 GMT" }, { "version": "v4", "created": "Tue, 12 Jul 2022 23:50:22 GMT" } ]
2022-07-14T00:00:00
[ [ "Wu", "Yangzheng", "" ], [ "Zand", "Mohsen", "" ], [ "Etemad", "Ali", "" ], [ "Greenspan", "Michael", "" ] ]
new_dataset
0.993011
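The RCVPose abstract above describes radial voting: each 3D point, paired with a CNN-estimated keypoint distance, votes on a sphere of that radius in a 3D accumulator, and the peak marks the keypoint. The coarse Python sketch below reproduces that accumulation with ideal distance estimates; the voxel size and the thin-shell vote threshold are illustrative assumptions.

import numpy as np

def radial_vote(points, radii, grid=32, voxel=0.1):
    """points: (N, 3) scene points; radii: (N,) predicted keypoint distances."""
    acc = np.zeros(grid ** 3)
    # Voxel-center coordinates, flattened in C order.
    centers = (np.indices((grid, grid, grid)).reshape(3, -1).T + 0.5) * voxel
    for p, r in zip(points, radii):
        d = np.linalg.norm(centers - p, axis=1)
        acc[np.abs(d - r) < voxel / 2] += 1  # vote on a thin spherical shell
    return np.unravel_index(np.argmax(acc), (grid, grid, grid))

rng = np.random.default_rng(0)
keypoint = np.array([1.55, 1.55, 1.55])            # lies on a voxel center
points = rng.uniform(0.2, 3.0, size=(200, 3))
radii = np.linalg.norm(points - keypoint, axis=1)  # ideal distance estimates
print("peak voxel:", radial_vote(points, radii))   # expected near (15, 15, 15)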
2104.13666
Abbas Cheddad
Mengqiao Zhao, Andre G. Hochuli, Abbas Cheddad
End-to-End Approach for Recognition of Historical Digit Strings
To appear in the 16th International Conference on Document Analysis and Recognition (ICDAR 2021), LNCS, Springer, Lausanne, Switzerland
null
10.1007/978-3-030-86334-0_39
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
The plethora of digitalised historical document datasets released in recent years has rekindled interest in advancing the field of handwriting pattern recognition. In the same vein, a recently published data set, known as ARDIS, presents handwritten digits manually cropped from 15,000 scanned documents of Swedish church books and exhibiting various handwriting styles. To this end, we propose an end-to-end segmentation-free deep learning approach to handle this challenging ancient handwriting style of dates present in the ARDIS dataset (4-digits long strings). We show that with slight modifications in the VGG-16 deep model, the framework can achieve a recognition rate of 93.2%, resulting in a feasible solution free of heuristic methods, segmentation, and fusion methods. Moreover, the proposed approach outperforms the well-known CRNN method (a model widely applied in handwriting recognition tasks).
[ { "version": "v1", "created": "Wed, 28 Apr 2021 09:39:29 GMT" } ]
2022-07-14T00:00:00
[ [ "Zhao", "Mengqiao", "" ], [ "Hochuli", "Andre G.", "" ], [ "Cheddad", "Abbas", "" ] ]
new_dataset
0.998472
2111.04204
Felix Lau
Felix Lau, Nishant Subramani, Sasha Harrison, Aerin Kim, Elliot Branson and Rosanne Liu
Natural Adversarial Objects
null
Advances in Neural Information Processing Systems Data Centric AI workshop 2021
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Although state-of-the-art object detection methods have shown compelling performance, models often are not robust to adversarial attacks and out-of-distribution data. We introduce a new dataset, Natural Adversarial Objects (NAO), to evaluate the robustness of object detection models. NAO contains 7,934 images and 9,943 objects that are unmodified and representative of real-world scenarios, but cause state-of-the-art detection models to misclassify with high confidence. The mean average precision (mAP) of EfficientDet-D7 drops 74.5% when evaluated on NAO compared to the standard MSCOCO validation set. Moreover, by comparing a variety of object detection architectures, we find that better performance on MSCOCO validation set does not necessarily translate to better performance on NAO, suggesting that robustness cannot be simply achieved by training a more accurate model. We further investigate why examples in NAO are difficult to detect and classify. Experiments of shuffling image patches reveal that models are overly sensitive to local texture. Additionally, using integrated gradients and background replacement, we find that the detection model is reliant on pixel information within the bounding box, and insensitive to the background context when predicting class labels. NAO can be downloaded at https://drive.google.com/drive/folders/15P8sOWoJku6SSEiHLEts86ORfytGezi8.
[ { "version": "v1", "created": "Sun, 7 Nov 2021 23:42:55 GMT" } ]
2022-07-14T00:00:00
[ [ "Lau", "Felix", "" ], [ "Subramani", "Nishant", "" ], [ "Harrison", "Sasha", "" ], [ "Kim", "Aerin", "" ], [ "Branson", "Elliot", "" ], [ "Liu", "Rosanne", "" ] ]
new_dataset
0.999714
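The NAO abstract above probes texture bias by shuffling image patches, which preserves local texture while destroying global shape. A small Python sketch of such a probe follows; the grid size is an illustrative assumption.

import numpy as np

def shuffle_patches(img, grid=4, seed=0):
    """img: (H, W, C) array with H and W divisible by `grid`."""
    h, w = img.shape[0] // grid, img.shape[1] // grid
    patches = [img[i*h:(i+1)*h, j*w:(j+1)*w]
               for i in range(grid) for j in range(grid)]
    order = np.random.default_rng(seed).permutation(len(patches))
    # Reassemble the permuted patches row by row.
    rows = [np.concatenate([patches[order[i*grid + j]] for j in range(grid)], axis=1)
            for i in range(grid)]
    return np.concatenate(rows, axis=0)

img = np.arange(8 * 8 * 3, dtype=np.uint8).reshape(8, 8, 3)
out = shuffle_patches(img, grid=4)
# Same shape and same pixel multiset, only spatial arrangement changes.
assert out.shape == img.shape and sorted(out.flat) == sorted(img.flat)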
2111.09046
Jiawei Hu
Jiawei Hu, Wenhang Liu, Heng Zhang, Jingang Yi, Zhenhua Xiong
Multi-Robot Object Transport Motion Planning with a Deformable Sheet
8 pages, 10 figures, accepted by RAL&CASE 2022 on June 24, 2022
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Using a deformable sheet to handle objects is convenient and found in many practical applications. For object manipulation through a deformable sheet that is held by multiple mobile robots, it is a challenging task to model the object-sheet interactions. We present a computational model and algorithm to capture the object position on the deformable sheet with changing robotic team formations. A virtual variable cables model (VVCM) is proposed to simplify the modeling of the robot-sheet-object system. With the VVCM, we further present a motion planner for the robotic team to transport the object in a three-dimensional (3D) cluttered environment. Simulation and experimental results with different robot team sizes show the effectiveness and versatility of the proposed VVCM. We also compare the planning results for obstacle avoidance in 3D space against another benchmark planner.
[ { "version": "v1", "created": "Wed, 17 Nov 2021 11:42:16 GMT" }, { "version": "v2", "created": "Wed, 16 Mar 2022 09:09:25 GMT" }, { "version": "v3", "created": "Thu, 26 May 2022 13:37:17 GMT" }, { "version": "v4", "created": "Wed, 13 Jul 2022 01:44:13 GMT" } ]
2022-07-14T00:00:00
[ [ "Hu", "Jiawei", "" ], [ "Liu", "Wenhang", "" ], [ "Zhang", "Heng", "" ], [ "Yi", "Jingang", "" ], [ "Xiong", "Zhenhua", "" ] ]
new_dataset
0.995703
2112.03227
Oier Mees
Oier Mees, Lukas Hermann, Erick Rosete-Beas, Wolfram Burgard
CALVIN: A Benchmark for Language-Conditioned Policy Learning for Long-Horizon Robot Manipulation Tasks
Accepted for publication at IEEE Robotics and Automation Letters (RAL). Code, models and dataset available at http://calvin.cs.uni-freiburg.de
null
null
null
cs.RO cs.AI cs.CL cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
General-purpose robots coexisting with humans in their environment must learn to relate human language to their perceptions and actions to be useful in a range of daily tasks. Moreover, they need to acquire a diverse repertoire of general-purpose skills that allow composing long-horizon tasks by following unconstrained language instructions. In this paper, we present CALVIN (Composing Actions from Language and Vision), an open-source simulated benchmark to learn long-horizon language-conditioned tasks. Our aim is to make it possible to develop agents that can solve many robotic manipulation tasks over a long horizon, from onboard sensors, and specified only via human language. CALVIN tasks are more complex in terms of sequence length, action space, and language than existing vision-and-language task datasets, and CALVIN supports flexible specification of sensor suites. We evaluate the agents zero-shot on novel language instructions and on novel environments and objects. We show that a baseline model based on multi-context imitation learning performs poorly on CALVIN, suggesting that there is significant room for developing innovative agents that learn to relate human language to their world models with this benchmark.
[ { "version": "v1", "created": "Mon, 6 Dec 2021 18:37:33 GMT" }, { "version": "v2", "created": "Wed, 8 Dec 2021 10:04:13 GMT" }, { "version": "v3", "created": "Thu, 23 Jun 2022 11:43:49 GMT" }, { "version": "v4", "created": "Wed, 13 Jul 2022 12:15:04 GMT" } ]
2022-07-14T00:00:00
[ [ "Mees", "Oier", "" ], [ "Hermann", "Lukas", "" ], [ "Rosete-Beas", "Erick", "" ], [ "Burgard", "Wolfram", "" ] ]
new_dataset
0.999537
2112.08634
Robert Logan Iv
Robert L. Logan IV, Alexandre Passos, Sameer Singh and Ming-Wei Chang
FRUIT: Faithfully Reflecting Updated Information in Text
v2.0, NAACL 2022
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Textual knowledge bases such as Wikipedia require considerable effort to keep up to date and consistent. While automated writing assistants could potentially ease this burden, the problem of suggesting edits grounded in external knowledge has been under-explored. In this paper, we introduce the novel generation task of *faithfully reflecting updated information in text* (FRUIT), where the goal is to update an existing article given new evidence. We release the FRUIT-WIKI dataset, a collection of over 170K distantly supervised examples produced from pairs of Wikipedia snapshots, along with our data generation pipeline and a gold evaluation set of 914 instances whose edits are guaranteed to be supported by the evidence. We provide benchmark results for popular generation systems as well as for EDIT5 -- a T5-based approach tailored to editing that we introduce, which establishes the state of the art. Our analysis shows that developing models that can update articles faithfully requires new capabilities for neural generation models, and opens doors to many new applications.
[ { "version": "v1", "created": "Thu, 16 Dec 2021 05:21:24 GMT" }, { "version": "v2", "created": "Wed, 13 Jul 2022 15:01:10 GMT" } ]
2022-07-14T00:00:00
[ [ "Logan", "Robert L.", "IV" ], [ "Passos", "Alexandre", "" ], [ "Singh", "Sameer", "" ], [ "Chang", "Ming-Wei", "" ] ]
new_dataset
0.962398
2112.08910
Prasanna Parasurama
Prasanna Parasurama, Jo\~ao Sedoc
Degendering Resumes for Fair Algorithmic Resume Screening
null
null
null
null
cs.CL cs.CY
http://creativecommons.org/licenses/by/4.0/
We investigate whether it is feasible to remove gendered information from resumes to mitigate potential bias in algorithmic resume screening. Using a corpus of 709k resumes from IT firms, we first train a series of models to classify the self-reported gender of the applicant, thereby measuring the extent and nature of gendered information encoded in resumes. We then conduct a series of gender obfuscation experiments, where we iteratively remove gendered information from resumes. Finally, we train a resume screening algorithm and investigate the trade-off between gender obfuscation and screening algorithm performance. Results show: (1) There is a significant amount of gendered information in resumes. (2) Lexicon-based gender obfuscation method (i.e. removing tokens that are predictive of gender) can reduce the amount of gendered information to a large extent. However, after a certain point, the performance of the resume screening algorithm starts suffering. (3) General-purpose gender debiasing methods for NLP models such as removing gender subspace from embeddings are not effective in obfuscating gender.
[ { "version": "v1", "created": "Thu, 16 Dec 2021 14:26:36 GMT" }, { "version": "v2", "created": "Thu, 30 Jun 2022 19:52:35 GMT" }, { "version": "v3", "created": "Tue, 12 Jul 2022 23:52:47 GMT" } ]
2022-07-14T00:00:00
[ [ "Parasurama", "Prasanna", "" ], [ "Sedoc", "João", "" ] ]
new_dataset
0.982518
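The abstract above describes lexicon-based gender obfuscation: tokens predictive of self-reported gender are removed from resumes. The Python sketch below illustrates one way to build such a lexicon with smoothed log-odds scoring on a toy corpus; the scoring rule, cutoff, and corpus are illustrative assumptions, not the paper's exact method.

import math
from collections import Counter

docs = [
    ("softball coach and java developer", "F"),
    ("sorority treasurer python engineer", "F"),
    ("fraternity president java developer", "M"),
    ("football captain python engineer", "M"),
]

counts = {"F": Counter(), "M": Counter()}
for text, g in docs:
    counts[g].update(text.split())

def log_odds(token, alpha=1.0):
    """Smoothed log-odds of a token appearing in F vs. M documents."""
    f = counts["F"][token] + alpha
    m = counts["M"][token] + alpha
    return math.log(f / m)

vocab = set(counts["F"]) | set(counts["M"])
# Tokens whose absolute log-odds exceed a cutoff form the removal lexicon.
lexicon = {t for t in vocab if abs(log_odds(t)) > 0.5}

def obfuscate(text):
    return " ".join(t for t in text.split() if t not in lexicon)

print(sorted(lexicon))
print(obfuscate("sorority treasurer and java developer"))  # -> "java developer"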
2201.06499
Vladimir Kokh
Pavel Blinov, Arina Reshetnikova, Aleksandr Nesterov, Galina Zubkova, Vladimir Kokh
RuMedBench: A Russian Medical Language Understanding Benchmark
11 pages; published in the proceedings of the 20th International Conference on Artificial Intelligence in Medicine, Halifax, Canada; code available at https://github.com/pavel-blinov/RuMedBench
null
10.1007/978-3-031-09342-5_38
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper describes an open Russian medical language understanding benchmark covering several task types (classification, question answering, natural language inference, named entity recognition) on a number of novel text sets. Given the sensitive nature of data in healthcare, such a benchmark partially addresses the absence of open Russian medical datasets. We prepare unified labeling formats, data splits, and evaluation metrics for the new tasks. The remaining tasks come from existing datasets with a few modifications. A single-number metric expresses a model's ability to cope with the benchmark. Moreover, we implement several baseline models, from simple ones to neural networks with transformer architecture, and release the code. Expectedly, the more advanced models yield better performance, but even a simple model is enough for a decent result in some tasks. Furthermore, for all tasks, we provide a human evaluation. Interestingly, the models outperform humans in the large-scale classification tasks. However, the advantage of natural intelligence remains in the tasks requiring more knowledge and reasoning.
[ { "version": "v1", "created": "Mon, 17 Jan 2022 16:23:33 GMT" }, { "version": "v2", "created": "Tue, 24 May 2022 12:39:23 GMT" } ]
2022-07-14T00:00:00
[ [ "Blinov", "Pavel", "" ], [ "Reshetnikova", "Arina", "" ], [ "Nesterov", "Aleksandr", "" ], [ "Zubkova", "Galina", "" ], [ "Kokh", "Vladimir", "" ] ]
new_dataset
0.999696
2204.08532
Marcella Cornia
Davide Morelli, Matteo Fincato, Marcella Cornia, Federico Landi, Fabio Cesari, Rita Cucchiara
Dress Code: High-Resolution Multi-Category Virtual Try-On
ECCV 2022 - Video Demo: https://www.youtube.com/watch?v=qr6TW3uTHG4
null
null
null
cs.CV cs.AI cs.GR cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image-based virtual try-on strives to transfer the appearance of a clothing item onto the image of a target person. Prior work focuses mainly on upper-body clothes (e.g. t-shirts, shirts, and tops) and neglects full-body or lower-body items. This shortcoming arises from a main factor: current publicly available datasets for image-based virtual try-on do not account for this variety, thus limiting progress in the field. To address this deficiency, we introduce Dress Code, which contains images of multi-category clothes. Dress Code is more than 3x larger than publicly available datasets for image-based virtual try-on and features high-resolution paired images (1024x768) with front-view, full-body reference models. To generate HD try-on images with high visual quality and rich in details, we propose to learn fine-grained discriminating features. Specifically, we leverage a semantic-aware discriminator that makes predictions at pixel-level instead of image- or patch-level. Extensive experimental evaluation demonstrates that the proposed approach surpasses the baselines and state-of-the-art competitors in terms of visual quality and quantitative results. The Dress Code dataset is publicly available at https://github.com/aimagelab/dress-code.
[ { "version": "v1", "created": "Mon, 18 Apr 2022 19:31:49 GMT" }, { "version": "v2", "created": "Wed, 13 Jul 2022 12:47:00 GMT" } ]
2022-07-14T00:00:00
[ [ "Morelli", "Davide", "" ], [ "Fincato", "Matteo", "" ], [ "Cornia", "Marcella", "" ], [ "Landi", "Federico", "" ], [ "Cesari", "Fabio", "" ], [ "Cucchiara", "Rita", "" ] ]
new_dataset
0.999803
2204.13879
Ben Burgess-Limerick
Ben Burgess-Limerick, Chris Lehnert, Jurgen Leitner, Peter Corke
DGBench: An Open-Source, Reproducible Benchmark for Dynamic Grasping
Dynamic Grasping Benchmark available: https://github.com/BenBurgessLimerick/DGBench
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
This paper introduces DGBench, a fully reproducible open-source testing system to enable benchmarking of dynamic grasping in environments with unpredictable relative motion between robot and object. We use the proposed benchmark to compare several visual perception arrangements. Traditional perception systems developed for static grasping are unable to provide feedback during the final phase of a grasp due to sensor minimum range, occlusion, and a limited field of view. A multi-camera eye-in-hand perception system is presented that has advantages over commonly used camera configurations. We quantitatively evaluate the performance on a real robot with an image-based visual servoing grasp controller and show a significantly improved success rate on a dynamic grasping task.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 04:37:18 GMT" }, { "version": "v2", "created": "Wed, 13 Jul 2022 05:21:56 GMT" } ]
2022-07-14T00:00:00
[ [ "Burgess-Limerick", "Ben", "" ], [ "Lehnert", "Chris", "" ], [ "Leitner", "Jurgen", "" ], [ "Corke", "Peter", "" ] ]
new_dataset
0.98464
2206.04460
Julian Tritscher
Julian Tritscher, Fabian Gwinner, Daniel Schl\"or, Anna Krause, Andreas Hotho
Open ERP System Data For Occupational Fraud Detection
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent estimates report that companies lose 5% of their revenue to occupational fraud. Since most medium-sized and large companies employ Enterprise Resource Planning (ERP) systems to track vast amounts of information regarding their business processes, researchers have in the past shown interest in automatically detecting fraud through ERP system data. Current research in this area, however, is hindered by the fact that ERP system data is not publicly available for the development and comparison of fraud detection methods. We therefore endeavour to generate public ERP system data that includes both normal business operation and fraud. We propose a strategy for generating ERP system data through a serious game, model a variety of fraud scenarios in cooperation with auditing experts, and generate data from a simulated make-to-stock production company with multiple research participants. We aggregate the generated data into ready-to-use datasets for fraud detection in ERP systems, and supply both the raw and aggregated data to the general public to allow for open development and comparison of fraud detection approaches on ERP system data.
[ { "version": "v1", "created": "Thu, 9 Jun 2022 12:38:29 GMT" }, { "version": "v2", "created": "Fri, 10 Jun 2022 13:04:56 GMT" }, { "version": "v3", "created": "Wed, 13 Jul 2022 07:51:02 GMT" } ]
2022-07-14T00:00:00
[ [ "Tritscher", "Julian", "" ], [ "Gwinner", "Fabian", "" ], [ "Schlör", "Daniel", "" ], [ "Krause", "Anna", "" ], [ "Hotho", "Andreas", "" ] ]
new_dataset
0.986711
2206.05728
Linh K\"astner
Linh K\"astner, Teham Bhuiyan, Tuan Anh Le, Elias Treis, Johannes Cox, Boris Meinardus, Jacek Kmiecik, Reyk Carstens, Duc Pichel, Bassel Fatloun, Niloufar Khorsandi, Jens Lambrecht
Arena-Bench: A Benchmarking Suite for Obstacle Avoidance Approaches in Highly Dynamic Environments
Robotics and Automation Letters (RA-L), 2022, 8 pages, 6 figures
null
10.1109/LRA.2022.3190086
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
The ability to autonomously navigate safely, especially within dynamic environments, is paramount for mobile robotics. In recent years, deep reinforcement learning (DRL) approaches have shown superior performance in dynamic obstacle avoidance. However, these learning-based approaches are often developed in specially designed simulation environments and are hard to test against conventional planning approaches. Furthermore, the integration and deployment of these approaches into real robotic platforms are not yet completely solved. In this paper, we present Arena-bench, a benchmark suite to train, test, and evaluate navigation planners on different robotic platforms within 3D environments. It provides tools to design and generate highly dynamic evaluation worlds, scenarios, and tasks for autonomous navigation and is fully integrated into the robot operating system. To demonstrate the functionalities of our suite, we trained a DRL agent on our platform and compared it against a variety of existing model-based and learning-based navigation approaches on a variety of relevant metrics. Finally, we deployed the approaches on real robots and demonstrated the reproducibility of the results. The code is publicly available at github.com/ignc-research/arena-bench.
[ { "version": "v1", "created": "Sun, 12 Jun 2022 13:00:00 GMT" }, { "version": "v2", "created": "Mon, 11 Jul 2022 10:19:00 GMT" } ]
2022-07-14T00:00:00
[ [ "Kästner", "Linh", "" ], [ "Bhuiyan", "Teham", "" ], [ "Le", "Tuan Anh", "" ], [ "Treis", "Elias", "" ], [ "Cox", "Johannes", "" ], [ "Meinardus", "Boris", "" ], [ "Kmiecik", "Jacek", "" ], [ "Carstens", "Reyk", "" ], [ "Pichel", "Duc", "" ], [ "Fatloun", "Bassel", "" ], [ "Khorsandi", "Niloufar", "" ], [ "Lambrecht", "Jens", "" ] ]
new_dataset
0.999375