Columns (name, dtype, observed range):

id              stringlengths   9 to 10
submitter       stringlengths   2 to 52
authors         stringlengths   4 to 6.51k
title           stringlengths   4 to 246
comments        stringlengths   1 to 523
journal-ref     stringlengths   4 to 345
doi             stringlengths   11 to 120
report-no       stringlengths   2 to 243
categories      stringlengths   5 to 98
license         stringclasses   9 values
abstract        stringlengths   33 to 3.33k
versions        list
update_date     timestamp[s]
authors_parsed  list
prediction      stringclasses   1 value
probability     float64         0.95 to 1
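The columns above describe one record per arXiv submission together with a model prediction. Below is a minimal sketch of loading and filtering such a dump, assuming it is stored locally as JSON Lines; the file name and storage format are assumptions, and only the column names and value ranges come from the schema above.

```python
import pandas as pd

# Hypothetical local copy of the dump as JSON Lines; the file name and format
# are assumptions -- only the column names and dtypes come from the schema.
df = pd.read_json("arxiv_new_dataset_predictions.jsonl", lines=True)

# Keep only high-confidence "new_dataset" predictions, using the prediction
# and probability columns described in the schema.
confident = df[(df["prediction"] == "new_dataset") & (df["probability"] >= 0.99)]
print(confident[["id", "title", "categories", "update_date"]].head())
```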
2206.01094
Qichao Ying
Yifei Wang, Qichao Ying, Zhenxing Qian, Sheng Li and Xinpeng Zhang
A DTCWT-SVD Based Video Watermarking resistant to frame rate conversion
null
null
null
null
cs.MM cs.CV
http://creativecommons.org/licenses/by/4.0/
Videos can be easily tampered with, copied, and redistributed by attackers for illegal and monetary usage. Such behaviors severely jeopardize the interest of content owners. Despite huge efforts made in digital video watermarking for copyright protection, typical distortions in video transmission, including signal attacks, geometric attacks, and temporal synchronization attacks, can still easily erase the embedded signal. Among them, temporal synchronization attacks, which include frame dropping, frame insertion, and frame rate conversion, are among the most prevalent. To address this issue, we present a new video watermarking scheme based on the joint Dual-Tree Complex Wavelet Transform (DTCWT) and Singular Value Decomposition (SVD), which is resistant to frame rate conversion. We first extract a set of candidate coefficients by applying SVD after the DTCWT. Then, we simulate the watermark embedding by adjusting the shape of the candidate coefficients. Finally, we perform group-level watermarking that includes moderate temporal redundancy to resist temporal desynchronization attacks. Extensive experimental results show that the proposed scheme is more resilient to temporal desynchronization attacks and performs better than existing blind video watermarking schemes.
[ { "version": "v1", "created": "Thu, 2 Jun 2022 15:20:52 GMT" } ]
2022-06-03T00:00:00
[ [ "Wang", "Yifei", "" ], [ "Ying", "Qichao", "" ], [ "Qian", "Zhenxing", "" ], [ "Li", "Sheng", "" ], [ "Zhang", "Xinpeng", "" ] ]
new_dataset
0.994924
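For illustration, the record above (arXiv 2206.01094) can be handled field by field. The sketch below copies a few of its values into a plain Python dict (the dict representation is an assumption about how a row is materialised; the values themselves are taken from the record) and shows how the list-valued columns authors_parsed and versions can be unpacked.

```python
from email.utils import parsedate_to_datetime

# A few fields of the record above (arXiv 2206.01094), copied into a plain
# dict; representing a row this way is an assumption.
row = {
    "id": "2206.01094",
    "title": "A DTCWT-SVD Based Video Watermarking resistant to frame rate conversion",
    "categories": "cs.MM cs.CV",
    "versions": [{"version": "v1", "created": "Thu, 2 Jun 2022 15:20:52 GMT"}],
    "authors_parsed": [["Wang", "Yifei", ""], ["Ying", "Qichao", ""],
                       ["Qian", "Zhenxing", ""], ["Li", "Sheng", ""],
                       ["Zhang", "Xinpeng", ""]],
    "prediction": "new_dataset",
    "probability": 0.994924,
}

# authors_parsed stores [surname, given name, suffix] triples; rebuild
# display names from them.
authors = [" ".join(part for part in (given, surname, suffix) if part)
           for surname, given, suffix in row["authors_parsed"]]
print(authors)  # ['Yifei Wang', 'Qichao Ying', ...]

# The "created" strings in versions use the RFC 2822 date format, so the
# standard library can parse them directly.
submitted = parsedate_to_datetime(row["versions"][0]["created"])
print(submitted.isoformat())  # 2022-06-02T15:20:52+00:00
```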
2206.01146
R J Cintra
H. P. L. Arjuna Madanayake, R. J. Cintra, V. S. Dimitrov, L. Bruton
Block-Parallel Systolic-Array Architecture for 2-D NTT-based Fragile Watermark Embedding
11 pages, 4 figures
Parallel Processing Letters, vol. 22, no. 03, 1250009, 2012
10.1142/S0129626412500090
null
cs.MM eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Number-theoretic transforms (NTTs) have been applied in the fragile watermarking of digital images. A block-parallel systolic-array architecture is proposed for watermarking based on the 2-D special Hartley NTT (HNTT). The proposed core employs two 2-D special HNTT hardware cores, each using digital arithmetic over $\mathrm{GF}(3)$, and processes $4\times4$ blocks of pixels in parallel every clock cycle. Prototypes are operational on a Xilinx Sx35-10ff668 FPGA device. The maximum estimated throughput of the FPGA circuit is 100 million $4\times4$ HNTT fragile watermarked blocks per second, when clocked at 100 MHz. Potential applications exist in high-traffic back-end servers dealing with large amounts of protected digital images requiring authentication, in remote-sensing for high-security surveillance applications, in real-time video processing of information of a sensitive nature or matters of national security, in video/photographic content management of corporate clients, in authenticating multimedia for the entertainment industry, in the authentication of electronic evidence material, and in real-time news streaming.
[ { "version": "v1", "created": "Thu, 2 Jun 2022 16:52:54 GMT" } ]
2022-06-03T00:00:00
[ [ "Madanayake", "H. P. L. Arjuna", "" ], [ "Cintra", "R. J.", "" ], [ "Dimitrov", "V. S.", "" ], [ "Bruton", "L.", "" ] ]
new_dataset
0.998493
2206.01153
Ruoyi Du
Ruoyi Du, Wenqing Yu, Heqing Wang, Dongliang Chang, Ting-En Lin, Yongbin Li, Zhanyu Ma
Multi-View Active Fine-Grained Recognition
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
As fine-grained visual classification (FGVC) has been developed for decades, related works have exposed a key direction -- finding discriminative local regions and revealing subtle differences. However, unlike identifying visual content within static images, recognizing objects in the real physical world means that discriminative information is not only present within seen local regions but also hidden in other unseen perspectives. In other words, in addition to focusing on the distinguishable parts of the whole, for efficient and accurate recognition it is necessary to infer the key perspective with a few glances, e.g., people may recognize a "Benz AMG GT" with a glance at its front and then know that taking a look at its exhaust pipe can help to tell which year's model it is. In this paper, back to reality, we put forward the problem of active fine-grained recognition (AFGR) and complete this study in three steps: (i) a hierarchical, multi-view, fine-grained vehicle dataset is collected as the testbed, (ii) a simple experiment is designed to verify that different perspectives contribute differently to FGVC and that different categories have different discriminative perspectives, (iii) a policy-gradient-based framework is adopted to achieve efficient recognition with active view selection. Comprehensive experiments demonstrate that the proposed method delivers a better performance-efficiency trade-off than previous FGVC methods and advanced neural networks.
[ { "version": "v1", "created": "Thu, 2 Jun 2022 17:12:14 GMT" } ]
2022-06-03T00:00:00
[ [ "Du", "Ruoyi", "" ], [ "Yu", "Wenqing", "" ], [ "Wang", "Heqing", "" ], [ "Chang", "Dongliang", "" ], [ "Lin", "Ting-En", "" ], [ "Li", "Yongbin", "" ], [ "Ma", "Zhanyu", "" ] ]
new_dataset
0.987328
2108.07467
Chiranjibi Sitaula
Chiranjibi Sitaula and Jinyuan He and Archana Priyadarshi and Mark Tracy and Omid Kavehei and Murray Hinder and Anusha Withana and Alistair McEwan and Faezeh Marzbanrad
Neonatal Bowel Sound Detection Using Convolutional Neural Network and Laplace Hidden Semi-Markov Model
Published in IEEE/ACM Transactions on Audio Speech and Language Processing journal
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2022
10.1109/TASLP.2022.3178225
null
cs.SD cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Abdominal auscultation is a convenient, safe and inexpensive method to assess bowel conditions, which is essential in neonatal care. It helps early detection of neonatal bowel dysfunctions and allows timely intervention. This paper presents a neonatal bowel sound detection method to assist the auscultation. Specifically, a Convolutional Neural Network (CNN) is proposed to classify peristalsis and non-peristalsis sounds. The classification is then optimized using a Laplace Hidden Semi-Markov Model (HSMM). The proposed method is validated on abdominal sounds from 49 newborn infants admitted to our tertiary Neonatal Intensive Care Unit (NICU). The results show that the method can effectively detect bowel sounds, with an accuracy of 89.81% and an area under the curve (AUC) score of 83.96%, outperforming 13 baseline methods. Furthermore, the proposed Laplace HSMM refinement strategy is proven capable of enhancing other bowel sound detection models. The outcomes of this work have the potential to facilitate future telehealth applications for neonatal care. The source code of our work can be found at: https://bitbucket.org/chirudeakin/neonatal-bowel-sound-classification/
[ { "version": "v1", "created": "Tue, 17 Aug 2021 06:50:17 GMT" }, { "version": "v2", "created": "Thu, 14 Apr 2022 08:59:46 GMT" }, { "version": "v3", "created": "Wed, 1 Jun 2022 01:38:59 GMT" } ]
2022-06-02T00:00:00
[ [ "Sitaula", "Chiranjibi", "" ], [ "He", "Jinyuan", "" ], [ "Priyadarshi", "Archana", "" ], [ "Tracy", "Mark", "" ], [ "Kavehei", "Omid", "" ], [ "Hinder", "Murray", "" ], [ "Withana", "Anusha", "" ], [ "McEwan", "Alistair", "" ], [ "Marzbanrad", "Faezeh", "" ] ]
new_dataset
0.953092
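The record above (arXiv 2108.07467) is an example of an entry whose versions list carries several revisions. Here is a small sketch, under the same assumptions as before, of measuring the revision span from those timestamps.

```python
from email.utils import parsedate_to_datetime

# Version history copied from the record above (arXiv 2108.07467).
versions = [
    {"version": "v1", "created": "Tue, 17 Aug 2021 06:50:17 GMT"},
    {"version": "v2", "created": "Thu, 14 Apr 2022 08:59:46 GMT"},
    {"version": "v3", "created": "Wed, 1 Jun 2022 01:38:59 GMT"},
]

# Parse the RFC 2822 timestamps and compute the time between first and last revision.
timestamps = [parsedate_to_datetime(v["created"]) for v in versions]
span = max(timestamps) - min(timestamps)
print(f"{len(versions)} versions over {span.days} days")
```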
2108.07689
Raju Gottumukkala
Seyedmajid Hosseini, Satya Katragadda, Ravi Teja Bhupatiraju, Ziad Ashkar, Christoph W. Borst, Kenneth Cochran, Raju Gottumukkala
A multimodal sensor dataset for continuous stress detection of nurses in a hospital
14 pages, 9 images
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-nd/4.0/
Advances in wearable technologies provide the opportunity to monitor many physiological variables continuously. Stress detection has gained increased attention in recent years, mainly because early stress detection can help individuals better manage health to minimize the negative impacts of long-term stress exposure. This paper provides a unique stress detection dataset created in a natural working environment in a hospital. This dataset is a collection of biometric data of nurses during the COVID-19 outbreak. Studying stress in a work environment is complex due to many social, cultural, and psychological factors in dealing with stressful conditions. Therefore, we captured both the physiological data and associated context pertaining to the stress events. We monitored specific physiological variables such as electrodermal activity, heart rate, and skin temperature of the nurse subjects. A periodic smartphone-administered survey also captured the contributing factors for the detected stress events. A database containing the signals, stress events, and survey responses is publicly available on Dryad.
[ { "version": "v1", "created": "Sun, 25 Jul 2021 22:24:25 GMT" }, { "version": "v2", "created": "Wed, 1 Jun 2022 11:50:32 GMT" } ]
2022-06-02T00:00:00
[ [ "Hosseini", "Seyedmajid", "" ], [ "Katragadda", "Satya", "" ], [ "Bhupatiraju", "Ravi Teja", "" ], [ "Ashkar", "Ziad", "" ], [ "Borst", "Christoph W.", "" ], [ "Cochran", "Kenneth", "" ], [ "Gottumukkala", "Raju", "" ] ]
new_dataset
0.999743
2111.00282
Édouard Bonnet
Édouard Bonnet, Eun Jung Kim, Amadeus Reinald, Stéphan Thomassé
Twin-width VI: the lens of contraction sequences
27 pages, 3 figures
null
null
null
cs.DS cs.DM cs.LO math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A contraction sequence of a graph consists of iteratively merging two of its vertices until only one vertex remains. The recently introduced twin-width graph invariant is based on contraction sequences. More precisely, if one puts red edges between two vertices representing non-homogeneous subsets, the twin-width is the minimum integer $d$ such that a contraction sequence keeps red degree at most $d$. By changing the condition imposed on the trigraphs (i.e., graphs with some edges being red) and possibly slightly tweaking the notion of contractions, we show how to characterize the well-established bounded rank-width, tree-width, linear rank-width, path-width, and proper minor-closed classes by means of contraction sequences. As an application we give a transparent alternative proof of the celebrated Courcelle's theorem (actually of its generalization by Courcelle, Makowsky, and Rotics), that MSO$_2$ (resp. MSO$_1$) model checking on graphs with bounded tree-width (resp. bounded rank-width) is fixed-parameter tractable in the size of the input sentence. We then explore new avenues along the general theme of contraction sequences both in order to refine the landscape between bounded tree-width and bounded twin-width (via spanning twin-width) and to capture more general classes than bounded twin-width. To this end, we define an oriented version of twin-width, where appearing red edges are oriented away from the newly contracted vertex, and the mere red out-degree should remain bounded. Surprisingly, classes of bounded oriented twin-width coincide with those of bounded twin-width. Finally we examine, from an algorithmic standpoint, the concept of partial contraction sequences, where, instead of terminating on a single-vertex graph, the sequence ends when reaching a particular target class.
[ { "version": "v1", "created": "Sat, 30 Oct 2021 16:28:03 GMT" }, { "version": "v2", "created": "Tue, 31 May 2022 21:49:10 GMT" } ]
2022-06-02T00:00:00
[ [ "Bonnet", "Édouard", "" ], [ "Kim", "Eun Jung", "" ], [ "Reinald", "Amadeus", "" ], [ "Thomassé", "Stéphan", "" ] ]
new_dataset
0.999804
2111.05463
Dimitrios Antoniadis
Dimitris Antoniadis, Andrea Mifsud, Peilong Feng, Timothy G. Constandinou
An Open-Source RRAM Compiler
Final Version of NEWCAS 2022. 5 pages
null
null
null
cs.ET
http://creativecommons.org/licenses/by/4.0/
Memory compilers are necessary tools to boost the design procedure of digital circuits. However, only a few are available to academia. Resistive Random Access Memory (RRAM) is characterised by high density, high speed, and non-volatility, and is a potential candidate for future digital memories. To the best of the authors' knowledge, this paper presents the first open-source RRAM compiler for automatic memory generation, including its peripheral circuits, verification and timing characterisation. The RRAM compiler is written in the Cadence SKILL programming language and is integrated in the Cadence environment. The layout verification procedure takes place in the Siemens Mentor Calibre tool. The technology used by the compiler is TSMC 180 nm. This paper analyses the novel results of a plethora of M x N RRAMs generated by the compiler, up to M = 128, N = 64 and word size B = 16 bits, for a clock frequency of 12.5 MHz. Finally, the compiler achieves a density of up to 0.024 Mb/mm².
[ { "version": "v1", "created": "Wed, 10 Nov 2021 00:10:42 GMT" }, { "version": "v2", "created": "Thu, 3 Feb 2022 20:14:14 GMT" }, { "version": "v3", "created": "Thu, 17 Feb 2022 22:31:12 GMT" }, { "version": "v4", "created": "Tue, 31 May 2022 21:09:14 GMT" } ]
2022-06-02T00:00:00
[ [ "Antoniadis", "Dimitris", "" ], [ "Mifsud", "Andrea", "" ], [ "Feng", "Peilong", "" ], [ "Constandinou", "Timothy G.", "" ] ]
new_dataset
0.999197
2205.05883
Damla Senol Cali
Damla Senol Cali, Konstantinos Kanellopoulos, Joel Lindegger, Zülal Bingöl, Gurpreet S. Kalsi, Ziyi Zuo, Can Firtina, Meryem Banu Cavlak, Jeremie Kim, Nika Mansouri Ghiasi, Gagandeep Singh, Juan Gómez-Luna, Nour Almadhoun Alserr, Mohammed Alser, Sreenivas Subramoney, Can Alkan, Saugata Ghose, Onur Mutlu
SeGraM: A Universal Hardware Accelerator for Genomic Sequence-to-Graph and Sequence-to-Sequence Mapping
To appear in ISCA'22
null
10.1145/3470496.3527436
null
cs.AR q-bio.GN
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A critical step of genome sequence analysis is the mapping of sequenced DNA fragments (i.e., reads) collected from an individual to a known linear reference genome sequence (i.e., sequence-to-sequence mapping). Recent works replace the linear reference sequence with a graph-based representation of the reference genome, which captures the genetic variations and diversity across many individuals in a population. Mapping reads to the graph-based reference genome (i.e., sequence-to-graph mapping) results in notable quality improvements in genome analysis. Unfortunately, while sequence-to-sequence mapping is well studied with many available tools and accelerators, sequence-to-graph mapping is a more difficult computational problem, with a much smaller number of practical software tools currently available. We analyze two state-of-the-art sequence-to-graph mapping tools and reveal four key issues. We find that there is a pressing need to have a specialized, high-performance, scalable, and low-cost algorithm/hardware co-design that alleviates bottlenecks in both the seeding and alignment steps of sequence-to-graph mapping. To this end, we propose SeGraM, a universal algorithm/hardware co-designed genomic mapping accelerator that can effectively and efficiently support both sequence-to-graph mapping and sequence-to-sequence mapping, for both short and long reads. To our knowledge, SeGraM is the first algorithm/hardware co-design for accelerating sequence-to-graph mapping. SeGraM consists of two main components: (1) MinSeed, the first minimizer-based seeding accelerator; and (2) BitAlign, the first bitvector-based sequence-to-graph alignment accelerator. We demonstrate that SeGraM provides significant improvements for multiple steps of the sequence-to-graph and sequence-to-sequence mapping pipelines.
[ { "version": "v1", "created": "Thu, 12 May 2022 05:27:26 GMT" }, { "version": "v2", "created": "Tue, 31 May 2022 18:29:57 GMT" } ]
2022-06-02T00:00:00
[ [ "Cali", "Damla Senol", "" ], [ "Kanellopoulos", "Konstantinos", "" ], [ "Lindegger", "Joel", "" ], [ "Bingöl", "Zülal", "" ], [ "Kalsi", "Gurpreet S.", "" ], [ "Zuo", "Ziyi", "" ], [ "Firtina", "Can", "" ], [ "Cavlak", "Meryem Banu", "" ], [ "Kim", "Jeremie", "" ], [ "Ghiasi", "Nika Mansouri", "" ], [ "Singh", "Gagandeep", "" ], [ "Gómez-Luna", "Juan", "" ], [ "Alserr", "Nour Almadhoun", "" ], [ "Alser", "Mohammed", "" ], [ "Subramoney", "Sreenivas", "" ], [ "Alkan", "Can", "" ], [ "Ghose", "Saugata", "" ], [ "Mutlu", "Onur", "" ] ]
new_dataset
0.999257
2205.06911
Wen Zhang
Wen Zhang, Eric Sheng, Michael Chang, Aurojit Panda, Mooly Sagiv, Scott Shenker
Blockaid: Data Access Policy Enforcement for Web Applications
Extended technical report for OSDI 2022 paper
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
Modern web applications serve large amounts of sensitive user data, access to which is typically governed by data-access policies. Enforcing such policies is crucial to preventing improper data access, and prior work has proposed many enforcement mechanisms. However, these prior methods either alter application semantics or require adopting a new programming model; the former can result in unexpected application behavior, while the latter cannot be used with existing web frameworks. Blockaid is an access-policy enforcement system that preserves application semantics and is compatible with existing web frameworks. It intercepts database queries from the application, attempts to verify that each query is policy-compliant, and blocks queries that are not. It verifies policy compliance using SMT solvers and generalizes and caches previous compliance decisions for better performance. We show that Blockaid supports existing web applications while requiring minimal code changes and adding only modest overheads.
[ { "version": "v1", "created": "Fri, 13 May 2022 22:13:15 GMT" }, { "version": "v2", "created": "Wed, 1 Jun 2022 01:59:02 GMT" } ]
2022-06-02T00:00:00
[ [ "Zhang", "Wen", "" ], [ "Sheng", "Eric", "" ], [ "Chang", "Michael", "" ], [ "Panda", "Aurojit", "" ], [ "Sagiv", "Mooly", "" ], [ "Shenker", "Scott", "" ] ]
new_dataset
0.994182
2205.15951
Nihar Ranjan Sahoo
Sandhya Singh, Prapti Roy, Nihar Sahoo, Niteesh Mallela, Himanshu Gupta, Pushpak Bhattacharyya, Milind Savagaonkar, Nidhi, Roshni Ramnani, Anutosh Maitra, Shubhashis Sengupta
Hollywood Identity Bias Dataset: A Context Oriented Bias Analysis of Movie Dialogues
null
null
null
null
cs.CL cs.CY cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Movies reflect society and also hold power to transform opinions. Social biases and stereotypes present in movies can cause extensive damage due to their reach. These biases are not always demanded by the storyline but can creep in as the author's bias. Movie production houses would prefer to ascertain that any bias present in a script is the story's demand. Today, when deep learning models can achieve human-level accuracy in multiple tasks, having an AI solution to identify the biases present in a script at the writing stage can help them avoid the inconvenience of stalled releases, lawsuits, etc. Since AI solutions are data-intensive and there exists no domain-specific data to address the problem of biases in scripts, we introduce a new dataset of movie scripts that are annotated for identity bias. The dataset contains dialogue turns annotated for (i) bias labels for seven categories, viz., gender, race/ethnicity, religion, age, occupation, LGBTQ, and other, which covers biases like body shaming, personality bias, etc., (ii) labels for sensitivity, stereotype, sentiment, emotion, and emotion intensity, (iii) all labels annotated with context awareness, (iv) target groups and reasons for bias labels, and (v) an expert-driven group-validation process for high-quality annotations. We also report various baseline performances for bias identification and category detection on our dataset.
[ { "version": "v1", "created": "Tue, 31 May 2022 16:49:51 GMT" }, { "version": "v2", "created": "Wed, 1 Jun 2022 05:43:53 GMT" } ]
2022-06-02T00:00:00
[ [ "Singh", "Sandhya", "" ], [ "Roy", "Prapti", "" ], [ "Sahoo", "Nihar", "" ], [ "Mallela", "Niteesh", "" ], [ "Gupta", "Himanshu", "" ], [ "Bhattacharyya", "Pushpak", "" ], [ "Savagaonkar", "Milind", "" ], [ "Nidhi", "", "" ], [ "Ramnani", "Roshni", "" ], [ "Maitra", "Anutosh", "" ], [ "Sengupta", "Shubhashis", "" ] ]
new_dataset
0.998905
2205.15972
Hao Yang
Hao Yang, Yang Xu, Yong Li, Hyun-Deok Choi
K-Detector: Identifying Duplicate Crash Failures in Large-Scale Software Delivery
6 pages, 7 figures, ISSRE 2020
null
10.1109/ISSREW51248.2020.00028
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
After a developer submits code, corresponding test cases are run to ensure the quality of software delivery. Test failures such as crashes, errors, and timeouts can occur during this period. Since it takes time for developers to resolve them, many duplicate failures will happen during this period. In the delivery practice of SAP HANA, crash triage is considered the most time-consuming task. If duplicate crash failures can be automatically identified, the degree of automation will be significantly enhanced. To find such duplicates, we propose a training-based mathematical model that utilizes component information of SAP HANA to achieve better crash similarity comparison. We implement our approach in a tool named Knowledge-based Detector (K-Detector), which is verified on 11,208 samples and achieves an AUC of 0.986. Furthermore, we have deployed K-Detector to the production environment, and statistics show that it can save 97% of human effort in crash triage.
[ { "version": "v1", "created": "Tue, 31 May 2022 17:28:01 GMT" }, { "version": "v2", "created": "Wed, 1 Jun 2022 03:31:40 GMT" } ]
2022-06-02T00:00:00
[ [ "Yang", "Hao", "" ], [ "Xu", "Yang", "" ], [ "Li", "Yong", "" ], [ "Choi", "Hyun-Deok", "" ] ]
new_dataset
0.997703
2206.00092
Fereshteh Shakeri
Fereshteh Shakeri, Malik Boudiaf, Sina Mohammadi, Ivaxi Sheth, Mohammad Havaei, Ismail Ben Ayed, Samira Ebrahimi Kahou
FHIST: A Benchmark for Few-shot Classification of Histological Images
Code available at: https://github.com/mboudiaf/Few-shot-histology
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Few-shot learning has recently attracted wide interest in image classification, but almost all the current public benchmarks are focused on natural images. The few-shot paradigm is highly relevant in medical-imaging applications due to the scarcity of labeled data, as annotations are expensive and require specialized expertise. However, in medical imaging, few-shot learning research is sparse, limited to private data sets, and at an early stage. In particular, the few-shot setting is of high interest in histology due to the diversity and fine granularity of cancer-related tissue classification tasks, and the variety of data-preparation techniques. This paper introduces a highly diversified public benchmark, gathered from various public datasets, for few-shot histology data classification. We build few-shot tasks and base-training data with various tissue types, different levels of domain shifts stemming from various cancer sites, and different class-granularity levels, thereby reflecting realistic scenarios. We evaluate the performances of state-of-the-art few-shot learning methods on our benchmark, and observe that simple fine-tuning and regularization methods achieve better results than the popular meta-learning and episodic-training paradigm. Furthermore, we introduce three scenarios based on the domain shifts between the source and target histology data: near-domain, middle-domain and out-domain. Our experiments display the potential of few-shot learning in histology classification, with state-of-the-art few-shot learning methods approaching the supervised-learning baselines in the near-domain setting. In our out-domain setting, for 5-way 5-shot, the best performing method reaches 60% accuracy. We believe that our work could help in building realistic evaluations and fair comparisons of few-shot learning methods and will further encourage research in the few-shot paradigm.
[ { "version": "v1", "created": "Tue, 31 May 2022 20:03:40 GMT" } ]
2022-06-02T00:00:00
[ [ "Shakeri", "Fereshteh", "" ], [ "Boudiaf", "Malik", "" ], [ "Mohammadi", "Sina", "" ], [ "Sheth", "Ivaxi", "" ], [ "Havaei", "Mohammad", "" ], [ "Ayed", "Ismail Ben", "" ], [ "Kahou", "Samira Ebrahimi", "" ] ]
new_dataset
0.998056
2206.00101
Debopriya Roy Dipta
Debopriya Roy Dipta and Berk Gulmezoglu
MAD-EN: Microarchitectural Attack Detection through System-wide Energy Consumption
null
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Microarchitectural attacks have become a greater threat to hardware security than before, given the increasing diversity of attacks such as Spectre and Meltdown. Vendor patches cannot keep up with the pace of the new threats, which makes dynamic anomaly detection tools more important than before. Unfortunately, previous studies utilize hardware performance counters, which lead to high performance overhead and profile only a limited number of microarchitectural attacks due to the small number of counters that can be profiled concurrently. This renders those detection tools inefficient in real-world scenarios. In this study, we introduce MAD-EN, a dynamic detection tool that leverages system-wide energy consumption traces collected with the generic Intel RAPL tool to detect ongoing anomalies in a system. In our experiments, we show that CNN-based MAD-EN can detect 10 different microarchitectural attacks with a total of 15 variants with the highest F1 score of 0.999, which makes our tool the most generic attack detection tool so far. Moreover, individual attacks can be distinguished with 98% accuracy after an anomaly is detected in a system. We demonstrate that MAD-EN introduces 69.3% less performance overhead compared to performance counter-based detection mechanisms.
[ { "version": "v1", "created": "Tue, 31 May 2022 20:25:21 GMT" } ]
2022-06-02T00:00:00
[ [ "Dipta", "Debopriya Roy", "" ], [ "Gulmezoglu", "Berk", "" ] ]
new_dataset
0.998224
2206.00130
Jaein Lim
Jaein Lim and Panagiotis Tsiotras
CBS-Budget (CBSB): A Complete and Bounded Suboptimal Search for Multi-Agent Path Finding
null
null
null
null
cs.MA cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multi-Agent Path Finding (MAPF) is the problem of finding a collection of collision-free paths for a team of multiple agents while minimizing some global cost, such as the sum of the time travelled by all agents, or the time travelled by the last agent. Conflict Based Search (CBS) is a leading complete and optimal MAPF solver which lazily explores the joint agent state space, using an admissible heuristic joint plan. Such an admissible heuristic joint plan is computed by combining individual shortest paths found without considering inter-agent conflicts, and it becomes gradually more informed as constraints are added to individual agents' path planning problems to avoid discovered conflicts. In this paper, we seek to speed up CBS by finding a more informed heuristic joint plan which is bounded from above. We first propose the budgeted Class-Ordered A* (bCOA*), a novel algorithm that finds the shortest path with a minimal number of conflicts whose length is upper bounded. Then, we propose a novel bounded-cost variant of CBS, called CBS-Budget (CBSB), by using a bCOA* search at the low-level search of CBS and a modified focal search at the high-level search of CBS. We prove that CBSB is complete and bounded-suboptimal. In our numerical experiments, CBSB finds a near-optimal solution for hundreds of agents within a fraction of a second. CBSB shows state-of-the-art performance, comparable to Explicit Estimation CBS (EECBS), an enhanced recent version of CBS. On the other hand, CBSB is easier to implement than EECBS, since only two priority queues at the high-level search are needed, as in Enhanced CBS (ECBS).
[ { "version": "v1", "created": "Tue, 31 May 2022 22:22:33 GMT" } ]
2022-06-02T00:00:00
[ [ "Lim", "Jaein", "" ], [ "Tsiotras", "Panagiotis", "" ] ]
new_dataset
0.972493
2206.00142
Artem Zholus
Artem Zholus, Alexey Skrynnik, Shrestha Mohanty, Zoya Volovikova, Julia Kiseleva, Artur Szlam, Marc-Alexandre Coté, Aleksandr I. Panov
IGLU Gridworld: Simple and Fast Environment for Embodied Dialog Agents
null
null
null
null
cs.LG cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present the IGLU Gridworld: a reinforcement learning environment for building and evaluating language-conditioned embodied agents in a scalable way. The environment features visual agent embodiment, interactive learning through collaboration, language-conditioned RL, and a combinatorially hard task space (3D block building).
[ { "version": "v1", "created": "Tue, 31 May 2022 23:08:22 GMT" } ]
2022-06-02T00:00:00
[ [ "Zholus", "Artem", "" ], [ "Skrynnik", "Alexey", "" ], [ "Mohanty", "Shrestha", "" ], [ "Volovikova", "Zoya", "" ], [ "Kiseleva", "Julia", "" ], [ "Szlam", "Artur", "" ], [ "Coté", "Marc-Alexandre", "" ], [ "Panov", "Aleksandr I.", "" ] ]
new_dataset
0.996969
2206.00251
Guillermo Pérez
Swen Jacobs, Guillermo A. Perez, Remco Abraham, Veronique Bruyere, Michael Cadilhac, Maximilien Colange, Charly Delfosse, Tom van Dijk, Alexandre Duret-Lutz, Peter Faymonville, Bernd Finkbeiner, Ayrat Khalimov, Felix Klein, Michael Luttenberger, Klara Meyer, Thibaud Michaud, Adrien Pommellet, Florian Renkin, Philipp Schlehuber-Caissier, Mouhammad Sakr, Salomon Sickert, Gaetan Staquet, Clement Tamines, Leander Tentrup, Adam Walker
The Reactive Synthesis Competition (SYNTCOMP): 2018-2021
null
null
null
null
cs.LO
http://creativecommons.org/licenses/by/4.0/
We report on the last four editions of the reactive synthesis competition (SYNTCOMP 2018-2021). We briefly describe the evaluation scheme and the experimental setup of SYNTCOMP. Then, we introduce new benchmark classes that have been added to the SYNTCOMP library and give an overview of the participants of SYNTCOMP. Finally, we present and analyze the results of our experimental evaluations, including a ranking of tools with respect to quantity and quality of solutions.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 06:28:01 GMT" } ]
2022-06-02T00:00:00
[ [ "Jacobs", "Swen", "" ], [ "Perez", "Guillermo A.", "" ], [ "Abraham", "Remco", "" ], [ "Bruyere", "Veronique", "" ], [ "Cadilhac", "Michael", "" ], [ "Colange", "Maximilien", "" ], [ "Delfosse", "Charly", "" ], [ "van Dijk", "Tom", "" ], [ "Duret-Lutz", "Alexandre", "" ], [ "Faymonville", "Peter", "" ], [ "Finkbeiner", "Bernd", "" ], [ "Khalimov", "Ayrat", "" ], [ "Klein", "Felix", "" ], [ "Luttenberger", "Michael", "" ], [ "Meyer", "Klara", "" ], [ "Michaud", "Thibaud", "" ], [ "Pommellet", "Adrien", "" ], [ "Renkin", "Florian", "" ], [ "Schlehuber-Caissier", "Philipp", "" ], [ "Sakr", "Mouhammad", "" ], [ "Sickert", "Salomon", "" ], [ "Staquet", "Gaetan", "" ], [ "Tamines", "Clement", "" ], [ "Tentrup", "Leander", "" ], [ "Walker", "Adam", "" ] ]
new_dataset
0.993817
2206.00266
Hyungtae Lim
Dong-Uk Seo, Hyungtae Lim, Seungjae Lee, Hyun Myung
PaGO-LOAM: Robust Ground-Optimized LiDAR Odometry
7 pages, 5 figures, conference
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Numerous researchers have conducted studies to achieve fast and robust ground-optimized LiDAR odometry methods for terrestrial mobile platforms. In particular, ground-optimized LiDAR odometry usually employs ground segmentation as a preprocessing method. This is because most of the points in a 3D point cloud captured by a 3D LiDAR sensor on a terrestrial platform are from the ground. However, the effect of the performance of ground segmentation on LiDAR odometry has still not been closely examined. In this paper, a robust ground-optimized LiDAR odometry framework is proposed to facilitate studying the effect of ground segmentation on LiDAR SLAM, based on the state-of-the-art (SOTA) method. By using our proposed odometry framework, it is easy and straightforward to test whether ground segmentation algorithms help extract well-described features and thus improve SLAM performance. In addition, by leveraging the SOTA ground segmentation method called Patchwork, which shows robust ground segmentation even in complex and uneven urban environments with little performance perturbation, a novel ground-optimized LiDAR odometry is proposed, called PaGO-LOAM. The methods were tested using the KITTI odometry dataset. PaGO-LOAM shows robust and accurate performance compared with the baseline method. Our code is available at https://github.com/url-kaist/AlterGround-LeGO-LOAM.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 06:50:44 GMT" } ]
2022-06-02T00:00:00
[ [ "Seo", "Dong-Uk", "" ], [ "Lim", "Hyungtae", "" ], [ "Lee", "Seungjae", "" ], [ "Myung", "Hyun", "" ] ]
new_dataset
0.968522
2206.00279
Depeng Liu
Depeng Liu, Lutan Zhao, Pengfei Yang, Bow-Yaw Wang, Rui Hou, Lijun Zhang, Naijun Zhan
Defensive Design of Saturating Counters Based on Differential Privacy
null
null
null
null
cs.CR cs.FL
http://creativecommons.org/licenses/by-nc-nd/4.0/
The saturating counter is the basic module of the dynamic branch predictor, which involves the core technique to improve instruction-level parallelism performance in modern processors. However, most studies focus on the performance improvement and hardware consumption of saturating counters, while ignoring the security problems they may cause. In this paper, we creatively propose to study and design saturating counters from the defense perspective of differential privacy, so that attackers cannot distinguish the states that saturating counters are in and further infer sensitive information. To obtain theoretical guarantees, we use a Markov chain to formalize the attack algorithm applied to the saturating counter, investigate the optimal attack strategy, and calculate the probability of a successful attack. Furthermore, we find that the attacker is able to accurately guess the branch execution of the victim's process with the existing saturating counters. To avoid this, we design a new probabilistic saturating counter, which generalizes the existing conventional and probabilistic saturating counters. The guarantee of differential privacy is applied to deduce the parameters of the new saturating counters so that the security requirement can be satisfied. We also theoretically calculate the misprediction rate when the saturating counter reaches the steady state. The experimental results on testing programs show that the calculated theoretical results agree with the experimental performance. Compared with the existing conventional and probabilistic saturating counters, when the parameters of our designed models are selected appropriately, the new saturating counters can not only ensure similar operational performance, but also establish a strict security guarantee.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 07:19:31 GMT" } ]
2022-06-02T00:00:00
[ [ "Liu", "Depeng", "" ], [ "Zhao", "Lutan", "" ], [ "Yang", "Pengfei", "" ], [ "Wang", "Bow-Yaw", "" ], [ "Hou", "Rui", "" ], [ "Zhang", "Lijun", "" ], [ "Zhan", "Naijun", "" ] ]
new_dataset
0.996678
2206.00304
Alberto Sanfeliu
J. E. Dominguez-Vidal, Nicolas Rodriguez, Rene Alquezar and Alberto Sanfeliu
Perception-Intention-Action Cycle in Human-Robot Collaborative Tasks
null
null
null
null
cs.RO cs.AI cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this work we argue that in Human-Robot Collaboration (HRC) tasks, the Perception-Action cycle cannot fully explain the collaborative behaviour of the human and robot, and that it has to be extended to a Perception-Intention-Action cycle, where Intention is a key topic. In some cases, an agent's Intention can be perceived or inferred by the other agent, but in others, it has to be explicitly communicated to the other agent to achieve the goal of the HRC task. The Perception-Intention-Action cycle includes three basic functional procedures: Perception-Intention, Situation Awareness and Action. The Perception and the Intention are the input of the Situation Awareness, which evaluates the current situation and projects it into the future situation. The agents receive this information, plan and agree on the actions to be executed, and modify their action roles while performing the HRC task. In this work, we validate the Perception-Intention-Action cycle in a joint object transportation task, modeling the Perception-Intention-Action cycle through a force model which uses real-life and social forces. The perceived world is projected into a force world and the human intention (perceived or informed) is also modelled as a force that acts in the HRC task. Finally, we show that the action roles (master-slave, collaborative, neutral or adversary) are intrinsic to any HRC task and that they appear in the different steps of a collaborative sequence of actions performed during the task.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 08:13:39 GMT" } ]
2022-06-02T00:00:00
[ [ "Dominguez-Vidal", "J. E.", "" ], [ "Rodriguez", "Nicolas", "" ], [ "Alquezar", "Rene", "" ], [ "Sanfeliu", "Alberto", "" ] ]
new_dataset
0.995783
2206.00325
Kun Wang
Yu Fu, Xueyuan Duan, Kun Wang, Bin Li
LDoS attack detection method based on traffic time-frequency characteristics
null
null
null
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Traditional denial-of-service attack detection methods have complex algorithms and high computational overhead, which makes it difficult to meet the demands of online detection; moreover, their experimental environments are mostly simulation platforms, which are difficult to deploy in real network environments. We therefore propose an LDoS attack detection method oriented to real network environments and based on the time-frequency characteristics of traffic data. All the traffic data flowing through the Web server is obtained through an acquisition and storage system, and the detection dataset is constructed using pre-processing; simple features of the flow fragments are used as input, and a deep neural network is used to learn the time-frequency-domain features of normal traffic and generate reconstructed sequences, and LDoS attacks are identified based on the differences between the reconstructed sequences and the input data in the time-frequency domain. The experimental results show that the proposed method can accurately detect the attack features in the flow fragments in a very short time and achieves high detection accuracy for complex and diverse LDoS attacks; since only the statistical features of the packets are used, there is no need to parse the packet data, so the method can be adapted to different network environments.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 08:39:48 GMT" } ]
2022-06-02T00:00:00
[ [ "Fu", "Yu", "" ], [ "Duan", "Xueyuan", "" ], [ "Wang", "Kun", "" ], [ "Li", "Bin", "" ] ]
new_dataset
0.997481
2206.00372
Nauros Romim
Nauros Romim, Mosahed Ahmed, Md. Saiful Islam, Arnab Sen Sharma, Hriteshwar Talukder, Mohammad Ruhul Amin
BD-SHS: A Benchmark Dataset for Learning to Detect Online Bangla Hate Speech in Different Social Contexts
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Social media platforms and online streaming services have spawned a new breed of Hate Speech (HS). Due to the massive amount of user-generated content on these sites, modern machine learning techniques are found to be feasible and cost-effective to tackle this problem. However, linguistically diverse datasets covering different social contexts in which offensive language is typically used are required to train generalizable models. In this paper, we identify the shortcomings of existing Bangla HS datasets and introduce a large manually labeled dataset BD-SHS that includes HS in different social contexts. The labeling criteria were prepared following a hierarchical annotation process, which is the first of its kind in Bangla HS to the best of our knowledge. The dataset includes more than 50,200 offensive comments crawled from online social networking sites and is at least 60% larger than any existing Bangla HS dataset. We present the benchmark result of our dataset by training different NLP models, with the best one achieving an F1-score of 91.0%. In our experiments, we found that a word embedding trained exclusively using 1.47 million comments from social media and streaming sites consistently resulted in better modeling of HS detection in comparison to other pre-trained embeddings. Our dataset and all accompanying code are publicly available at github.com/naurosromim/hate-speech-dataset-for-Bengali-social-media
[ { "version": "v1", "created": "Wed, 1 Jun 2022 10:10:15 GMT" } ]
2022-06-02T00:00:00
[ [ "Romim", "Nauros", "" ], [ "Ahmed", "Mosahed", "" ], [ "Islam", "Md. Saiful", "" ], [ "Sharma", "Arnab Sen", "" ], [ "Talukder", "Hriteshwar", "" ], [ "Amin", "Mohammad Ruhul", "" ] ]
new_dataset
0.999805
2206.00376
Giuseppe Romana
Antonio Restivo, Giuseppe Romana, Marinella Sciortino
String Attractors and Infinite Words
null
null
null
null
cs.FL cs.DS
http://creativecommons.org/licenses/by/4.0/
The notion of string attractor has been introduced in [Kempa and Prezza, 2018] in the context of Data Compression and it represents a set of positions of a finite word in which all of its factors can be "attracted". The smallest size $\gamma^*$ of a string attractor for a finite word is a lower bound for several repetitiveness measures associated with the most common compression schemes, including BWT-based and LZ-based compressors. The combinatorial properties of the measure $\gamma^*$ have been studied in [Mantaci et al., 2021]. Very recently, a complexity measure, called string attractor profile function, has been introduced for infinite words, by evaluating $\gamma^*$ on each prefix. Such a measure has been studied for automatic sequences and linearly recurrent infinite words [Schaeffer and Shallit, 2021]. In this paper, we study the relationship between such a complexity measure and other well-known combinatorial notions related to repetitiveness in the context of infinite words, such as the factor complexity and the recurrence. Furthermore, we introduce new string attractor-based complexity measures, in which the structure and the distribution of positions in a string attractor of the prefixes of infinite words are considered. We show that such measures provide a finer classification of some infinite families of words.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 10:22:59 GMT" } ]
2022-06-02T00:00:00
[ [ "Restivo", "Antonio", "" ], [ "Romana", "Giuseppe", "" ], [ "Sciortino", "Marinella", "" ] ]
new_dataset
0.995543
2206.00437
Miryam de Lhoneux
Heather Lent, Kelechi Ogueji, Miryam de Lhoneux, Orevaoghene Ahia, Anders Søgaard
What a Creole Wants, What a Creole Needs
LREC 2022
null
null
null
cs.CL cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, the natural language processing (NLP) community has given increased attention to the disparity of efforts directed towards high-resource languages over low-resource ones. Efforts to remedy this delta often begin with translations of existing English datasets into other languages. However, this approach ignores that different language communities have different needs. We consider a group of low-resource languages, Creole languages. Creoles are both largely absent from the NLP literature, and also often ignored by society at large due to stigma, despite these languages having sizable and vibrant communities. We demonstrate, through conversations with Creole experts and surveys of Creole-speaking communities, how the things needed from language technology can change dramatically from one language to another, even when the languages are considered to be very similar to each other, as with Creoles. We discuss the prominent themes arising from these conversations, and ultimately demonstrate that useful language technology cannot be built without involving the relevant community.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 12:22:34 GMT" } ]
2022-06-02T00:00:00
[ [ "Lent", "Heather", "" ], [ "Ogueji", "Kelechi", "" ], [ "de Lhoneux", "Miryam", "" ], [ "Ahia", "Orevaoghene", "" ], [ "Søgaard", "Anders", "" ] ]
new_dataset
0.984131
2206.00462
Xiuxin Tang
Xiuxin Tang and Rong Luo
MDS and AMDS symbol-pair codes constructed from repeated-root codes
22 pages. arXiv admin note: substantial text overlap with arXiv:2204.02670
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by-sa/4.0/
Symbol-pair codes introduced by Cassuto and Blaum in 2010 are designed to protect against the pair errors in symbol-pair read channels. One of the central themes in symbol-error correction is the construction of maximal distance separable (MDS) symbol-pair codes that possess the largest possible pair-error correcting performance. Based on repeated-root cyclic codes, we construct two classes of MDS symbol-pair codes for more general generator polynomials and also give a new class of almost MDS (AMDS) symbol-pair codes with the length $lp$. In addition, we derive all MDS and AMDS symbol-pair codes with length $3p$, when the degree of the generator polynomials is no more than 10. The main results are obtained by determining the solutions of certain equations over finite fields.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 12:51:47 GMT" } ]
2022-06-02T00:00:00
[ [ "Tang", "Xiuxin", "" ], [ "Luo", "Rong", "" ] ]
new_dataset
0.998852
2206.00491
David Gillsj\"o
David Gillsj\"o, Gabrielle Flood, Kalle {\AA}str\"om
Semantic Room Wireframe Detection from a Single View
Accepted for ICPR2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reconstruction of indoor surfaces with limited texture information or with repeated textures, a situation common in walls and ceilings, may be difficult with a monocular Structure from Motion system. We propose a Semantic Room Wireframe Detection task to predict a Semantic Wireframe from a single perspective image. Such predictions may be used with shape priors to estimate the Room Layout and aid reconstruction. To train and test the proposed algorithm we create a new set of annotations from the simulated Structured3D dataset. We show qualitatively that the SRW-Net handles complex room geometries better than previous Room Layout Estimation algorithms while quantitatively out-performing the baseline in non-semantic Wireframe Detection.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 13:44:50 GMT" } ]
2022-06-02T00:00:00
[ [ "Gillsjö", "David", "" ], [ "Flood", "Gabrielle", "" ], [ "Åström", "Kalle", "" ] ]
new_dataset
0.999314
2206.00524
Tran Khanh Quoc
Khanh Q. Tran and An T. Nguyen and Phu Gia Hoang and Canh Duc Luu and Trong-Hop Do and Kiet Van Nguyen
Vietnamese Hate and Offensive Detection using PhoBERT-CNN and Social Media Streaming Data
null
null
null
null
cs.CL cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Society needs to develop a system to detect hate and offense to build a healthy and safe environment. However, current research in this field still faces four major shortcomings, including deficient pre-processing techniques, indifference to data imbalance issues, modest performance models, and lacking practical applications. This paper focused on developing an intelligent system capable of addressing these shortcomings. Firstly, we proposed an efficient pre-processing technique to clean comments collected from Vietnamese social media. Secondly, a novel hate speech detection (HSD) model, which is the combination of a pre-trained PhoBERT model and a Text-CNN model, was proposed for solving tasks in Vietnamese. Thirdly, EDA techniques are applied to deal with imbalanced data to improve the performance of classification models. Besides, various experiments were conducted as baselines to compare and investigate the proposed model's performance against state-of-the-art methods. The experimental results show that the proposed PhoBERT-CNN model outperforms SOTA methods and achieves F1-scores of 67.46% and 98.45% on two benchmark datasets, ViHSD and HSD-VLSP, respectively. Finally, we also built a streaming HSD application to demonstrate the practicality of our proposed system.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 14:33:25 GMT" } ]
2022-06-02T00:00:00
[ [ "Tran", "Khanh Q.", "" ], [ "Nguyen", "An T.", "" ], [ "Hoang", "Phu Gia", "" ], [ "Luu", "Canh Duc", "" ], [ "Do", "Trong-Hop", "" ], [ "Van Nguyen", "Kiet", "" ] ]
new_dataset
0.960733
2206.00527
Jasmin Breitenstein
Jasmin Breitenstein and Tim Fingscheidt
Amodal Cityscapes: A New Dataset, its Generation, and an Amodal Semantic Segmentation Challenge Baseline
This paper is accepted at IEEE Intelligent Vehicles Symposium 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Amodal perception terms the ability of humans to imagine the entire shapes of occluded objects. This gives humans an advantage to keep track of everything that is going on, especially in crowded situations. Typical perception functions, however, lack amodal perception abilities and are therefore at a disadvantage in situations with occlusions. Complex urban driving scenarios often experience many different types of occlusions and, therefore, amodal perception for automated vehicles is an important task to investigate. In this paper, we consider the task of amodal semantic segmentation and propose a generic way to generate datasets to train amodal semantic segmentation methods. We use this approach to generate an amodal Cityscapes dataset. Moreover, we propose and evaluate a method as baseline on Amodal Cityscapes, showing its applicability for amodal semantic segmentation in automotive environment perception. We provide the means to re-generate this dataset on github.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 14:38:33 GMT" } ]
2022-06-02T00:00:00
[ [ "Breitenstein", "Jasmin", "" ], [ "Fingscheidt", "Tim", "" ] ]
new_dataset
0.99976
2206.00550
Jakob Moosbauer
Manuel Kauers, Jakob Moosbauer
A Normal Form for Matrix Multiplication Schemes
11 pages
null
null
null
cs.CC
http://creativecommons.org/licenses/by/4.0/
Schemes for exact multiplication of small matrices have a large symmetry group. This group defines an equivalence relation on the set of multiplication schemes. There are algorithms to decide whether two schemes are equivalent. However, for a large number of schemes a pairwise equivalence check becomes cumbersome. In this paper we propose an algorithm to compute a normal form of matrix multiplication schemes. This allows us to decide pairwise equivalence of a larger number of schemes efficiently.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 15:04:41 GMT" } ]
2022-06-02T00:00:00
[ [ "Kauers", "Manuel", "" ], [ "Moosbauer", "Jakob", "" ] ]
new_dataset
0.998481
2206.00623
Matthias Jasny
Matthias Jasny, Lasse Thostrup, Tobias Ziegler, Carsten Binnig
P4DB -- The Case for In-Network OLTP (Extended Technical Report)
Extended Technical Report for: P4DB - The Case for In-Network OLTP
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present a new approach for distributed DBMSs called P4DB, which uses a programmable switch to accelerate OLTP workloads. The main idea of P4DB is that it implements a transaction processing engine on top of a P4-programmable switch. The switch can thus act as an accelerator in the network, especially when it is used to store and process hot (contended) tuples directly on the switch. In our experiments, we show that P4DB provides significant benefits compared to traditional DBMS architectures and can achieve a speedup of up to 8x.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 16:48:47 GMT" } ]
2022-06-02T00:00:00
[ [ "Jasny", "Matthias", "" ], [ "Thostrup", "Lasse", "" ], [ "Ziegler", "Tobias", "" ], [ "Binnig", "Carsten", "" ] ]
new_dataset
0.962792
2206.00666
Moses Openja
Moses Openja, Mohammad Mehdi Morovati, Le An, Foutse Khomh, Mouna Abidi
Technical Debts and Faults in Open-source Quantum Software Systems: An Empirical Study
null
null
null
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantum computing is a rapidly growing field attracting the interest of both researchers and software developers. Supported by its numerous open-source tools, developers can now build, test, or run their quantum algorithms. Although the maintenance practices for traditional software systems have been extensively studied, the maintenance of quantum software is still a new field of study but a critical part of ensuring the quality of a whole quantum computing system. In this work, we set out to investigate the distribution and evolution of technical debts in quantum software and their relationship with fault occurrences. Understanding these problems could guide future quantum development and provide maintenance recommendations for the key areas where quantum software developers and researchers should pay more attention. In this paper, we empirically studied 118 open-source quantum projects, which were selected from GitHub. The projects are categorized into 10 categories. We found that the studied quantum software suffers from issues related to code convention violations, error handling, and code design. We also observed a statistically significant correlation between code design, redundant code or code convention violations, and the occurrences of faults in quantum software.
[ { "version": "v1", "created": "Wed, 1 Jun 2022 17:59:54 GMT" } ]
2022-06-02T00:00:00
[ [ "Openja", "Moses", "" ], [ "Morovati", "Mohammad Mehdi", "" ], [ "An", "Le", "" ], [ "Khomh", "Foutse", "" ], [ "Abidi", "Mouna", "" ] ]
new_dataset
0.995848
1812.11214
Eugene Belilovsky
Mathieu Andreux, Tomás Angles, Georgios Exarchakis, Roberto Leonarduzzi, Gaspar Rochette, Louis Thiry, John Zarka, Stéphane Mallat, Joakim andén, Eugene Belilovsky, Joan Bruna, Vincent Lostanlen, Muawiz Chaudhary, Matthew J. Hirn, Edouard Oyallon, Sixin Zhang, Carmine Cella, Michael Eickenberg
Kymatio: Scattering Transforms in Python
null
null
null
null
cs.LG cs.CV cs.SD eess.AS stat.ML
http://creativecommons.org/publicdomain/zero/1.0/
The wavelet scattering transform is an invariant signal representation suitable for many signal processing and machine learning applications. We present the Kymatio software package, an easy-to-use, high-performance Python implementation of the scattering transform in 1D, 2D, and 3D that is compatible with modern deep learning frameworks. All transforms may be executed on a GPU (in addition to CPU), offering a considerable speedup over CPU implementations. The package also has a small memory footprint, resulting in efficient memory usage. The source code, documentation, and examples are available under a BSD license at https://www.kymat.io/
[ { "version": "v1", "created": "Fri, 28 Dec 2018 20:53:29 GMT" }, { "version": "v2", "created": "Sat, 1 Jun 2019 06:00:28 GMT" }, { "version": "v3", "created": "Tue, 31 May 2022 09:46:58 GMT" } ]
2022-06-01T00:00:00
[ [ "Andreux", "Mathieu", "" ], [ "Angles", "Tomás", "" ], [ "Exarchakis", "Georgios", "" ], [ "Leonarduzzi", "Roberto", "" ], [ "Rochette", "Gaspar", "" ], [ "Thiry", "Louis", "" ], [ "Zarka", "John", "" ], [ "Mallat", "Stéphane", "" ], [ "andén", "Joakim", "" ], [ "Belilovsky", "Eugene", "" ], [ "Bruna", "Joan", "" ], [ "Lostanlen", "Vincent", "" ], [ "Chaudhary", "Muawiz", "" ], [ "Hirn", "Matthew J.", "" ], [ "Oyallon", "Edouard", "" ], [ "Zhang", "Sixin", "" ], [ "Cella", "Carmine", "" ], [ "Eickenberg", "Michael", "" ] ]
new_dataset
0.999647
1907.00829
Jesko Hecking-Harbusch
Raven Beutner, Bernd Finkbeiner, Jesko Hecking-Harbusch
Translating Asynchronous Games for Distributed Synthesis (Full Version)
null
null
10.4230/LIPIcs.CONCUR.2019.26
null
cs.LO cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In distributed synthesis, we generate a set of process implementations that, together, accomplish an objective against all possible behaviors of the environment. A lot of recent work has focussed on systems with causal memory, i.e., sets of asynchronous processes that exchange their causal histories upon synchronization. Decidability results for this problem have been stated either in terms of control games, which extend Zielonka's asynchronous automata by partitioning the actions into controllable and uncontrollable, or in terms of Petri games, which extend Petri nets by partitioning the tokens into system and environment players. The precise connection between these two models was so far, however, an open question. In this paper, we provide the first formal connection between control games and Petri games. We establish the equivalence of the two game models based on weak bisimulations between their strategies. For both directions, we show that a game of one type can be translated into an equivalent game of the other type. We provide exponential upper and lower bounds for the translations. Our translations make it possible to transfer and combine decidability results between the two types of games. Exemplarily, we translate decidability in acyclic communication architectures, originally obtained for control games, to Petri games, and decidability in single-process systems, originally obtained for Petri games, to control games.
[ { "version": "v1", "created": "Mon, 1 Jul 2019 14:42:47 GMT" }, { "version": "v2", "created": "Mon, 2 Dec 2019 10:57:17 GMT" } ]
2022-06-01T00:00:00
[ [ "Beutner", "Raven", "" ], [ "Finkbeiner", "Bernd", "" ], [ "Hecking-Harbusch", "Jesko", "" ] ]
new_dataset
0.997805
1911.10038
Matej Ul\v{c}ar
Matej Ul\v{c}ar, Kristiina Vaik, Jessica Lindstr\"om, Milda Dailid\.enait\.e, Marko Robnik-\v{S}ikonja
Multilingual Culture-Independent Word Analogy Datasets
7 pages, LREC2020 conference
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020), pages 4074-4080
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
In text processing, deep neural networks mostly use word embeddings as an input. Embeddings have to ensure that relations between words are reflected through distances in a high-dimensional numeric space. To compare the quality of different text embeddings, typically, we use benchmark datasets. We present a collection of such datasets for the word analogy task in nine languages: Croatian, English, Estonian, Finnish, Latvian, Lithuanian, Russian, Slovenian, and Swedish. We redesigned the original monolingual analogy task to be much more culturally independent and also constructed cross-lingual analogy datasets for the involved languages. We present basic statistics of the created datasets and their initial evaluation using fastText embeddings.
[ { "version": "v1", "created": "Fri, 22 Nov 2019 13:39:06 GMT" }, { "version": "v2", "created": "Fri, 27 Mar 2020 15:32:16 GMT" } ]
2022-06-01T00:00:00
[ [ "Ulčar", "Matej", "" ], [ "Vaik", "Kristiina", "" ], [ "Lindström", "Jessica", "" ], [ "Dailidėnaitė", "Milda", "" ], [ "Robnik-Šikonja", "Marko", "" ] ]
new_dataset
0.998513
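As a sketch of how word analogy benchmarks of this kind are commonly evaluated with fastText-style vectors, the snippet below uses gensim's analogy utilities. The vector file, benchmark file, and the capital-of example quadruple are placeholders, not items taken from the released datasets.

```python
# Illustrative word-analogy evaluation with pretrained word vectors (gensim).
# File names below are hypothetical placeholders.
from gensim.models import KeyedVectors

vectors = KeyedVectors.load_word2vec_format("fasttext_vectors.vec", binary=False)

# An analogy a : b :: c : ? is answered by the vector arithmetic b - a + c.
print(vectors.most_similar(positive=["Tallinn", "Finland"],
                           negative=["Estonia"], topn=1))

# Bulk evaluation of a benchmark file in the standard analogy format
# (": section" headers followed by four-word lines).
accuracy, sections = vectors.evaluate_word_analogies("analogy_benchmark.txt")
print(accuracy)
```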
2009.13619
Ruofei Chen
Ruofei Chen, Stephanie Balzer, Bernardo Toninho
Ferrite: A Judgmental Embedding of Session Types in Rust
null
null
null
null
cs.PL
http://creativecommons.org/licenses/by-sa/4.0/
This paper introduces Ferrite, a shallow embedding of session types in Rust. In contrast to existing session type libraries and embeddings for mainstream languages, Ferrite not only supports linear session types but also shared session types. Shared session types allow sharing (aliasing) of channels while preserving session fidelity (preservation) using type modalities for acquiring and releasing sessions. Ferrite adopts a propositions as types approach and encodes typing derivations as Rust functions, with the proof of successful type-checking manifesting as a Rust program. We provide an evaluation of Ferrite using Servo as a practical example, and demonstrate how safe communication can be achieved in the canvas component using Ferrite.
[ { "version": "v1", "created": "Mon, 28 Sep 2020 20:54:56 GMT" }, { "version": "v2", "created": "Thu, 17 Dec 2020 21:09:58 GMT" }, { "version": "v3", "created": "Thu, 25 Mar 2021 13:56:58 GMT" }, { "version": "v4", "created": "Sun, 26 Sep 2021 17:40:57 GMT" }, { "version": "v5", "created": "Fri, 27 May 2022 09:01:59 GMT" }, { "version": "v6", "created": "Tue, 31 May 2022 07:48:37 GMT" } ]
2022-06-01T00:00:00
[ [ "Chen", "Ruofei", "" ], [ "Balzer", "Stephanie", "" ], [ "Toninho", "Bernardo", "" ] ]
new_dataset
0.998669
2011.00096
Peter Lindner
Martin Grohe, Peter Lindner
Independence in Infinite Probabilistic Databases
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Probabilistic databases (PDBs) model uncertainty in data. The current standard is to view PDBs as finite probability spaces over relational database instances. Since many attributes in typical databases have infinite domains, such as integers, strings, or real numbers, it is often more natural to view PDBs as infinite probability spaces over database instances. In this paper, we lay the mathematical foundations of infinite probabilistic databases. Our focus then is on independence assumptions. Tuple-independent PDBs play a central role in theory and practice of PDBs. Here, we study infinite tuple-independent PDBs as well as related models such as infinite block-independent disjoint PDBs. While the standard model of PDBs focuses on a set-based semantics, we also study tuple-independent PDBs with a bag semantics and independence in PDBs over uncountable fact spaces. We also propose a new approach to PDBs with an open-world assumption, addressing issues raised by Ceylan et al. (Proc. KR 2016) and generalizing their work, which is still rooted in finite tuple-independent PDBs. Moreover, for countable PDBs we propose an approximate query answering algorithm.
[ { "version": "v1", "created": "Fri, 30 Oct 2020 20:34:39 GMT" }, { "version": "v2", "created": "Tue, 1 Feb 2022 21:25:19 GMT" }, { "version": "v3", "created": "Tue, 31 May 2022 09:52:01 GMT" } ]
2022-06-01T00:00:00
[ [ "Grohe", "Martin", "" ], [ "Lindner", "Peter", "" ] ]
new_dataset
0.993641
2012.01813
Johanna Johansen Ms
Johanna Johansen, Tore Pedersen, Simone Fischer-H\"ubner, Christian Johansen, Gerardo Schneider, Arnold Roosendaal, Harald Zwingelberg, Anders Jakob Sivesind, Josef Noll
A Multidisciplinary Definition of Privacy Labels: The Story of Princess Privacy and the Seven Helpers
29 pages, 6 figures
Information and Computer Security, Vol. 30, No. 3, (2022) pp. 452-469
10.1108/ICS-06-2021-0080
null
cs.CR cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
Privacy is currently in distress and in need of rescue, much like princesses in the all-familiar fairytales. We employ storytelling and metaphors from fairytales to make reader-friendly and streamline our arguments about how a complex concept of Privacy Labeling (the 'knight in shining armor') can be a solution to the current state of Privacy (the 'princess in distress'). We give a precise definition of Privacy Labeling (PL), painting a panoptic portrait from seven different perspectives (the 'seven helpers'): Business, Legal, Regulatory, Usability and Human Factors, Educative, Technological, and Multidisciplinary. We describe a common vision, proposing several important 'traits of character' of PL as well as identifying 'undeveloped potentialities', i.e., open problems on which the community can focus. More specifically, this position paper identifies the stakeholders of the PL and their needs with regard to privacy, describing how PL should be and look like in order to address these needs. Throughout the paper, we highlight goals, characteristics, open problems, and starting points for creating, what we consider to be, the ideal PL. In the end we present three approaches to establish and manage PL, through: self-evaluations, certifications, or community endeavors. Based on these, we sketch a roadmap for future developments.
[ { "version": "v1", "created": "Thu, 3 Dec 2020 10:42:30 GMT" }, { "version": "v2", "created": "Tue, 9 Feb 2021 10:57:17 GMT" }, { "version": "v3", "created": "Sun, 9 May 2021 16:54:58 GMT" } ]
2022-06-01T00:00:00
[ [ "Johansen", "Johanna", "" ], [ "Pedersen", "Tore", "" ], [ "Fischer-Hübner", "Simone", "" ], [ "Johansen", "Christian", "" ], [ "Schneider", "Gerardo", "" ], [ "Roosendaal", "Arnold", "" ], [ "Zwingelberg", "Harald", "" ], [ "Sivesind", "Anders Jakob", "" ], [ "Noll", "Josef", "" ] ]
new_dataset
0.987549
2101.09563
Joseph Hejderup
Joseph Hejderup, Moritz Beller, Konstantinos Triantafyllou, Georgios Gousios
Pr\"azi: From Package-based to Call-based Dependency Networks
42 pages, 14 figures, journal
null
10.1007/s10664-021-10071-9
null
cs.SE
http://creativecommons.org/licenses/by-nc-sa/4.0/
Modern programming languages such as Java, JavaScript, and Rust encourage software reuse by hosting diverse and fast-growing repositories of highly interdependent packages (i.e., reusable libraries) for their users. The standard way to study the interdependence between software packages is to infer a package dependency network by parsing manifest data. Such networks help answer questions such as "How many packages have dependencies to packages with known security issues?" or "What are the most used packages?". However, an overlooked aspect in existing studies is that manifest-inferred relationships do not necessarily examine the actual usage of these dependencies in source code. To better model dependencies between packages, we developed Pr\"azi, an approach combining manifests and call graphs of packages. Pr\"azi constructs a dependency network at the more fine-grained function-level, instead of at the manifest level. This paper discusses a prototypical Pr\"azi implementation for the popular system programming language Rust. We use Pr\"azi to characterize Rust's package repository, Cratesio, at the function level and perform a comparative study with metadata-based networks. Our results show that metadata-based networks generalize how packages use their dependencies. Using Pr\"azi, we find packages call only 40% of their resolved dependencies, and that manual analysis of 34 cases reveals that not all packages use a dependency the same way. We argue that researchers and practitioners interested in understanding how developers or programs use dependencies should account for its context -- not the sum of all resolved dependencies.
[ { "version": "v1", "created": "Sat, 23 Jan 2021 19:10:55 GMT" }, { "version": "v2", "created": "Wed, 27 Jan 2021 13:31:55 GMT" }, { "version": "v3", "created": "Thu, 28 Jan 2021 09:07:31 GMT" }, { "version": "v4", "created": "Wed, 30 Jun 2021 08:58:41 GMT" }, { "version": "v5", "created": "Wed, 20 Oct 2021 11:22:04 GMT" } ]
2022-06-01T00:00:00
[ [ "Hejderup", "Joseph", "" ], [ "Beller", "Moritz", "" ], [ "Triantafyllou", "Konstantinos", "" ], [ "Gousios", "Georgios", "" ] ]
new_dataset
0.997278
2110.05687
Qichao Ying
Qichao Ying, Xiaoxiao Hu, Xiangyu Zhang, Zhenxing Qian and Xinpeng Zhang
RWN: Robust Watermarking Network for Image Cropping Localization
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Image cropping can be maliciously used to manipulate the layout of an image and alter the underlying meaning. Previous image crop detection schemes only predict whether an image has been cropped, ignoring which part of the image is cropped. This paper presents a novel robust watermarking network (RWN) for image crop localization. We train an anti-crop processor (ACP) that embeds a watermark into a target image. The visually indistinguishable protected image is then posted on the social network instead of the original image. At the recipient's side, ACP extracts the watermark from the attacked image, and we conduct feature matching on the original and extracted watermark to locate the position of the crop in the original image plane. We further extend our scheme to detect tampering attacks on the attacked image. Besides, we explore a simple yet efficient method (JPEG-Mixup) to improve the generalization of JPEG robustness. According to our comprehensive experiments, RWN is the first to provide high-accuracy and robust image crop localization. Besides, the accuracy of tamper detection is comparable with many state-of-the-art passive-based methods.
[ { "version": "v1", "created": "Tue, 12 Oct 2021 02:19:42 GMT" }, { "version": "v2", "created": "Tue, 31 May 2022 14:57:39 GMT" } ]
2022-06-01T00:00:00
[ [ "Ying", "Qichao", "" ], [ "Hu", "Xiaoxiao", "" ], [ "Zhang", "Xiangyu", "" ], [ "Qian", "Zhenxing", "" ], [ "Zhang", "Xinpeng", "" ] ]
new_dataset
0.954058
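The abstract above mentions locating the crop by matching the extracted watermark against the original in the image plane. As a generic, hedged illustration of that kind of localization step (not RWN's actual watermark extractor or matcher), the sketch below matches a cropped patch to a full image with OpenCV ORB features and estimates where the crop sits; the file paths are placeholders.

```python
# Illustrative crop localization by matching a cropped patch against the
# full image with ORB features; not the paper's watermark-based pipeline.
import cv2
import numpy as np

full = cv2.imread("original.png", cv2.IMREAD_GRAYSCALE)   # placeholder paths
crop = cv2.imread("cropped.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(crop, None)
kp2, des2 = orb.detectAndCompute(full, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]

src = np.float32([kp1[m.queryIdx].pt for m in matches])
dst = np.float32([kp2[m.trainIdx].pt for m in matches])

# Estimate where the crop sits inside the original image plane.
M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
h, w = crop.shape
corners = cv2.transform(np.float32([[[0, 0], [w, 0], [w, h], [0, h]]]), M)
print(corners)  # crop rectangle expressed in original-image coordinates
```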
2110.08733
Junjue Wang
Junjue Wang, Zhuo Zheng, Ailong Ma, Xiaoyan Lu and Yanfei Zhong
LoveDA: A Remote Sensing Land-Cover Dataset for Domain Adaptive Semantic Segmentation
Accepted by NeurIPS 2021 Datasets and Benchmarks Track
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Deep learning approaches have shown promising results in remote sensing high spatial resolution (HSR) land-cover mapping. However, urban and rural scenes can show completely different geographical landscapes, and the inadequate generalizability of these algorithms hinders city-level or national-level mapping. Most of the existing HSR land-cover datasets mainly promote the research of learning semantic representation, thereby ignoring the model transferability. In this paper, we introduce the Land-cOVEr Domain Adaptive semantic segmentation (LoveDA) dataset to advance semantic and transferable learning. The LoveDA dataset contains 5987 HSR images with 166768 annotated objects from three different cities. Compared to the existing datasets, the LoveDA dataset encompasses two domains (urban and rural), which brings considerable challenges due to the: 1) multi-scale objects; 2) complex background samples; and 3) inconsistent class distributions. The LoveDA dataset is suitable for both land-cover semantic segmentation and unsupervised domain adaptation (UDA) tasks. Accordingly, we benchmarked the LoveDA dataset on eleven semantic segmentation methods and eight UDA methods. Some exploratory studies including multi-scale architectures and strategies, additional background supervision, and pseudo-label analysis were also carried out to address these challenges. The code and data are available at https://github.com/Junjue-Wang/LoveDA.
[ { "version": "v1", "created": "Sun, 17 Oct 2021 06:12:48 GMT" }, { "version": "v2", "created": "Tue, 19 Oct 2021 05:35:35 GMT" }, { "version": "v3", "created": "Thu, 21 Oct 2021 01:26:31 GMT" }, { "version": "v4", "created": "Sun, 24 Oct 2021 10:58:21 GMT" }, { "version": "v5", "created": "Wed, 29 Dec 2021 04:02:41 GMT" }, { "version": "v6", "created": "Tue, 31 May 2022 11:03:05 GMT" } ]
2022-06-01T00:00:00
[ [ "Wang", "Junjue", "" ], [ "Zheng", "Zhuo", "" ], [ "Ma", "Ailong", "" ], [ "Lu", "Xiaoyan", "" ], [ "Zhong", "Yanfei", "" ] ]
new_dataset
0.998961
2201.04288
Shen Yan
Shen Yan, Xuehan Xiong, Anurag Arnab, Zhichao Lu, Mi Zhang, Chen Sun, Cordelia Schmid
Multiview Transformers for Video Recognition
CVPR 2022; arXiv v4: update results on Epic-Kitchens-100
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Video understanding requires reasoning at multiple spatiotemporal resolutions -- from short fine-grained motions to events taking place over longer durations. Although transformer architectures have recently advanced the state-of-the-art, they have not explicitly modelled different spatiotemporal resolutions. To this end, we present Multiview Transformers for Video Recognition (MTV). Our model consists of separate encoders to represent different views of the input video with lateral connections to fuse information across views. We present thorough ablation studies of our model and show that MTV consistently performs better than single-view counterparts in terms of accuracy and computational cost across a range of model sizes. Furthermore, we achieve state-of-the-art results on six standard datasets, and improve even further with large-scale pretraining. Code and checkpoints are available at: https://github.com/google-research/scenic/tree/main/scenic/projects/mtv.
[ { "version": "v1", "created": "Wed, 12 Jan 2022 03:33:57 GMT" }, { "version": "v2", "created": "Thu, 20 Jan 2022 05:38:20 GMT" }, { "version": "v3", "created": "Sat, 23 Apr 2022 19:02:09 GMT" }, { "version": "v4", "created": "Tue, 31 May 2022 06:19:59 GMT" } ]
2022-06-01T00:00:00
[ [ "Yan", "Shen", "" ], [ "Xiong", "Xuehan", "" ], [ "Arnab", "Anurag", "" ], [ "Lu", "Zhichao", "" ], [ "Zhang", "Mi", "" ], [ "Sun", "Chen", "" ], [ "Schmid", "Cordelia", "" ] ]
new_dataset
0.977867
2201.11944
Yi Chen
Bruno Hexsel, Heethesh Vhavle and Yi Chen
DICP: Doppler Iterative Closest Point Algorithm
Accepted at Robotics: Science and Systems (RSS) 2022
null
null
null
cs.RO cs.CV
http://creativecommons.org/licenses/by/4.0/
In this paper, we present a novel algorithm for point cloud registration for range sensors capable of measuring per-return instantaneous radial velocity: Doppler ICP. Existing variants of ICP that solely rely on geometry or other features generally fail to estimate the motion of the sensor correctly in scenarios that have non-distinctive features and/or repetitive geometric structures such as hallways, tunnels, highways, and bridges. We propose a new Doppler velocity objective function that exploits the compatibility of each point's Doppler measurement and the sensor's current motion estimate. We jointly optimize the Doppler velocity objective function and the geometric objective function which sufficiently constrains the point cloud alignment problem even in feature-denied environments. Furthermore, the correspondence matches used for the alignment are improved by pruning away the points from dynamic targets which generally degrade the ICP solution. We evaluate our method on data collected from real sensors and from simulation. Our results show that with the added Doppler velocity residual terms, our method achieves a significant improvement in registration accuracy along with faster convergence, on average, when compared to classical point-to-plane ICP that solely relies on geometric residuals.
[ { "version": "v1", "created": "Fri, 28 Jan 2022 05:51:07 GMT" }, { "version": "v2", "created": "Tue, 31 May 2022 04:07:47 GMT" } ]
2022-06-01T00:00:00
[ [ "Hexsel", "Bruno", "" ], [ "Vhavle", "Heethesh", "" ], [ "Chen", "Yi", "" ] ]
new_dataset
0.999378
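To make the combined objective described above concrete, here is a small numerical sketch that stacks point-to-plane geometric residuals with per-point Doppler-velocity residuals. The weighting factor, the sign convention for radial velocity, and the synthetic data are assumptions chosen for illustration, not the paper's exact formulation.

```python
# Illustrative joint objective: point-to-plane + Doppler velocity residuals.
# Weights, sign convention, and the synthetic setup are assumptions.
import numpy as np

def joint_residuals(src, dst, normals, doppler, v_est, lambda_d=0.5):
    """src, dst: (N,3) matched points; normals: (N,3) target normals;
    doppler: (N,) measured radial velocities; v_est: (3,) sensor velocity."""
    # Geometric point-to-plane residuals.
    r_geo = np.einsum("ij,ij->i", src - dst, normals)
    # Predicted radial velocity: projection of the sensor velocity onto the
    # unit direction from the sensor (assumed at the origin) to each point.
    dirs = src / np.linalg.norm(src, axis=1, keepdims=True)
    r_dop = doppler - (-dirs @ v_est)   # one possible sign convention
    return np.concatenate([r_geo, lambda_d * r_dop])

# Tiny synthetic example.
rng = np.random.default_rng(0)
src = rng.normal(size=(5, 3)) + 10.0
dst = src + 0.01
normals = np.tile([0.0, 0.0, 1.0], (5, 1))
doppler = rng.normal(scale=0.1, size=5)
print(joint_residuals(src, dst, normals, doppler, v_est=np.zeros(3)))
```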
2202.02179
Guanlan Zhang
Guanlan Zhang, Yipai Du, Hongyu Yu and Michael Yu Wang
DelTact: A Vision-based Tactile Sensor Using Dense Color Pattern
8 pages contents, 1 page references, 8 figures, 2 tables
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Tactile sensing is an essential perception for robots to complete dexterous tasks. As a promising tactile sensing technique, vision-based tactile sensors have been developed to improve robot performance in manipulation and grasping. Here we propose a new design of a vision-based tactile sensor, DelTact. The sensor uses a modular hardware architecture for compactness whilst maintaining a contact measurement of full resolution (798*586) and large area (675mm2). Moreover, it adopts an improved dense random color pattern based on the previous version to achieve high accuracy of contact deformation tracking. In particular, we optimize the color pattern generation process and select the appropriate pattern for coordinating with a dense optical flow algorithm under a real-world experimental sensory setting. The optical flow obtained from the raw image is processed to determine shape and force distribution on the contact surface. We also demonstrate the method to extract contact shape and force distribution from the raw images. Experimental results demonstrate that the sensor is capable of providing tactile measurements with low error and high frequency (40Hz).
[ { "version": "v1", "created": "Fri, 4 Feb 2022 15:12:52 GMT" }, { "version": "v2", "created": "Tue, 15 Feb 2022 07:48:30 GMT" }, { "version": "v3", "created": "Tue, 31 May 2022 07:03:57 GMT" } ]
2022-06-01T00:00:00
[ [ "Zhang", "Guanlan", "" ], [ "Du", "Yipai", "" ], [ "Yu", "Hongyu", "" ], [ "Wang", "Michael Yu", "" ] ]
new_dataset
0.998931
2204.07649
Miriam Cha
Miriam Cha, Kuan Wei Huang, Morgan Schmidt, Gregory Angelides, Mark Hamilton, Sam Goldberg, Armando Cabrera, Phillip Isola, Taylor Perron, Bill Freeman, Yen-Chen Lin, Brandon Swenson, Jean Piou
MultiEarth 2022 -- Multimodal Learning for Earth and Environment Workshop and Challenge
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
The Multimodal Learning for Earth and Environment Challenge (MultiEarth 2022) will be the first competition aimed at the monitoring and analysis of deforestation in the Amazon rainforest at any time and in any weather conditions. The goal of the Challenge is to provide a common benchmark for multimodal information processing and to bring together the earth and environmental science communities as well as multimodal representation learning communities to compare the relative merits of the various multimodal learning methods to deforestation estimation under well-defined and strictly comparable conditions. MultiEarth 2022 will have three sub-challenges: 1) matrix completion, 2) deforestation estimation, and 3) image-to-image translation. This paper presents the challenge guidelines, datasets, and evaluation metrics for the three sub-challenges. Our challenge website is available at https://sites.google.com/view/rainforest-challenge.
[ { "version": "v1", "created": "Fri, 15 Apr 2022 20:59:02 GMT" }, { "version": "v2", "created": "Wed, 27 Apr 2022 02:49:45 GMT" }, { "version": "v3", "created": "Tue, 31 May 2022 13:34:06 GMT" } ]
2022-06-01T00:00:00
[ [ "Cha", "Miriam", "" ], [ "Huang", "Kuan Wei", "" ], [ "Schmidt", "Morgan", "" ], [ "Angelides", "Gregory", "" ], [ "Hamilton", "Mark", "" ], [ "Goldberg", "Sam", "" ], [ "Cabrera", "Armando", "" ], [ "Isola", "Phillip", "" ], [ "Perron", "Taylor", "" ], [ "Freeman", "Bill", "" ], [ "Lin", "Yen-Chen", "" ], [ "Swenson", "Brandon", "" ], [ "Piou", "Jean", "" ] ]
new_dataset
0.988822
2205.06921
Ruofei Chen
Ruo Fei Chen, Stephanie Balzer, and Bernardo Toninho
Ferrite: A Judgmental Embedding of Session Types in Rust
Accidental duplication of arXiv:2009.13619
null
null
null
cs.PL
http://creativecommons.org/licenses/by-sa/4.0/
\emph{Session types} have proved viable in expressing and verifying the protocols of message-passing systems. While message passing is a dominant concurrency paradigm in practice, real world software is written without session types. A limitation of existing session type libraries in mainstream languages is their restriction to linear session types, precluding application scenarios that demand sharing and thus aliasing of channel references. This paper introduces Ferrite, a shallow embedding of session types in Rust that supports both \emph{linear} and \emph{shared} sessions. The formal foundation of Ferrite constitutes the shared session type calculus $\sills$, which Ferrite encodes via a novel \emph{judgmental embedding} technique. The fulcrum of the embedding is the notion of a typing judgment that allows reasoning about shared and linear resources to type a session. Typing rules are then encoded as functions over judgments, with a valid typing derivation manifesting as a well-typed Rust program. This Rust program generated by Ferrite serves as a \emph{certificate}, ensuring that the application will proceed according to the protocol defined by the session type. The paper details the features and implementation of Ferrite and includes a case study on implementing Servo's canvas component in Ferrite.
[ { "version": "v1", "created": "Fri, 13 May 2022 23:05:32 GMT" }, { "version": "v2", "created": "Tue, 31 May 2022 07:49:37 GMT" } ]
2022-06-01T00:00:00
[ [ "Chen", "Ruo Fei", "" ], [ "Balzer", "Stephanie", "" ], [ "Toninho", "Bernardo", "" ] ]
new_dataset
0.99838
2205.07025
Gil Ben-Shachar
Gill Barequet and Gil Ben-Shachar
Minimal-Perimeter Lattice Animals and the Constant-Isomer Conjecture
null
null
null
null
cs.CG math.CO
http://creativecommons.org/licenses/by/4.0/
We consider minimal-perimeter lattice animals, providing a set of conditions which are sufficient for a lattice to have the property that inflating all minimal-perimeter animals of a certain size yields (without repetitions) all minimal-perimeter animals of a new, larger size. We demonstrate this result on the two-dimensional square and hexagonal lattices. In addition, we characterize the sizes of minimal-perimeter animals on these lattices that are not created by inflating members of another set of minimal-perimeter animals.
[ { "version": "v1", "created": "Sat, 14 May 2022 10:01:14 GMT" } ]
2022-06-01T00:00:00
[ [ "Barequet", "Gill", "" ], [ "Ben-Shachar", "Gil", "" ] ]
new_dataset
0.999524
2205.09388
Raffaele De Rose Dr.
Raffaele De Rose, Tommaso Zanotti, Francesco Maria Puglisi, Felice Crupi, Paolo Pavan, Marco Lanuzza
Smart Material Implication Using Spin-Transfer Torque Magnetic Tunnel Junctions for Logic-in-Memory Computing
null
Solid-State Electronics 2022
10.1016/j.sse.2022.108390
null
cs.ET physics.app-ph
http://creativecommons.org/licenses/by-nc-nd/4.0/
Smart material implication (SIMPLY) logic has been recently proposed for the design of energy-efficient Logic-in-Memory (LIM) architectures based on non-volatile resistive memory devices. The SIMPLY logic is enabled by adding a comparator to the conventional IMPLY scheme. This allows performing a preliminary READ operation and hence the SET operation only in the case it is actually required. This work explores the SIMPLY logic scheme using nanoscale spin-transfer torque magnetic tunnel junction (STT-MTJ) devices. The performance of the STT-MTJ based SIMPLY architecture is analyzed by varying the load resistor and applied voltages to implement both READ and SET operations, while also investigating the effect of temperature on circuit operation. Obtained results show an existing tradeoff between error rate and energy consumption, which can be effectively managed by properly setting the values of load resistor and applied voltages. In addition, our analysis proves that tracking the temperature dependence of the MTJ properties through a proportional to absolute temperature (PTAT) reference voltage at the input of the comparator is beneficial to mitigate the reliability degradation under temperature variations.
[ { "version": "v1", "created": "Thu, 19 May 2022 08:34:19 GMT" } ]
2022-06-01T00:00:00
[ [ "De Rose", "Raffaele", "" ], [ "Zanotti", "Tommaso", "" ], [ "Puglisi", "Francesco Maria", "" ], [ "Crupi", "Felice", "" ], [ "Pavan", "Paolo", "" ], [ "Lanuzza", "Marco", "" ] ]
new_dataset
0.991409
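As a behavioral illustration of the smart-implication idea summarized above, the toy model below computes q <- (p IMPLY q) and applies the SET only when a preliminary READ shows the stored bit must actually flip (the p=0, q=0 case). Device-level details such as MTJ resistances, the load resistor, and energy are deliberately outside this sketch.

```python
# Toy behavioral model of smart material implication (SIMPLY):
# q <- (p IMPLY q), with the SET pulse applied only when a READ shows q must flip.
# STT-MTJ device physics is abstracted away entirely.

def imply(p: bool, q: bool) -> bool:
    """Material implication truth function: p -> q."""
    return (not p) or q

def simply_step(p: bool, q: bool):
    """Return (new_q, set_performed); SET is needed only for p=0, q=0."""
    needs_set = (not p) and (not q)   # the single input combination where q flips
    new_q = True if needs_set else q
    return new_q, needs_set

for p in (False, True):
    for q in (False, True):
        new_q, did_set = simply_step(p, q)
        assert new_q == imply(p, q)
        print(f"p={int(p)} q={int(q)} -> q'={int(new_q)} SET={'yes' if did_set else 'no'}")
```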
2205.14191
Lakmal Meegahapola
Wageesha Bangamuarachchi, Anju Chamantha, Lakmal Meegahapola, Salvador Ruiz-Correa, Indika Perera, Daniel Gatica-Perez
Sensing Eating Events in Context: A Smartphone-Only Approach
Accepted for publication at IEEE Access
null
10.1109/ACCESS.2022.3179702
null
cs.HC cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While the task of automatically detecting eating events has been examined in prior work using various wearable devices, the use of smartphones as standalone devices to infer eating events remains an open issue. This paper proposes a framework that infers eating vs. non-eating events from passive smartphone sensing and evaluates it on a dataset of 58 college students. First, we show that time of the day and features from modalities such as screen usage, accelerometer, app usage, and location are indicative of eating and non-eating events. Then, we show that eating events can be inferred with an AUROC (area under the receiver operating characteristics curve) of 0.65 using subject-independent machine learning models, which can be further improved up to 0.81 for subject-dependent and 0.81 for hybrid models using personalization techniques. Moreover, we show that users have different behavioral and contextual routines around eating episodes requiring specific feature groups to train fully personalized models. These findings are of potential value for future mobile food diary apps that are context-aware by enabling scalable sensing-based eating studies using only smartphones; detecting under-reported eating events, thus increasing data quality in self report-based studies; providing functionality to track food consumption and generate reminders for on-time collection of food diaries; and supporting mobile interventions towards healthy eating practices.
[ { "version": "v1", "created": "Fri, 27 May 2022 18:42:23 GMT" }, { "version": "v2", "created": "Tue, 31 May 2022 09:49:33 GMT" } ]
2022-06-01T00:00:00
[ [ "Bangamuarachchi", "Wageesha", "" ], [ "Chamantha", "Anju", "" ], [ "Meegahapola", "Lakmal", "" ], [ "Ruiz-Correa", "Salvador", "" ], [ "Perera", "Indika", "" ], [ "Gatica-Perez", "Daniel", "" ] ]
new_dataset
0.966383
2205.14247
Manuel Olgu\'in Mu\~noz
Manuel Olgu\'in Mu\~noz (1), Seyed Samie Mostafavi (1), Vishnu N. Moothedath (1), James Gross (1) ((1) KTH Royal Institute of Technology)
Ainur: A Framework for Repeatable End-to-End Wireless Edge Computing Testbed Research
6 pages, 6 figures, demo session paper
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Experimental research on wireless networking in combination with edge and cloud computing has been the subject of explosive interest in the last decade. This development has been driven by the increasing complexity of modern wireless technologies and the extensive softwarization of these through projects such as the Open Radio Access Network (O-RAN). In this context, a number of small- to mid-scale testbeds have emerged, employing a variety of technologies to target a wide array of use-cases and scenarios in the context of novel mobile communication technologies such as 5G and beyond-5G. Little work, however, has yet been devoted to developing a standard framework for wireless testbed automation which is hardware-agnostic and compatible with edge- and cloud-native technologies. Such a solution would simplify the development of new testbeds by completely or partially removing the requirement for custom management and orchestration software. In this paper, we present the first such mostly hardware-agnostic wireless testbed automation framework, Ainur. It is designed to configure, manage, orchestrate, and deploy workloads from an end-to-end perspective. Ainur is built on top of cloud-native technologies such as Docker, and is provided as FOSS to the community through the KTH-EXPECA/Ainur repository on GitHub. We demonstrate the utility of the platform with a series of scenarios, showcasing in particular its flexibility with respect to physical link definition, computation placement, and automation of arbitrarily complex experimental scenarios.
[ { "version": "v1", "created": "Fri, 27 May 2022 21:48:25 GMT" }, { "version": "v2", "created": "Tue, 31 May 2022 05:39:07 GMT" } ]
2022-06-01T00:00:00
[ [ "Muñoz", "Manuel Olguín", "", "KTH Royal Institute of Technology" ], [ "Mostafavi", "Seyed Samie", "", "KTH Royal Institute of Technology" ], [ "Moothedath", "Vishnu N.", "", "KTH Royal Institute of Technology" ], [ "Gross", "James", "", "KTH Royal Institute of Technology" ] ]
new_dataset
0.998848
2205.14460
Chaofeng Wang
Chaofeng Wang, Sarah Elizabeth Antos, Jessica Grayson Gosling Goldsmith, Luis Miguel Triveno
Visual Perception of Building and Household Vulnerability from Streets
null
null
null
null
cs.LG cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
In developing countries, building codes often are outdated or not enforced. As a result, a large portion of the housing stock is substandard and vulnerable to natural hazards and climate related events. Assessing housing quality is key to inform public policies and private investments. Standard assessment methods are typically carried out only on a sample / pilot basis due to their high costs or, when complete, tend to be obsolete due to the lack of compliance with recommended updating standards, or are not accessible to most users with the level of detail needed to take key policy or business decisions. Thus, we propose an evaluation framework that is cost-efficient for first capture and future updates, and is reliable at the block level. The framework complements existing work using street view imagery combined with deep learning to automatically extract building information to assist the identification of housing characteristics. We then check its potential for scalability and higher level reliability. For that purpose, we create an index, which synthesises the highest possible level of granularity of data at the housing unit and at the household level at the block level, and assess whether the predictions made by our model could be used to approximate vulnerability conditions with a lower budget and in selected areas. Our results indicated that the predictions from the images are clearly correlated with the index.
[ { "version": "v1", "created": "Sat, 28 May 2022 15:35:47 GMT" } ]
2022-06-01T00:00:00
[ [ "Wang", "Chaofeng", "" ], [ "Antos", "Sarah Elizabeth", "" ], [ "Goldsmith", "Jessica Grayson Gosling", "" ], [ "Triveno", "Luis Miguel", "" ] ]
new_dataset
0.99493
2205.14728
Raviraj Joshi
Raviraj Joshi
L3Cube-MahaNLP: Marathi Natural Language Processing Datasets, Models, and Library
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite being the third most popular language in India, the Marathi language lacks useful NLP resources. Moreover, popular NLP libraries do not have support for the Marathi language. With L3Cube-MahaNLP, we aim to build resources and a library for Marathi natural language processing. We present datasets and transformer models for supervised tasks like sentiment analysis, named entity recognition, and hate speech detection. We have also published a monolingual Marathi corpus for unsupervised language modeling tasks. Overall we present MahaCorpus, MahaSent, MahaNER, and MahaHate datasets and their corresponding MahaBERT models fine-tuned on these datasets. We aim to move ahead of benchmark datasets and prepare useful resources for Marathi. The resources are available at https://github.com/l3cube-pune/MarathiNLP.
[ { "version": "v1", "created": "Sun, 29 May 2022 17:51:00 GMT" }, { "version": "v2", "created": "Tue, 31 May 2022 15:15:51 GMT" } ]
2022-06-01T00:00:00
[ [ "Joshi", "Raviraj", "" ] ]
new_dataset
0.999765
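As a hedged usage sketch for the resources described above: loading a Marathi BERT checkpoint from the Hugging Face Hub with the `transformers` library. The model identifier `l3cube-pune/marathi-bert` is an assumption based on the project's naming; the exact checkpoint names should be taken from the linked repository.

```python
# Sketch: masked-language-model inference with a MahaBERT-style checkpoint.
# The model id is assumed; consult the l3cube-pune repository for exact names.
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="l3cube-pune/marathi-bert")
for candidate in fill_mask("मी आज [MASK] जात आहे."):
    print(candidate["token_str"], round(candidate["score"], 3))
```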
2205.14942
Hao Wu
Siyuan Liang, Hao Wu
Edge YOLO: Real-Time Intelligent Object Detection System Based on Edge-Cloud Cooperation in Autonomous Vehicles
null
null
10.1109/TITS.2022.3158253
null
cs.CV cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
Driven by the ever-increasing requirements of autonomous vehicles, such as traffic monitoring and driving assistance, deep learning-based object detection (DL-OD) has been increasingly attractive in intelligent transportation systems. However, it is difficult for the existing DL-OD schemes to realize responsible, cost-saving, and energy-efficient autonomous vehicle systems due to their inherent defects of low timeliness and high energy consumption. In this paper, we propose an object detection (OD) system based on edge-cloud cooperation and reconstructive convolutional neural networks, which is called Edge YOLO. This system can effectively avoid the excessive dependence on computing power and uneven distribution of cloud computing resources. Specifically, it is a lightweight OD framework realized by combining a pruning feature extraction network and a compression feature fusion network to enhance the efficiency of multi-scale prediction to the largest extent. In addition, we developed an autonomous driving platform equipped with NVIDIA Jetson for system-level verification. We experimentally demonstrate the reliability and efficiency of Edge YOLO on COCO2017 and KITTI data sets, respectively. According to COCO2017 standard datasets with a speed of 26.6 frames per second (FPS), the results show that the number of parameters in the entire network is only 25.67 MB, while the accuracy (mAP) is up to 47.3%.
[ { "version": "v1", "created": "Mon, 30 May 2022 09:16:35 GMT" } ]
2022-06-01T00:00:00
[ [ "Liang", "Siyuan", "" ], [ "Wu", "Hao", "" ] ]
new_dataset
0.999455
2205.15053
Thomas Germer
Thomas Germer, Tobias Uelwer and Stefan Harmeling
Deblurring Photographs of Characters Using Deep Neural Networks
15 pages, 13 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present our approach for the Helsinki Deblur Challenge (HDC2021). The task of this challenge is to deblur images of characters without knowing the point spread function (PSF). The organizers provided a dataset of pairs of sharp and blurred images. Our method consists of three steps: First, we estimate a warping transformation of the images to align the sharp images with the blurred ones. Next, we estimate the PSF using a quasi-Newton method. The estimated PSF allows to generate additional pairs of sharp and blurred images. Finally, we train a deep convolutional neural network to reconstruct the sharp images from the blurred images. Our method is able to successfully reconstruct images from the first 10 stages of the HDC 2021 data. Our code is available at https://github.com/hhu-machine-learning/hdc2021-psfnn.
[ { "version": "v1", "created": "Mon, 30 May 2022 12:32:26 GMT" }, { "version": "v2", "created": "Tue, 31 May 2022 07:45:45 GMT" } ]
2022-06-01T00:00:00
[ [ "Germer", "Thomas", "" ], [ "Uelwer", "Tobias", "" ], [ "Harmeling", "Stefan", "" ] ]
new_dataset
0.986426
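The abstract above describes estimating the PSF with a quasi-Newton method from aligned sharp/blurred pairs. The sketch below shows that step in its simplest least-squares form using SciPy's L-BFGS-B optimizer on synthetic data; the 7x7 kernel size, the normalization, and the data are illustrative assumptions, not the authors' implementation.

```python
# Illustrative PSF estimation: fit a small blur kernel by minimizing the
# reconstruction error with a quasi-Newton optimizer (L-BFGS-B).
# Kernel size and synthetic data are assumptions for this sketch.
import numpy as np
from scipy.optimize import minimize
from scipy.signal import convolve2d

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
h = np.hanning(7)
true_psf = np.outer(h, h)
true_psf /= true_psf.sum()
blurred = convolve2d(sharp, true_psf, mode="same")

k = 7
def loss(flat_psf):
    psf = np.abs(flat_psf).reshape(k, k)
    psf = psf / psf.sum()                      # non-negative, normalized kernel
    recon = convolve2d(sharp, psf, mode="same")
    return np.mean((recon - blurred) ** 2)

result = minimize(loss, np.full(k * k, 1.0 / (k * k)), method="L-BFGS-B")
print("reconstruction MSE after fitting:", loss(result.x))
```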
2205.15359
Yoshimichi Nakatsuka
Yoshimichi Nakatsuka, Ercan Ozturk, Alex Shamis, Andrew Paverd, Peter Pietzuch
CTR: Checkpoint, Transfer, and Restore for Secure Enclaves
null
null
null
null
cs.CR cs.SY eess.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Hardware-based Trusted Execution Environments (TEEs) are becoming increasingly prevalent in cloud computing, forming the basis for confidential computing. However, the security goals of TEEs sometimes conflict with existing cloud functionality, such as VM or process migration, because TEE memory cannot be read by the hypervisor, OS, or other software on the platform. Whilst some newer TEE architectures support migration of entire protected VMs, there is currently no practical solution for migrating individual processes containing in-process TEEs. The inability to migrate such processes leads to operational inefficiencies or even data loss if the host platform must be urgently restarted. We present CTR, a software-only design to retrofit migration functionality into existing TEE architectures, whilst maintaining their expected security guarantees. Our design allows TEEs to be interrupted and migrated at arbitrary points in their execution, thus maintaining compatibility with existing VM and process migration techniques. By cooperatively involving the TEE in the migration process, our design also allows application developers to specify stateful migration-related policies, such as limiting the number of times a particular TEE may be migrated. Our prototype implementation for Intel SGX demonstrates that migration latency increases linearly with the size of the TEE memory and is dominated by TEE system operations.
[ { "version": "v1", "created": "Mon, 30 May 2022 18:08:09 GMT" } ]
2022-06-01T00:00:00
[ [ "Nakatsuka", "Yoshimichi", "" ], [ "Ozturk", "Ercan", "" ], [ "Shamis", "Alex", "" ], [ "Paverd", "Andrew", "" ], [ "Pietzuch", "Peter", "" ] ]
new_dataset
0.998732
2205.15452
Aitor Alvarez-Gila
Aitor Alvarez-Gila, Joost van de Weijer, Yaxing Wang, Estibaliz Garrote
MVMO: A Multi-Object Dataset for Wide Baseline Multi-View Semantic Segmentation
5 pages
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present MVMO (Multi-View, Multi-Object dataset): a synthetic dataset of 116,000 scenes containing randomly placed objects of 10 distinct classes and captured from 25 camera locations in the upper hemisphere. MVMO comprises photorealistic, path-traced image renders, together with semantic segmentation ground truth for every view. Unlike existing multi-view datasets, MVMO features wide baselines between cameras and high density of objects, which lead to large disparities, heavy occlusions and view-dependent object appearance. Single view semantic segmentation is hindered by self and inter-object occlusions that could benefit from additional viewpoints. Therefore, we expect that MVMO will propel research in multi-view semantic segmentation and cross-view semantic transfer. We also provide baselines that show that new research is needed in such fields to exploit the complementary information of multi-view setups.
[ { "version": "v1", "created": "Mon, 30 May 2022 22:37:43 GMT" } ]
2022-06-01T00:00:00
[ [ "Alvarez-Gila", "Aitor", "" ], [ "van de Weijer", "Joost", "" ], [ "Wang", "Yaxing", "" ], [ "Garrote", "Estibaliz", "" ] ]
new_dataset
0.999837
2205.15473
Aaron Ray
Aaron Ray, Alyssa Pierson, Daniela Rus
Free-Space Ellipsoid Graphs for Multi-Agent Target Monitoring
IEEE Intl. Conf. on Robotics and Automation (ICRA) 2022
null
null
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We apply a novel framework for decomposing and reasoning about free space in an environment to a multi-agent persistent monitoring problem. Our decomposition method represents free space as a collection of ellipsoids associated with a weighted connectivity graph. The same ellipsoids used for reasoning about connectivity and distance during high level planning can be used as state constraints in a Model Predictive Control algorithm to enforce collision-free motion. This structure allows for streamlined implementation in distributed multi-agent tasks in 2D and 3D environments. We illustrate its effectiveness for a team of tracking agents tasked with monitoring a group of target agents. Our algorithm uses the ellipsoid decomposition as a primitive for the coordination, path planning, and control of the tracking agents. Simulations with four tracking agents monitoring fifteen dynamic targets in obstacle-rich environments demonstrate the performance of our algorithm.
[ { "version": "v1", "created": "Tue, 31 May 2022 00:04:51 GMT" } ]
2022-06-01T00:00:00
[ [ "Ray", "Aaron", "" ], [ "Pierson", "Alyssa", "" ], [ "Rus", "Daniela", "" ] ]
new_dataset
0.983272
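To illustrate the basic primitive behind the free-space ellipsoid representation above: a point is collision-free with respect to an ellipsoid region if it passes the quadratic membership test below, which is also the form such a state constraint takes inside an MPC. The shape matrix and test points are arbitrary example values.

```python
# Sketch: ellipsoid membership test (x - c)^T A (x - c) <= 1, the primitive
# behind using free-space ellipsoids as collision-free state constraints.
import numpy as np

def inside_ellipsoid(x, center, A):
    """True if x lies in the ellipsoid {y : (y - c)^T A (y - c) <= 1}."""
    d = np.asarray(x, dtype=float) - np.asarray(center, dtype=float)
    return float(d @ A @ d) <= 1.0

# Axis-aligned ellipse with semi-axes 2 and 1: A = diag(1/2**2, 1/1**2).
A = np.diag([0.25, 1.0])
print(inside_ellipsoid([1.5, 0.5], [0.0, 0.0], A))   # True: 0.8125 <= 1
print(inside_ellipsoid([2.5, 0.0], [0.0, 0.0], A))   # False: 1.5625 > 1
```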
2205.15501
Yiming Zeng
Yiming Zeng, Jiarui Zhang, Ji Liu, Zhenhua Liu, Yuanyuan Yang
Multi-Entanglement Routing Design over Quantum Networks
null
IEEE International Conference on Computer Communications 2022
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantum networks are considered as a promising future platform for quantum information exchange and quantum applications, which have capabilities far beyond the traditional communication networks. Remote quantum entanglement is an essential component of a quantum network. How to efficiently design a multi-routing entanglement protocol is a fundamental yet challenging problem. In this paper, we study a quantum entanglement routing problem to simultaneously maximize the number of quantum-user pairs and their expected throughput. Our approach is to formulate the problem as two sequential integer programming steps. We propose efficient entanglement routing algorithms for the two integer programming steps and analyze their time complexity and performance bounds. Results of evaluation highlight that our approach outperforms existing solutions in both served quantum-user pairs numbers and the network expected throughput.
[ { "version": "v1", "created": "Tue, 31 May 2022 01:52:44 GMT" } ]
2022-06-01T00:00:00
[ [ "Zeng", "Yiming", "" ], [ "Zhang", "Jiarui", "" ], [ "Liu", "Ji", "" ], [ "Liu", "Zhenhua", "" ], [ "Yang", "Yuanyuan", "" ] ]
new_dataset
0.995917
2205.15509
Bingqian Lin
Bingqian Lin, Yi Zhu, Zicong Chen, Xiwen Liang, Jianzhuang Liu, Xiaodan Liang
ADAPT: Vision-Language Navigation with Modality-Aligned Action Prompts
Accepted to CVPR 2022
null
null
null
cs.CV cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-Language Navigation (VLN) is a challenging task that requires an embodied agent to perform action-level modality alignment, i.e., make instruction-asked actions sequentially in complex visual environments. Most existing VLN agents learn the instruction-path data directly and cannot sufficiently explore action-level alignment knowledge inside the multi-modal inputs. In this paper, we propose modAlity-aligneD Action PrompTs (ADAPT), which provides the VLN agent with action prompts to enable the explicit learning of action-level modality alignment to pursue successful navigation. Specifically, an action prompt is defined as a modality-aligned pair of an image sub-prompt and a text sub-prompt, where the former is a single-view observation and the latter is a phrase like ''walk past the chair''. When starting navigation, the instruction-related action prompt set is retrieved from a pre-built action prompt base and passed through a prompt encoder to obtain the prompt feature. Then the prompt feature is concatenated with the original instruction feature and fed to a multi-layer transformer for action prediction. To collect high-quality action prompts into the prompt base, we use the Contrastive Language-Image Pretraining (CLIP) model which has powerful cross-modality alignment ability. A modality alignment loss and a sequential consistency loss are further introduced to enhance the alignment of the action prompt and enforce the agent to focus on the related prompt sequentially. Experimental results on both R2R and RxR show the superiority of ADAPT over state-of-the-art methods.
[ { "version": "v1", "created": "Tue, 31 May 2022 02:41:31 GMT" } ]
2022-06-01T00:00:00
[ [ "Lin", "Bingqian", "" ], [ "Zhu", "Yi", "" ], [ "Chen", "Zicong", "" ], [ "Liang", "Xiwen", "" ], [ "Liu", "Jianzhuang", "" ], [ "Liang", "Xiaodan", "" ] ]
new_dataset
0.998794
2205.15535
Pei Liu
Pei Liu, Mattia Fazzini, John Grundy, and Li Li
Do Customized Android Frameworks Keep Pace with Android?
null
null
10.1145/3524842.3527963
MSR '22: Proceedings of the 19th International Conference on Mining Software Repositories
cs.SE
http://creativecommons.org/licenses/by/4.0/
To satisfy varying customer needs, device vendors and OS providers often rely on the open-source nature of the Android OS and offer customized versions of the Android OS. When a new version of the Android OS is released, device vendors and OS providers need to merge the changes from the Android OS into their customizations to account for its bug fixes, security patches, and new features. Because developers of customized OSs might have made changes to code locations that were also modified by the developers of the Android OS, the merge task can be characterized by conflicts, which can be time-consuming and error-prone to resolve. To provide more insight into this critical aspect of the Android ecosystem, we present an empirical study that investigates how eight open-source customizations of the Android OS merge the changes from the Android OS into their projects. The study analyzes how often the developers from the customized OSs merge changes from the Android OS, how often the developers experience textual merge conflicts, and the characteristics of these conflicts. Furthermore, to analyze the effect of the conflicts, the study also analyzes how the conflicts can affect a randomly selected sample of 1,000 apps. After analyzing 1,148 merge operations, we identified that developers perform these operations for 9.7\% of the released versions of the Android OS, developers will encounter at least one conflict in 41.3\% of the merge operations, 58.1\% of the conflicts required developers to change the customized OSs, and 64.4\% of the apps considered use at least one method affected by a conflict. In addition to detailing our results, the paper also discusses the implications of our findings and provides insights for researchers and practitioners working with Android and its customizations.
[ { "version": "v1", "created": "Tue, 31 May 2022 04:45:59 GMT" } ]
2022-06-01T00:00:00
[ [ "Liu", "Pei", "" ], [ "Fazzini", "Mattia", "" ], [ "Grundy", "John", "" ], [ "Li", "Li", "" ] ]
new_dataset
0.989309
2205.15572
Weikai Chen
Weikai Chen, Cheng Lin, Weiyang Li, Bo Yang
3PSDF: Three-Pole Signed Distance Function for Learning Surfaces with Arbitrary Topologies
Accepted to CVPR 2022
null
null
null
cs.CV cs.GR
http://creativecommons.org/licenses/by/4.0/
Recent advances in learning 3D shapes using neural implicit functions have achieved impressive results by breaking the previous barrier of resolution and diversity for varying topologies. However, most of such approaches are limited to closed surfaces as they require the space to be divided into inside and outside. More recent works based on unsigned distance function have been proposed to handle complex geometry containing both the open and closed surfaces. Nonetheless, as their direct outputs are point clouds, robustly obtaining high-quality meshing results from discrete points remains an open question. We present a novel learnable implicit representation, called the three-pole signed distance function (3PSDF), that can represent non-watertight 3D shapes with arbitrary topologies while supporting easy field-to-mesh conversion using the classic Marching Cubes algorithm. The key to our method is the introduction of a new sign, the NULL sign, in addition to the conventional in and out labels. The existence of the null sign could stop the formation of a closed isosurface derived from the bisector of the in/out regions. Further, we propose a dedicated learning framework to effectively learn 3PSDF without worrying about the vanishing gradient due to the null labels. Experimental results show that our approach outperforms the previous state-of-the-art methods in a wide range of benchmarks both quantitatively and qualitatively.
[ { "version": "v1", "created": "Tue, 31 May 2022 07:24:04 GMT" } ]
2022-06-01T00:00:00
[ [ "Chen", "Weikai", "" ], [ "Lin", "Cheng", "" ], [ "Li", "Weiyang", "" ], [ "Yang", "Bo", "" ] ]
new_dataset
0.998833
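A toy illustration of the three-pole labeling described above: samples carry one of three labels (inside, outside, null), and a surface crossing is accepted only between adjacent in/out samples, never across a null sample, so open surfaces do not get spuriously closed. The 1D grid and labels are made-up values, not output of the paper's network or meshing pipeline.

```python
# Toy illustration of three-pole signed distance labels on a 1D grid:
# -1 = inside, +1 = outside, 0 = NULL. A surface crossing is accepted only
# between adjacent in/out samples with no NULL involved, so no spurious
# closed surface forms at the boundary of open geometry.
import numpy as np

labels = np.array([+1, +1, -1, -1, 0, 0, +1])   # outside | inside | null | outside

crossings = []
for i in range(len(labels) - 1):
    a, b = labels[i], labels[i + 1]
    if 0 in (a, b):          # NULL stops iso-surface formation here
        continue
    if a != b:               # genuine inside/outside transition
        crossings.append((i, i + 1))

print(crossings)             # [(1, 2)] -- only one true surface crossing
```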
2205.15599
Alp \"Oktem
Alp \"Oktem, Rodolfo Zevallos, Yasmin Moslem, G\"une\c{s} \"Ozt\"urk, Karen \c{S}arhon
Preparing an Endangered Language for the Digital Age: The Case of Judeo-Spanish
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
We develop machine translation and speech synthesis systems to complement the efforts of revitalizing Judeo-Spanish, the exiled language of Sephardic Jews, which survived for centuries, but now faces the threat of extinction in the digital age. Building on resources created by the Sephardic community of Turkey and elsewhere, we create corpora and tools that would help preserve this language for future generations. For machine translation, we first develop a Spanish to Judeo-Spanish rule-based machine translation system, in order to generate large volumes of synthetic parallel data in the relevant language pairs: Turkish, English and Spanish. Then, we train baseline neural machine translation engines using this synthetic data and authentic parallel data created from translations by the Sephardic community. For text-to-speech synthesis, we present a 3.5 hour single speaker speech corpus for building a neural speech synthesis engine. Resources, model weights and online inference engines are shared publicly.
[ { "version": "v1", "created": "Tue, 31 May 2022 08:26:33 GMT" } ]
2022-06-01T00:00:00
[ [ "Öktem", "Alp", "" ], [ "Zevallos", "Rodolfo", "" ], [ "Moslem", "Yasmin", "" ], [ "Öztürk", "Güneş", "" ], [ "Şarhon", "Karen", "" ] ]
new_dataset
0.992374
2205.15627
Marco Antonio Stranisci
Marco Antonio Stranisci, Simona Frenda, Eleonora Ceccaldi, Valerio Basile, Rossana Damiano, Viviana Patti
APPReddit: a Corpus of Reddit Posts Annotated for Appraisal
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-sa/4.0/
Despite the large number of computational resources for emotion recognition, there is a lack of data sets relying on appraisal models. According to Appraisal theories, emotions are the outcome of a multi-dimensional evaluation of events. In this paper, we present APPReddit, the first corpus of non-experimental data annotated according to this theory. After describing its development, we compare our resource with enISEAR, a corpus of events created in an experimental setting and annotated for appraisal. Results show that the two corpora can be mapped notwithstanding different typologies of data and annotations schemes. A SVM model trained on APPReddit predicts four appraisal dimensions without significant loss. Merging both corpora in a single training set increases the prediction of 3 out of 4 dimensions. Such findings pave the way to a better performing classification model for appraisal prediction.
[ { "version": "v1", "created": "Tue, 31 May 2022 09:11:57 GMT" } ]
2022-06-01T00:00:00
[ [ "Stranisci", "Marco Antonio", "" ], [ "Frenda", "Simona", "" ], [ "Ceccaldi", "Eleonora", "" ], [ "Basile", "Valerio", "" ], [ "Damiano", "Rossana", "" ], [ "Patti", "Viviana", "" ] ]
new_dataset
0.956352
2205.15648
Xing Wang
Xing Wang, Alvin Lim
Reliable and Efficient Broadcast Routing Using Multipoint Relays Over VANET For Vehicle Platooning
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we design and implement a reliable broadcast algorithm over a VANET for supporting multi-hop forwarding of vehicle sensor and control packets that will enable vehicles to platoon with each other in order to form a road train behind the lead truck. In particular, we use multipoint relays (MPRs) for packet transmission, which leads to more efficient communication in a VANET. We evaluate the performance based on simulation by running a platooning simulation application program, and show that with MPRs, the communication in the VANET to form a road train is more efficient and reliable.
[ { "version": "v1", "created": "Tue, 31 May 2022 09:39:31 GMT" } ]
2022-06-01T00:00:00
[ [ "Wang", "Xing", "" ], [ "Lim", "Alvin", "" ] ]
new_dataset
0.988813
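As background for the multipoint-relay mechanism mentioned above, the sketch below implements the standard greedy MPR selection heuristic (as popularized by OLSR): repeatedly pick the one-hop neighbor that covers the most still-uncovered two-hop neighbors. The toy topology is an assumption, and the paper's VANET/platooning-specific adaptations are not modeled.

```python
# Greedy multipoint relay (MPR) selection sketch: choose 1-hop neighbors until
# every 2-hop neighbor is covered, preferring the neighbor that covers the most
# still-uncovered nodes. The topology below is a made-up example.

def select_mprs(one_hop, two_hop_of):
    """one_hop: set of 1-hop neighbors; two_hop_of[n]: 2-hop nodes reached via n."""
    uncovered = set().union(*two_hop_of.values()) - one_hop
    mprs = set()
    while uncovered:
        candidates = one_hop - mprs
        if not candidates:
            break
        best = max(candidates, key=lambda n: len(two_hop_of[n] & uncovered))
        gained = two_hop_of[best] & uncovered
        if not gained:
            break                     # remaining nodes are not reachable in 2 hops
        mprs.add(best)
        uncovered -= gained
    return mprs

one_hop = {"A", "B", "C"}
two_hop_of = {"A": {"D", "E"}, "B": {"E", "F"}, "C": {"F"}}
print(select_mprs(one_hop, two_hop_of))   # e.g. {'A', 'B'}: D, E, F all covered
```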
2205.15661
Seyed Ali Bahrainian
Seyed Ali Bahrainian, Sheridan Feucht, Carsten Eickhoff
NEWTS: A Corpus for News Topic-Focused Summarization
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Text summarization models are approaching human levels of fidelity. Existing benchmarking corpora provide concordant pairs of full and abridged versions of Web, news, or professional content. To date, all summarization datasets operate under a one-size-fits-all paradigm that may not reflect the full range of organic summarization needs. Several recently proposed models (e.g., plug and play language models) have the capacity to condition the generated summaries on a desired range of themes. These capacities remain largely unused and unevaluated as there is no dedicated dataset that would support the task of topic-focused summarization. This paper introduces the first topical summarization corpus, NEWTS, based on the well-known CNN/Dailymail dataset, and annotated via online crowd-sourcing. Each source article is paired with two reference summaries, each focusing on a different theme of the source document. We evaluate a representative range of existing techniques and analyze the effectiveness of different prompting methods.
[ { "version": "v1", "created": "Tue, 31 May 2022 10:01:38 GMT" } ]
2022-06-01T00:00:00
[ [ "Bahrainian", "Seyed Ali", "" ], [ "Feucht", "Sheridan", "" ], [ "Eickhoff", "Carsten", "" ] ]
new_dataset
0.998565
2205.15757
Alex Shamis
Alex Shamis, Peter Pietzuch, Antoine Delignat-Lavaud, Andrew Paverd, and Manuel Costa
Dropbear: Machine Learning Marketplaces made Trustworthy with Byzantine Model Agreement
null
null
null
null
cs.DC cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Marketplaces for machine learning (ML) models are emerging as a way for organizations to monetize models. They allow model owners to retain control over hosted models by using cloud resources to execute ML inference requests for a fee, preserving model confidentiality. Clients that rely on hosted models require trustworthy inference results, even when models are managed by third parties. While the resilience and robustness of inference results can be improved by combining multiple independent models, such support is unavailable in today's marketplaces. We describe Dropbear, the first ML model marketplace that provides clients with strong integrity guarantees by combining results from multiple models in a trustworthy fashion. Dropbear replicates inference computation across a model group, which consists of multiple cloud-based GPU nodes belonging to different model owners. Clients receive inference certificates that prove agreement using a Byzantine consensus protocol, even under model heterogeneity and concurrent model updates. To improve performance, Dropbear batches inference and consensus operations separately: it first performs the inference computation across a model group, before ordering requests and model updates. Despite its strong integrity guarantees, Dropbear's performance matches that of state-of-the-art ML inference systems: deployed across 3 cloud sites, it handles 800 requests/s with ImageNet models.
[ { "version": "v1", "created": "Tue, 31 May 2022 12:45:56 GMT" } ]
2022-06-01T00:00:00
[ [ "Shamis", "Alex", "" ], [ "Pietzuch", "Peter", "" ], [ "Delignat-Lavaud", "Antoine", "" ], [ "Paverd", "Andrew", "" ], [ "Costa", "Manuel", "" ] ]
new_dataset
0.986735
2205.15768
Mark Boss
Mark Boss, Andreas Engelhardt, Abhishek Kar, Yuanzhen Li, Deqing Sun, Jonathan T. Barron, Hendrik P. A. Lensch, Varun Jampani
SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections
null
null
null
null
cs.CV cs.GR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inverse rendering of an object under entirely unknown capture conditions is a fundamental challenge in computer vision and graphics. Neural approaches such as NeRF have achieved photorealistic results on novel view synthesis, but they require known camera poses. Solving this problem with unknown camera poses is highly challenging as it requires joint optimization over shape, radiance, and pose. This problem is exacerbated when the input images are captured in the wild with varying backgrounds and illuminations. Standard pose estimation techniques fail in such image collections in the wild due to very few estimated correspondences across images. Furthermore, NeRF cannot relight a scene under any illumination, as it operates on radiance (the product of reflectance and illumination). We propose a joint optimization framework to estimate the shape, BRDF, and per-image camera pose and illumination. Our method works on in-the-wild online image collections of an object and produces relightable 3D assets for several use-cases such as AR/VR. To our knowledge, our method is the first to tackle this severely unconstrained task with minimal user interaction. Project page: https://markboss.me/publication/2022-samurai/ Video: https://youtu.be/LlYuGDjXp-8
[ { "version": "v1", "created": "Tue, 31 May 2022 13:16:48 GMT" } ]
2022-06-01T00:00:00
[ [ "Boss", "Mark", "" ], [ "Engelhardt", "Andreas", "" ], [ "Kar", "Abhishek", "" ], [ "Li", "Yuanzhen", "" ], [ "Sun", "Deqing", "" ], [ "Barron", "Jonathan T.", "" ], [ "Lensch", "Hendrik P. A.", "" ], [ "Jampani", "Varun", "" ] ]
new_dataset
0.996581
2205.15848
Qiancheng Fu
Qiancheng Fu, Qingshan Xu, Yew-Soon Ong, Wenbing Tao
Geo-Neus: Geometry-Consistent Neural Implicit Surfaces Learning for Multi-view Reconstruction
null
null
null
null
cs.CV cs.GR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, neural implicit surfaces learning by volume rendering has become popular for multi-view reconstruction. However, one key challenge remains: existing approaches lack explicit multi-view geometry constraints, hence usually fail to generate geometry-consistent surface reconstructions. To address this challenge, we propose geometry-consistent neural implicit surfaces learning for multi-view reconstruction. We theoretically analyze that there exists a gap between the volume rendering integral and point-based signed distance function (SDF) modeling. To bridge this gap, we directly locate the zero-level set of SDF networks and explicitly perform multi-view geometry optimization by leveraging the sparse geometry from structure from motion (SFM) and photometric consistency in multi-view stereo. This makes our SDF optimization unbiased and allows the multi-view geometry constraints to focus on the true surface optimization. Extensive experiments show that our proposed method achieves high-quality surface reconstruction in both complex thin structures and large smooth regions, thus outperforming the state of the art by a large margin.
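The core operation named in this abstract, locating the zero-level set of an SDF along a camera ray, can be illustrated with a small root-finding sketch. This is a generic, hedged illustration only: the sampling scheme, step count, and the analytic sphere SDF are assumptions of mine, not the paper's network or training code.

```python
import numpy as np

def locate_zero_level_set(sdf, origin, direction, t_near=0.0, t_far=3.0, n_samples=128):
    """Sample the SDF along a ray and return the first zero crossing, refined by
    linear interpolation between the two samples that bracket the sign change."""
    ts = np.linspace(t_near, t_far, n_samples)
    vals = np.array([sdf(origin + t * direction) for t in ts])
    for i in range(n_samples - 1):
        if vals[i] > 0 >= vals[i + 1]:              # outside -> inside transition
            w = vals[i] / (vals[i] - vals[i + 1])   # linear interpolation weight
            t_hit = ts[i] + w * (ts[i + 1] - ts[i])
            return origin + t_hit * direction
    return None                                     # no surface hit on this ray

unit_sphere_sdf = lambda p: np.linalg.norm(p) - 1.0  # analytic SDF used only for testing
hit = locate_zero_level_set(unit_sphere_sdf,
                            np.array([0.0, 0.0, -3.0]),
                            np.array([0.0, 0.0, 1.0]))
print(hit)  # approximately [0, 0, -1]
```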
[ { "version": "v1", "created": "Tue, 31 May 2022 14:52:07 GMT" } ]
2022-06-01T00:00:00
[ [ "Fu", "Qiancheng", "" ], [ "Xu", "Qingshan", "" ], [ "Ong", "Yew-Soon", "" ], [ "Tao", "Wenbing", "" ] ]
new_dataset
0.985663
2205.15868
Ming Ding
Wenyi Hong, Ming Ding, Wendi Zheng, Xinghan Liu, Jie Tang
CogVideo: Large-scale Pretraining for Text-to-Video Generation via Transformers
null
null
null
null
cs.CV cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large-scale pretrained transformers have created milestones in text (GPT-3) and text-to-image (DALL-E and CogView) generation. Their application to video generation still faces many challenges: the potentially huge computation cost makes training from scratch unaffordable, and the scarcity and weak relevance of text-video datasets hinder the model from understanding complex movement semantics. In this work, we present CogVideo, a 9B-parameter transformer trained by inheriting a pretrained text-to-image model, CogView2. We also propose a multi-frame-rate hierarchical training strategy to better align text and video clips. As (probably) the first open-source large-scale pretrained text-to-video model, CogVideo outperforms all publicly available models by a large margin in machine and human evaluations.
[ { "version": "v1", "created": "Sun, 29 May 2022 19:02:15 GMT" } ]
2022-06-01T00:00:00
[ [ "Hong", "Wenyi", "" ], [ "Ding", "Ming", "" ], [ "Zheng", "Wendi", "" ], [ "Liu", "Xinghan", "" ], [ "Tang", "Jie", "" ] ]
new_dataset
0.982818
2205.15915
Lorenzo Ceragioli
Lorenzo Ceragioli, Letterio Galletta, Pierpaolo Degano and David Basin
IFCIL: An Information Flow Configuration Language for SELinux (Extended Version)
Extended version of the paper "IFCIL: An Information Flow Configuration Language for SELinux"
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Security Enhanced Linux (SELinux) is a security architecture for Linux implementing mandatory access control. It has been used in numerous security-critical contexts ranging from servers to mobile devices. However, using it is challenging, as SELinux security policies are difficult to write, understand, and maintain. Recently, the intermediate language CIL was introduced to foster the development of high-level policy languages and to write structured configurations. However, CIL lacks mechanisms for ensuring that the resulting configurations obey desired information flow policies. To remedy this, we propose IFCIL, a backward-compatible extension of CIL for specifying fine-grained information flow requirements for CIL configurations. Using IFCIL, administrators can express, e.g., confidentiality, integrity, and non-interference properties. We also provide a tool to statically verify these requirements.
[ { "version": "v1", "created": "Tue, 31 May 2022 16:03:53 GMT" } ]
2022-06-01T00:00:00
[ [ "Ceragioli", "Lorenzo", "" ], [ "Galletta", "Letterio", "" ], [ "Degano", "Pierpaolo", "" ], [ "Basin", "David", "" ] ]
new_dataset
0.997925
2205.15930
Elmurod Kuriyozov
Sanatbek Matlatipov, Hulkar Rahimboeva, Jaloliddin Rajabov, Elmurod Kuriyozov
Uzbek Sentiment Analysis based on local Restaurant Reviews
The International Conference on Agglutinative Language Technologies as a challenge of Natural Language Processing (ALTNLP) 2022, Koper, Slovenia
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Extracting useful information for sentiment analysis and classification from large amounts of user-generated feedback, such as restaurant reviews, is a crucial natural language processing task: it not only supports customer satisfaction through personalized services, but can also influence the further development of a company. In this paper, we present work on collecting restaurant review data as a sentiment analysis dataset for the Uzbek language, a member of the Turkic family that is heavily affected by the low-resource constraint, and provide further analysis of the novel dataset by evaluating different techniques, from logistic regression based models and support vector machines to deep learning models such as recurrent neural networks and convolutional neural networks. The paper includes detailed information on how the data was collected, how it was pre-processed for better quality, as well as the experimental setups for the evaluation process. The overall evaluation results indicate that pre-processing steps, such as stemming for agglutinative languages, yield better results, with the best-performing model eventually achieving 91% accuracy.
[ { "version": "v1", "created": "Tue, 31 May 2022 16:21:00 GMT" } ]
2022-06-01T00:00:00
[ [ "Matlatipov", "Sanatbek", "" ], [ "Rahimboeva", "Hulkar", "" ], [ "Rajabov", "Jaloliddin", "" ], [ "Kuriyozov", "Elmurod", "" ] ]
new_dataset
0.999744
2205.15943
Cameron Ballard
Cameron Ballard, Ian Goldstein, Pulak Mehta, Genesis Smothers, Kejsi Take, Victoria Zhong, Rachel Greenstadt, Tobias Lauinger, Damon McCoy
Conspiracy Brokers: Understanding the Monetization of YouTube Conspiracy Theories
null
WWW 2022 Proceedings of the ACM Web Conference, April 2022, Pages 2707-2718
10.1145/3485447.3512142
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Conspiracy theories are increasingly a subject of research interest as society grapples with their rapid growth in areas such as politics or public health. Previous work has established YouTube as one of the most popular sites for people to host and discuss different theories. In this paper, we present an analysis of monetization methods of conspiracy theorist YouTube creators and the types of advertisers potentially targeting this content. We collect 184,218 ad impressions from 6,347 unique advertisers found on conspiracy-focused channels and mainstream YouTube content. We classify the ads into business categories and compare their prevalence between conspiracy and mainstream content. We also identify common offsite monetization methods. In comparison with mainstream content, conspiracy videos had similar levels of ads from well-known brands, but an almost eleven times higher prevalence of likely predatory or deceptive ads. Additionally, we found that conspiracy channels were more than twice as likely as mainstream channels to use offsite monetization methods, and 53% of the demonetized channels we observed were linking to third-party sites for alternative monetization opportunities. Our results indicate that conspiracy theorists on YouTube had many potential avenues to generate revenue, and that predatory ads were more frequently served for conspiracy videos.
[ { "version": "v1", "created": "Tue, 31 May 2022 16:42:52 GMT" } ]
2022-06-01T00:00:00
[ [ "Ballard", "Cameron", "" ], [ "Goldstein", "Ian", "" ], [ "Mehta", "Pulak", "" ], [ "Smothers", "Genesis", "" ], [ "Take", "Kejsi", "" ], [ "Zhong", "Victoria", "" ], [ "Greenstadt", "Rachel", "" ], [ "Lauinger", "Tobias", "" ], [ "McCoy", "Damon", "" ] ]
new_dataset
0.972357
2205.15955
Junlin Han
Junlin Han, Lars Petersson, Hongdong Li, Ian Reid
CropMix: Sampling a Rich Input Distribution via Multi-Scale Cropping
Code: https://github.com/JunlinHan/CropMix
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a simple method, CropMix, for the purpose of producing a rich input distribution from the original dataset distribution. Unlike single random cropping, which may inadvertently capture only limited information or irrelevant information, like pure background or unrelated objects, we crop an image multiple times using distinct crop scales, thereby ensuring that multi-scale information is captured. The new input distribution, which serves as training data useful for a number of vision tasks, is then formed by simply mixing multiple cropped views. We first demonstrate that CropMix can be seamlessly applied to virtually any training recipe and neural network architecture performing classification tasks. CropMix is shown to improve the performance of image classifiers on several benchmark tasks across-the-board without sacrificing computational simplicity and efficiency. Moreover, we show that CropMix is of benefit to both contrastive learning and masked image modeling towards more powerful representations, where preferable results are achieved when learned representations are transferred to downstream tasks. Code is available at GitHub.
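A minimal sketch of the multi-scale crop-and-mix idea described above, in plain NumPy. The chosen scales, the nearest-neighbour resize, and plain averaging of the views are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def crop_mix(image, scales=(0.2, 0.5, 0.8), rng=None):
    """Illustrative multi-scale crop-and-mix: take one random crop per scale,
    resize each crop back to full resolution by nearest-neighbour sampling,
    and average the views into a single mixed input."""
    rng = np.random.default_rng() if rng is None else rng
    h, w = image.shape[:2]
    views = []
    for s in scales:
        ch, cw = max(1, int(h * s)), max(1, int(w * s))
        top = rng.integers(0, h - ch + 1)
        left = rng.integers(0, w - cw + 1)
        crop = image[top:top + ch, left:left + cw]
        # Nearest-neighbour resize back to (h, w) so the views can be mixed.
        rows = (np.arange(h) * ch / h).astype(int)
        cols = (np.arange(w) * cw / w).astype(int)
        views.append(crop[rows][:, cols].astype(np.float32))
    return np.mean(views, axis=0)

mixed = crop_mix(np.random.rand(224, 224, 3))
print(mixed.shape)  # (224, 224, 3)
```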
[ { "version": "v1", "created": "Tue, 31 May 2022 16:57:28 GMT" } ]
2022-06-01T00:00:00
[ [ "Han", "Junlin", "" ], [ "Petersson", "Lars", "" ], [ "Li", "Hongdong", "" ], [ "Reid", "Ian", "" ] ]
new_dataset
0.995474
1907.06357
Francisco Revson Fernandes Pereira
Francisco Revson F. Pereira, Ruud Pellikaan and Giuliano Gadioli La Guardia and Francisco Marcos de Assis
Entanglement-assisted Quantum Codes from Algebraic Geometry Codes
Some results in this paper were presented at the 2019 IEEE International Symposium on Information Theory
null
10.1109/TIT.2021.3113367
null
cs.IT math.IT quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantum error correcting codes play the role of suppressing noise and decoherence in quantum systems by introducing redundancy. Some strategies can be used to improve the parameters of these codes. For example, entanglement can provide a way for quantum error correcting codes to achieve higher rates than the one obtained via the traditional stabilizer formalism. Such codes are called entanglement-assisted quantum (QUENTA) codes. In this paper, we use algebraic geometry codes to construct several families of QUENTA codes via the Euclidean and the Hermitian construction. Two of the families created have maximal entanglement and have quantum Singleton defect equal to zero or one. Comparing the other families with the respective quantum Gilbert-Varshamov bound, we show that our codes have a rate that surpasses that bound. Finally, asymptotically good towers of linear complementary dual codes are used to obtain asymptotically good families of maximal-entanglement QUENTA codes. Furthermore, a simple comparison with the quantum Gilbert-Varshamov bound demonstrates that, using our construction, it is possible to create an asymptotically good family of QUENTA codes that exceeds this bound.
[ { "version": "v1", "created": "Mon, 15 Jul 2019 08:08:21 GMT" }, { "version": "v2", "created": "Wed, 21 Aug 2019 09:57:20 GMT" } ]
2022-05-31T00:00:00
[ [ "Pereira", "Francisco Revson F.", "" ], [ "Pellikaan", "Ruud", "" ], [ "La Guardia", "Giuliano Gadioli", "" ], [ "de Assis", "Francisco Marcos", "" ] ]
new_dataset
0.999798
2002.00911
Mathieu Gonzalez
Mathieu Gonzalez, Amine Kacete, Albert Murienne, Eric Marchand
L6DNet: Light 6 DoF Network for Robust and Precise Object Pose Estimation with Small Datasets
This work has been accepted at IEEE Robotics and Automation Letters
null
10.1109/LRA.2021.3062605
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Estimating the 3D pose of an object is a challenging task that can be considered within augmented reality or robotic applications. In this paper, we propose a novel approach to perform 6 DoF object pose estimation from a single RGB-D image. We adopt a hybrid pipeline in two stages: data-driven and geometric respectively. The data-driven step consists of a classification CNN to estimate the object 2D location in the image from local patches, followed by a regression CNN trained to predict the 3D location of a set of keypoints in the camera coordinate system. To extract the pose information, the geometric step consists in aligning the 3D points in the camera coordinate system with the corresponding 3D points in world coordinate system by minimizing a registration error, thus computing the pose. Our experiments on the standard dataset LineMod show that our approach is more robust and accurate than state-of-the-art methods. The approach is also validated to achieve a 6 DoF positioning task by visual servoing.
[ { "version": "v1", "created": "Mon, 3 Feb 2020 17:41:29 GMT" }, { "version": "v2", "created": "Mon, 24 Feb 2020 17:02:45 GMT" }, { "version": "v3", "created": "Tue, 25 Feb 2020 07:47:38 GMT" }, { "version": "v4", "created": "Thu, 15 Oct 2020 14:03:03 GMT" }, { "version": "v5", "created": "Thu, 7 Jan 2021 08:18:10 GMT" }, { "version": "v6", "created": "Sun, 29 May 2022 20:51:19 GMT" } ]
2022-05-31T00:00:00
[ [ "Gonzalez", "Mathieu", "" ], [ "Kacete", "Amine", "" ], [ "Murienne", "Albert", "" ], [ "Marchand", "Eric", "" ] ]
new_dataset
0.997587
2006.04583
Leandro Montero
Leandro Montero
Vertex removal in biclique graphs
null
null
null
null
cs.DM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A \textit{biclique} is a maximal induced complete bipartite subgraph. The \textit{biclique graph} of a graph $H$, denoted by $KB(H)$, is the intersection graph of the family of all bicliques of $H$. In this work we address the following question: Given a biclique graph $G=KB(H)$, is it possible to remove a vertex $q$ of $G$, such that $G - \{q\}$ is a biclique graph? And if possible, can we obtain a graph $H'$ such that $G - \{q\} = KB(H')$? We show that the general question has a "no" for answer. However, we prove that if $G$ has a vertex $q$ such that $d(q) = 2$, then $G-\{q\}$ is a biclique graph and we show how to obtain $H'$.
[ { "version": "v1", "created": "Mon, 8 Jun 2020 13:30:34 GMT" }, { "version": "v2", "created": "Tue, 22 Mar 2022 09:50:19 GMT" }, { "version": "v3", "created": "Mon, 30 May 2022 13:58:25 GMT" } ]
2022-05-31T00:00:00
[ [ "Montero", "Leandro", "" ] ]
new_dataset
0.987309
2008.05440
Jie Yang
Jie Yang, Kaichun Mo, Yu-Kun Lai, Leonidas J. Guibas, Lin Gao
DSG-Net: Learning Disentangled Structure and Geometry for 3D Shape Generation
Accept to ACM Transaction on Graphics 2022, 26 pages
null
null
null
cs.GR cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
3D shape generation is a fundamental operation in computer graphics. While significant progress has been made, especially with recent deep generative models, it remains a challenge to synthesize high-quality shapes with rich geometric details and complex structure, in a controllable manner. To tackle this, we introduce DSG-Net, a deep neural network that learns a disentangled structured and geometric mesh representation for 3D shapes, where two key aspects of shapes, geometry and structure, are encoded in a synergistic manner to ensure plausibility of the generated shapes, while also being disentangled as much as possible. This supports a range of novel shape generation applications with disentangled control, such as interpolation of structure (geometry) while keeping geometry (structure) unchanged. To achieve this, we simultaneously learn structure and geometry through variational autoencoders (VAEs) in a hierarchical manner for both, with bijective mappings at each level. In this manner, we effectively encode geometry and structure in separate latent spaces, while ensuring their compatibility: the structure is used to guide the geometry and vice versa. At the leaf level, the part geometry is represented using a conditional part VAE, to encode high-quality geometric details, guided by the structure context as the condition. Our method not only supports controllable generation applications but also produces high-quality synthesized shapes, outperforming state-of-the-art methods. The code has been released at https://github.com/IGLICT/DSG-Net.
[ { "version": "v1", "created": "Wed, 12 Aug 2020 17:06:51 GMT" }, { "version": "v2", "created": "Fri, 14 Aug 2020 02:38:45 GMT" }, { "version": "v3", "created": "Mon, 24 May 2021 14:45:26 GMT" }, { "version": "v4", "created": "Sat, 28 May 2022 17:40:15 GMT" } ]
2022-05-31T00:00:00
[ [ "Yang", "Jie", "" ], [ "Mo", "Kaichun", "" ], [ "Lai", "Yu-Kun", "" ], [ "Guibas", "Leonidas J.", "" ], [ "Gao", "Lin", "" ] ]
new_dataset
0.998164
2103.04807
Peter Steiner
Peter Steiner (1), Azarakhsh Jalalvand (2), Simon Stone (1), Peter Birkholz (2) ((1) Institute for Acoustics and Speech Communication, Technische Universit\"at Dresden, Dresden, Germany, (2) IDLab, Ghent University - imec, Ghent, Belgium)
PyRCN: A Toolbox for Exploration and Application of Reservoir Computing Networks
Preprint accepted for publication in Engineering Applications of Artificial Intelligence
Engineering Applications of Artificial Intelligence 113 (2022) 104964
10.1016/j.engappai.2022.104964
null
cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Reservoir Computing Networks (RCNs) belong to a group of machine learning techniques that project the input space non-linearly into a high-dimensional feature space, where the underlying task can be solved linearly. Popular variants of RCNs are capable of solving complex tasks equivalently to widely used deep neural networks, but with a substantially simpler training paradigm based on linear regression. In this paper, we show how to uniformly describe RCNs with small and clearly defined building blocks, and we introduce the Python toolbox PyRCN (Python Reservoir Computing Networks) for optimizing, training and analyzing RCNs on arbitrarily large datasets. The tool is based on widely-used scientific packages and complies with the scikit-learn interface specification. It provides a platform for educational and exploratory analyses of RCNs, as well as a framework to apply RCNs on complex tasks including sequence processing. With a small number of building blocks, the framework allows the implementation of numerous different RCN architectures. We provide code examples on how to set up RCNs for time series prediction and for sequence classification tasks. PyRCN is around ten times faster than reference toolboxes on a benchmark task while requiring substantially less boilerplate code.
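To illustrate the reservoir computing principle described above (a fixed nonlinear reservoir plus a linear readout trained by ridge regression), here is a minimal echo state network sketch in plain NumPy. It deliberately does not use PyRCN's own API; the reservoir size, scaling constants, and toy sine-prediction task are assumptions chosen purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy task: one-step-ahead prediction of a sine wave.
t = np.linspace(0, 20 * np.pi, 2000)
u, y = np.sin(t)[:-1], np.sin(t)[1:]

n_res, alpha = 200, 1e-6
W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # scale spectral radius below 1

# Collect reservoir states driven by the input sequence (the reservoir is fixed).
states = np.zeros((len(u), n_res))
x = np.zeros(n_res)
for k, u_k in enumerate(u):
    x = np.tanh(W_in[:, 0] * u_k + W @ x)
    states[k] = x

# Linear readout via ridge regression: the only trained part of the model.
S, Y = states[100:], y[100:]                      # drop a washout period
W_out = np.linalg.solve(S.T @ S + alpha * np.eye(n_res), S.T @ Y)
print("train MSE:", np.mean((S @ W_out - Y) ** 2))
```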
[ { "version": "v1", "created": "Mon, 8 Mar 2021 15:00:48 GMT" }, { "version": "v2", "created": "Mon, 11 Oct 2021 14:27:14 GMT" }, { "version": "v3", "created": "Tue, 10 May 2022 13:14:28 GMT" } ]
2022-05-31T00:00:00
[ [ "Steiner", "Peter", "" ], [ "Jalalvand", "Azarakhsh", "" ], [ "Stone", "Simon", "" ], [ "Birkholz", "Peter", "" ] ]
new_dataset
0.954929
2104.04798
Kaleem Nawaz Khan Mr.
Kaleem Nawaz Khan, Najeeb Ullah, Sikandar Ali, Muhammad Salman Khan, Mohammad Nauman and Anwar Ghani
Op2Vec: An Opcode Embedding Technique and Dataset Design for End-to-End Detection of Android Malware
null
null
10.1155/2022/3710968
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Android is one of the leading operating systems for smartphones in terms of market share and usage. Unfortunately, it is also an appealing target for attackers to compromise its security through malicious applications. To tackle this issue, domain experts and researchers are trying different techniques to stop such attacks. All attempts at securing the Android platform have been somewhat successful. However, existing detection techniques have severe shortcomings, including the cumbersome process of feature engineering. Designing representative features requires expert domain knowledge. There is a need for minimizing human experts' intervention by circumventing handcrafted feature engineering. Deep learning could be exploited by extracting deep features automatically. Previous work has shown that operational codes (opcodes) of executables provide key information to be used with deep learning models for the detection of malicious applications. The only challenge is how to feed opcode information to deep learning models. Existing techniques use one-hot encoding to tackle the challenge. However, the one-hot encoding scheme has severe limitations. In this paper, we introduce (1) a novel technique for opcode embedding, which we name Op2Vec, and (2) a dataset, built on the learned Op2Vec, for end-to-end detection of Android malware. Introducing the end-to-end Android malware detection technique avoids expert-intensive handcrafted feature extraction and ensures automation. Some of the recent deep learning-based techniques showed significantly improved results when tested with the proposed approach and achieved an average detection accuracy of 97.47%, precision of 0.976 and F1 score of 0.979.
[ { "version": "v1", "created": "Sat, 10 Apr 2021 15:56:37 GMT" }, { "version": "v2", "created": "Tue, 1 Mar 2022 16:30:43 GMT" } ]
2022-05-31T00:00:00
[ [ "Khan", "Kaleem Nawaz", "" ], [ "Ullah", "Najeeb", "" ], [ "Ali", "Sikandar", "" ], [ "Khan", "Muhammad Salman", "" ], [ "Nauman", "Mohammad", "" ], [ "Ghani", "Anwar", "" ] ]
new_dataset
0.999461
2107.10938
Shi Zhou Dr.
Jie Li, Vasileios Giotsas, Yangyang Wang, Shi Zhou
BGP-Multipath Routing in the Internet
38 pages, 8 figures, 8 tables
Published in IEEE Transactions on Network and Service Management (TNSM) in May 2022
10.1109/TNSM.2022.3177471
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
BGP-Multipath (BGP-M) is a multipath routing technique for load balancing. Distinct from other techniques deployed at a router inside an Autonomous System (AS), BGP-M is deployed at a border router that has installed multiple inter-domain border links to a neighbour AS. It uses the equal-cost multi-path (ECMP) function of a border router to share traffic to a destination prefix on different border links. Despite recent research interests in multipath routing, there is little study on BGP-M. Here we provide the first measurement and a comprehensive analysis of BGP-M routing in the Internet. We extracted information on BGP-M from query data collected from Looking Glass (LG) servers. We revealed that BGP-M has already been extensively deployed and used in the Internet. A particular example is Hurricane Electric (AS6939), a Tier-1 network operator, which has implemented >1,000 cases of BGP-M at 69 of its border routers to prefixes in 611 of its neighbour ASes, including many hyper-giant ASes and large content providers, on both IPv4 and IPv6 Internet. We examined the distribution and operation of BGP-M. We also ran traceroute using RIPE Atlas to infer the routing paths, the schemes of traffic allocation, and the delay on border links. This study provided the state-of-the-art knowledge on BGP-M with novel insights into the unique features and the distinct advantages of BGP-M as an effective and readily available technique for load balancing.
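BGP-M relies on the router's equal-cost multi-path (ECMP) function to spread traffic for one destination prefix over several border links. The snippet below is a generic, hedged sketch of hash-based ECMP next-hop selection: hashing the flow 5-tuple is a common ECMP design, not a claim about the specific traffic-allocation schemes measured in the paper.

```python
import hashlib

def ecmp_link(flow, links):
    """Pick one of several equal-cost border links for a flow by hashing its
    5-tuple, so packets of the same flow always use the same link while
    different flows spread across all links."""
    key = "|".join(str(f) for f in flow).encode()
    digest = int.from_bytes(hashlib.sha256(key).digest()[:8], "big")
    return links[digest % len(links)]

links = ["link-A", "link-B", "link-C"]
flow = ("10.0.0.1", "203.0.113.7", 6, 51512, 443)  # src, dst, proto, sport, dport
print(ecmp_link(flow, links))
```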
[ { "version": "v1", "created": "Thu, 22 Jul 2021 21:50:40 GMT" }, { "version": "v2", "created": "Sun, 29 May 2022 18:45:19 GMT" } ]
2022-05-31T00:00:00
[ [ "Li", "Jie", "" ], [ "Giotsas", "Vasileios", "" ], [ "Wang", "Yangyang", "" ], [ "Zhou", "Shi", "" ] ]
new_dataset
0.998935
2108.05539
Hongtao Wu
Hongtao Wu, Xin Meng, Sipu Ruan, Gregory Chirikjian
Put the Bear on the Chair! Intelligent Robot Interaction with Previously Unseen Chairs via Robot Imagination
IEEE ICRA 2022. Video demos are available at https://chirikjianlab.github.io/putbearonchair/
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study the problem of autonomously seating a teddy bear on a previously unseen chair. To achieve this goal, we present a novel method for robots to imagine the sitting pose of the bear by physically simulating a virtual humanoid agent sitting on the chair. We also develop a robotic system which leverages motion planning to plan SE(2) motions for a humanoid robot to walk to the chair and whole-body motions to put the bear on it. Furthermore, to cope with cases where the chair is not in an accessible pose for placing the bear, a human assistance module is introduced for a human to follow language instructions given by the robot to rotate the chair and help make the chair accessible. We implement our method with a robot arm and a humanoid robot. We calibrate the proposed system with 3 chairs and test on 12 previously unseen chairs in both accessible and inaccessible poses extensively. Results show that our method enables the robot to autonomously seat the teddy bear on the 12 previously unseen chairs with a very high success rate. The human assistance module is also shown to be very effective in changing the accessibility of the chair. Video demos and more details are available at https://chirikjianlab.github.io/putbearonchair/.
[ { "version": "v1", "created": "Thu, 12 Aug 2021 05:12:40 GMT" }, { "version": "v2", "created": "Mon, 30 May 2022 07:19:15 GMT" } ]
2022-05-31T00:00:00
[ [ "Wu", "Hongtao", "" ], [ "Meng", "Xin", "" ], [ "Ruan", "Sipu", "" ], [ "Chirikjian", "Gregory", "" ] ]
new_dataset
0.993233
2108.07425
Xutong Jin
Xutong Jin, Sheng Li, Guoping Wang, Dinesh Manocha
NeuralSound: Learning-based Modal Sound Synthesis With Acoustic Transfer
null
null
null
null
cs.SD cs.GR eess.AS
http://creativecommons.org/licenses/by/4.0/
We present a novel learning-based modal sound synthesis approach that includes a mixed vibration solver for modal analysis and an end-to-end sound radiation network for acoustic transfer. Our mixed vibration solver consists of a 3D sparse convolution network and a Locally Optimal Block Preconditioned Conjugate Gradient module (LOBPCG) for iterative optimization. Moreover, we highlight the correlation between a standard modal vibration solver and our network architecture. Our radiation network predicts the Far-Field Acoustic Transfer maps (FFAT Maps) from the surface vibration of the object. The overall running time of our learning method for any new object is less than one second on a GTX 3080 Ti GPU while maintaining a high sound quality close to the ground truth that is computed using standard numerical methods. We also evaluate the numerical accuracy and perceptual accuracy of our sound synthesis approach on different objects corresponding to various materials.
[ { "version": "v1", "created": "Tue, 17 Aug 2021 03:44:45 GMT" }, { "version": "v2", "created": "Wed, 27 Apr 2022 15:23:26 GMT" }, { "version": "v3", "created": "Fri, 29 Apr 2022 10:16:35 GMT" }, { "version": "v4", "created": "Sat, 28 May 2022 04:38:07 GMT" } ]
2022-05-31T00:00:00
[ [ "Jin", "Xutong", "" ], [ "Li", "Sheng", "" ], [ "Wang", "Guoping", "" ], [ "Manocha", "Dinesh", "" ] ]
new_dataset
0.985837
2109.02818
Hao Chen
Hao Chen
List-decodable Codes and Covering Codes
54 pages, corrected version
null
null
null
cs.IT math.IT
http://creativecommons.org/publicdomain/zero/1.0/
List-decodable codes have been an active topic in theoretical computer science. There are general results about the list-decodability to the Johnson radius and the list-decoding capacity theorem. In this paper we show that rates, list-decodable radius and list sizes are closely related to the classical topic of covering codes. We prove new general simple but strong upper bounds for list-decodable codes in general finite metric spaces based on various covering codes. The general covering code upper bounds can be applied to the case that the volumes of the balls depend on the centers, not only on the radius. Then any good upper bound on the covering radius or the size of a covering code implies a good upper bound on the sizes of list-decodable codes. Our results give exponential improvements on the recent generalized Singleton upper bound in STOC 2020 for Hamming metric list-decodable codes, when the code lengths are large. A generalized Singleton upper bound for average-radius list-decodable codes is also given from our general covering code upper bound. We also suggest studying combinatorial covering list-decodable codes as a natural generalization of combinatorial list-decodable codes. We apply our general covering code upper bounds to list-decodable rank-metric codes, list-decodable subspace codes, list-decodable insertion codes, list-decodable deletion codes, list-decodable sum-rank-metric codes and list-decodable permutation codes. Some new and better results about the non-list-decodability of rank-metric codes, subspace codes, sum-rank-metric codes and permutation codes with various metrics are obtained.
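The connection between covering codes and list-decodability rests on a short counting argument. The LaTeX fragment below is a hedged sketch of its most basic form; the notation B(x,r), D, C and L is mine, not the paper's, and the paper's bounds refine this idea to balls whose volumes depend on their centers.

```latex
% Let (X, d) be a finite metric space and D \subseteq X a covering code of
% radius r, i.e. X = \bigcup_{z \in D} B(z, r).  If C \subseteq X is
% (r, L)-list-decodable, meaning |C \cap B(x, r)| \le L for every x \in X, then
\[
  |C| \;=\; \Bigl|\, C \cap \bigcup_{z \in D} B(z, r) \Bigr|
      \;\le\; \sum_{z \in D} \bigl| C \cap B(z, r) \bigr|
      \;\le\; L \cdot |D| ,
\]
% so any upper bound on the size of a radius-r covering code immediately
% bounds the size of an (r, L)-list-decodable code.
```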
[ { "version": "v1", "created": "Tue, 7 Sep 2021 02:04:41 GMT" }, { "version": "v10", "created": "Thu, 25 Nov 2021 23:02:44 GMT" }, { "version": "v11", "created": "Wed, 5 Jan 2022 11:15:03 GMT" }, { "version": "v12", "created": "Mon, 17 Jan 2022 23:32:54 GMT" }, { "version": "v13", "created": "Fri, 27 May 2022 22:40:14 GMT" }, { "version": "v2", "created": "Fri, 17 Sep 2021 13:30:00 GMT" }, { "version": "v3", "created": "Sun, 3 Oct 2021 03:20:08 GMT" }, { "version": "v4", "created": "Tue, 12 Oct 2021 04:03:10 GMT" }, { "version": "v5", "created": "Tue, 19 Oct 2021 07:42:22 GMT" }, { "version": "v6", "created": "Mon, 25 Oct 2021 16:05:38 GMT" }, { "version": "v7", "created": "Fri, 29 Oct 2021 00:42:26 GMT" }, { "version": "v8", "created": "Fri, 19 Nov 2021 10:16:02 GMT" }, { "version": "v9", "created": "Mon, 22 Nov 2021 22:15:57 GMT" } ]
2022-05-31T00:00:00
[ [ "Chen", "Hao", "" ] ]
new_dataset
0.997738
2112.09329
Mikaela Angelina Uy
Mikaela Angelina Uy, Yen-yu Chang, Minhyuk Sung, Purvi Goel, Joseph Lambourne, Tolga Birdal, Leonidas Guibas
Point2Cyl: Reverse Engineering 3D Objects from Point Clouds to Extrusion Cylinders
CVPR 2022
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We propose Point2Cyl, a supervised network transforming a raw 3D point cloud to a set of extrusion cylinders. Reverse engineering from a raw geometry to a CAD model is an essential task to enable manipulation of the 3D data in shape editing software and thus expand their usages in many downstream applications. Particularly, the form of CAD models having a sequence of extrusion cylinders -- a 2D sketch plus an extrusion axis and range -- and their boolean combinations is not only widely used in the CAD community/software but also has great expressivity of shapes, compared to having limited types of primitives (e.g., planes, spheres, and cylinders). In this work, we introduce a neural network that solves the extrusion cylinder decomposition problem in a geometry-grounded way by first learning underlying geometric proxies. Precisely, our approach first predicts per-point segmentation, base/barrel labels and normals, then estimates the underlying extrusion parameters in differentiable and closed-form formulations. Our experiments show that our approach demonstrates the best performance on two recent CAD datasets, Fusion Gallery and DeepCAD, and we further showcase our approach on reverse engineering and editing.
[ { "version": "v1", "created": "Fri, 17 Dec 2021 05:22:28 GMT" }, { "version": "v2", "created": "Mon, 30 May 2022 00:55:47 GMT" } ]
2022-05-31T00:00:00
[ [ "Uy", "Mikaela Angelina", "" ], [ "Chang", "Yen-yu", "" ], [ "Sung", "Minhyuk", "" ], [ "Goel", "Purvi", "" ], [ "Lambourne", "Joseph", "" ], [ "Birdal", "Tolga", "" ], [ "Guibas", "Leonidas", "" ] ]
new_dataset
0.999806
2201.01810
Kamil Erdayandi
Kamil Erdayandi, Amrit Paudel, Lucas Cordeiro, Mustafa A. Mustafa
Privacy-Friendly Peer-to-Peer Energy Trading: A Game Theoretical Approach
To be published in IEEE Power & Energy Society General Meeting (GM), 2022
null
null
null
cs.GT cs.AI cs.CR cs.MA
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a decentralized, privacy-friendly energy trading platform (PFET) based on a game-theoretical approach, specifically Stackelberg competition. Unlike existing trading schemes, PFET provides a competitive market in which prices and demands are determined based on competition, and computations are performed in a decentralized manner which does not rely on trusted third parties. It uses a homomorphic encryption cryptosystem to encrypt sensitive information of buyers and sellers, such as sellers' prices and buyers' demands. Buyers calculate the total demand on a particular seller using encrypted data, and sensitive buyer profile data is hidden from sellers. Hence, the privacy of both sellers and buyers is preserved. Through privacy analysis and performance evaluation, we show that PFET preserves users' privacy in an efficient manner.
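Summing demands over ciphertexts is exactly what an additively homomorphic scheme such as Paillier provides. The toy example below is a hedged sketch of that property only: tiny hard-coded primes and a direct textbook implementation chosen for readability, not the paper's protocol, and not secure for real use.

```python
from math import gcd
import random

# Toy Paillier keypair (tiny primes, for illustration only).
p, q = 1789, 1907
n, n2 = p * q, (p * q) ** 2
g = n + 1
lam = (p - 1) * (q - 1) // gcd(p - 1, q - 1)   # lcm(p - 1, q - 1)
mu = pow(lam, -1, n)                           # modular inverse (Python 3.8+); valid since g = n + 1

def encrypt(m):
    r = random.randrange(2, n)
    while gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n

# Each buyer encrypts its demand; anyone can aggregate the ciphertexts by
# multiplying them, and only the key holder learns the total.
demands = [12, 7, 30]
aggregate = 1
for d in demands:
    aggregate = (aggregate * encrypt(d)) % n2
print(decrypt(aggregate))   # 49
```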
[ { "version": "v1", "created": "Wed, 5 Jan 2022 20:41:32 GMT" }, { "version": "v2", "created": "Sun, 29 May 2022 00:27:56 GMT" } ]
2022-05-31T00:00:00
[ [ "Erdayandi", "Kamil", "" ], [ "Paudel", "Amrit", "" ], [ "Cordeiro", "Lucas", "" ], [ "Mustafa", "Mustafa A.", "" ] ]
new_dataset
0.975774
2201.02374
Haisen Zhao
Fanchao Zhong and Yonglai Xu and Haisen Zhao and Lin Lu
As-Continuous-As-Possible Extrusion Fabrication of Surface Models
16 pages, 23 figures
null
null
null
cs.GR
http://creativecommons.org/licenses/by/4.0/
We propose a novel computational framework for optimizing the toolpath continuity in fabricating surface models on an extrusion-based 3D printer. Toolpath continuity has been a critical issue for extrusion-based fabrication that affects both quality and efficiency. Transfer moves cause non-smooth or bumpy surfaces and get worse for materials with large inertia like clay. For surface models, the effects of continuity are even more severe, in terms of surface quality and model stability. In this paper, we introduce an original criterion "one-path-patch" (OPP), for representing a shell surface patch that can be traversed in one path considering fabrication constraints. We study the properties of an OPP and the merging operations for OPPs, and propose a bottom-up OPP merging procedure for decomposing the given shell surface into a minimal number of OPPs and generating the "as-continuous-as-possible" (ACAP) toolpath. Furthermore, we customize the path planning algorithm with a curved layer printing scheme, which reduces the staircase defect and improves the toolpath continuity by possibly connecting multiple segments. We evaluate the ACAP algorithm for both ceramic and thermoplastic materials, and results demonstrate that it improves the fabrication of surface models in both surface quality and efficiency.
[ { "version": "v1", "created": "Fri, 7 Jan 2022 09:18:59 GMT" }, { "version": "v2", "created": "Mon, 10 Jan 2022 20:22:50 GMT" }, { "version": "v3", "created": "Sat, 28 May 2022 08:39:43 GMT" } ]
2022-05-31T00:00:00
[ [ "Zhong", "Fanchao", "" ], [ "Xu", "Yonglai", "" ], [ "Zhao", "Haisen", "" ], [ "Lu", "Lin", "" ] ]
new_dataset
0.995073
2202.07133
Chuqing Hu
Chuqing Hu, Sinclair Hudson, Martin Ethier, Mohammad Al-Sharman, Derek Rayside, William Melek
Sim-to-Real Domain Adaptation for Lane Detection and Classification in Autonomous Driving
Accepted by IV 2022
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
While supervised detection and classification frameworks in autonomous driving require large labelled datasets to converge, Unsupervised Domain Adaptation (UDA) approaches, facilitated by synthetic data generated from photo-real simulated environments, are considered low-cost and less time-consuming solutions. In this paper, we propose UDA schemes using adversarial discriminative and generative methods for lane detection and classification applications in autonomous driving. We also present the Simulanes dataset generator, which creates a naturalistic synthetic dataset utilizing CARLA's vast traffic scenarios and weather conditions. The proposed UDA frameworks take the synthesized dataset with labels as the source domain, whereas the target domain is the unlabelled real-world data. Using adversarial generative and feature discriminators, the learnt models are tuned to predict the lane location and class in the target domain. The proposed techniques are evaluated using both real-world and our synthetic datasets. The results show that the proposed methods outperform other baseline schemes in terms of detection and classification accuracy and consistency. The ablation study reveals that the size of the simulation dataset plays an important role in the classification performance of the proposed methods. Our UDA frameworks are available at https://github.com/anita-hu/sim2real-lane-detection and our dataset generator is released at https://github.com/anita-hu/simulanes
[ { "version": "v1", "created": "Tue, 15 Feb 2022 02:10:14 GMT" }, { "version": "v2", "created": "Mon, 30 May 2022 12:12:24 GMT" } ]
2022-05-31T00:00:00
[ [ "Hu", "Chuqing", "" ], [ "Hudson", "Sinclair", "" ], [ "Ethier", "Martin", "" ], [ "Al-Sharman", "Mohammad", "" ], [ "Rayside", "Derek", "" ], [ "Melek", "William", "" ] ]
new_dataset
0.962917
2203.05788
Xuyang Ma
Xuyang Ma, Han Wu, Du Xu, Katinka Wolter
CBlockSim: A Modular High-Performance Blockchain Simulator
null
null
null
null
cs.DC cs.PF
http://creativecommons.org/licenses/by/4.0/
Blockchain has attracted much attention from both academia and industry since emerging in 2008. Due to the inconvenience of the deployment of large-scale blockchains, blockchain simulators are used to facilitate blockchain design and implementation. We evaluate state-of-the-art simulators applied to both Bitcoin and Ethereum and find that they suffer from low performance and scalability which are significant limitations. To build a more general and faster blockchain simulator, we extend an existing blockchain simulator, i.e. BlockSim. We add a network module integrated with a network topology generation algorithm and a block propagation algorithm to generate a realistic blockchain network and simulate the block propagation efficiently. We design a binary transaction pool structure and migrate BlockSim from Python to C++ so that bitwise operations can be used to accelerate the simulation and reduce memory usage. Moreover, we modularize the simulator based on five primary blockchain processes. Significant blockchain elements including consensus protocols (PoW and PoS), information propagation algorithms (Gossip) and finalization rules (Longest rule and GHOST rule) are implemented in individual modules and can be combined flexibly to simulate different types of blockchains. Experiments demonstrate that the new simulator reduces the simulation time by an order of magnitude and improves scalability, enabling us to simulate more than ten thousand nodes, roughly the size of the Bitcoin and Ethereum networks. Two typical use cases are proposed to investigate network-related issues which are not covered by most other simulators.
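The "binary transaction pool" idea mentioned above, representing pool membership as bits so that propagation and confirmation become bitwise operations, can be sketched in a few lines. The class below is an illustrative Python toy with made-up method names; the real simulator is written in C++ and its data layout is not described in detail here.

```python
class BitmapTxPool:
    """Toy binary transaction pool: transaction i is 'in the pool' when bit i
    of a single integer is set, so adding, merging and removing confirmed
    transactions are plain bitwise operations."""

    def __init__(self, bits=0):
        self.bits = bits

    def add(self, tx_id):
        self.bits |= 1 << tx_id

    def merge(self, other):                 # union of two pools, e.g. after a broadcast
        self.bits |= other.bits

    def remove_confirmed(self, block_bits): # drop transactions included in a block
        self.bits &= ~block_bits

    def size(self):
        return bin(self.bits).count("1")    # population count

node_a, node_b = BitmapTxPool(), BitmapTxPool()
for tx in (0, 3, 5):
    node_a.add(tx)
node_b.add(3); node_b.add(7)
node_a.merge(node_b)                        # pool is now {0, 3, 5, 7}
node_a.remove_confirmed((1 << 3) | (1 << 7))
print(node_a.size())                        # 2
```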
[ { "version": "v1", "created": "Fri, 11 Mar 2022 08:03:19 GMT" }, { "version": "v2", "created": "Mon, 30 May 2022 14:24:46 GMT" } ]
2022-05-31T00:00:00
[ [ "Ma", "Xuyang", "" ], [ "Wu", "Han", "" ], [ "Xu", "Du", "" ], [ "Wolter", "Katinka", "" ] ]
new_dataset
0.998794
2203.13655
Ahmad Khajenezhad
Ahmad Khajenezhad and Seyed Ali Osia and Mahmood Karimian and Hamid Beigy
Gransformer: Transformer-based Graph Generation
null
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Transformers have become widely used in modern models for various tasks such as natural language processing and machine vision. This paper proposes Gransformer, an algorithm for generating graphs based on the Transformer. We extend a simple autoregressive Transformer encoder to exploit the structural information of the given graph through efficient modifications. The attention mechanism is modified to consider the presence or absence of edges between each pair of nodes. We also introduce a graph-based familiarity measure between node pairs that applies to both the attention and the positional encoding. This measure of familiarity is based on message passing algorithms and contains structural information about the graph. Furthermore, the proposed measure is autoregressive, which allows our model to acquire the necessary conditional probabilities in a single forward pass. In the output layer, we also use a masked autoencoder for density estimation to efficiently model the sequential generation of dependent edges. Moreover, since we use BFS node orderings, we propose a technique to prevent the model from generating isolated nodes without connection to preceding nodes. We evaluate this method on two real-world datasets and compare it with other state-of-the-art autoregressive graph generation methods. Experimental results have shown that the proposed method performs comparably to these methods, including recurrent models and graph convolutional networks.
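One plausible reading of "attention modified to consider the presence or absence of edges" is an additive, edge-dependent bias on the autoregressive (causally masked) attention scores. The NumPy sketch below illustrates only that generic idea; the bias values, masking convention, and single-head layout are my assumptions, not the authors' architecture.

```python
import numpy as np

def edge_aware_causal_attention(Q, K, V, adj, edge_bias=1.0, no_edge_bias=-1.0):
    """Scaled dot-product attention over node positions with (i) a causal mask,
    so node i only attends to nodes generated before it, and (ii) an additive
    bias that depends on whether an edge is present between the two nodes."""
    n, d = Q.shape
    scores = Q @ K.T / np.sqrt(d)
    scores += np.where(adj > 0, edge_bias, no_edge_bias)   # edge-dependent bias
    causal = np.triu(np.ones((n, n), dtype=bool), k=1)     # mask positions j > i
    scores[causal] = -np.inf
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ V

rng = np.random.default_rng(0)
n, d = 5, 8
adj = np.triu(rng.integers(0, 2, (n, n)), k=1)
adj = adj + adj.T                                          # symmetric toy adjacency
out = edge_aware_causal_attention(rng.normal(size=(n, d)),
                                  rng.normal(size=(n, d)),
                                  rng.normal(size=(n, d)), adj)
print(out.shape)  # (5, 8)
```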
[ { "version": "v1", "created": "Fri, 25 Mar 2022 14:05:12 GMT" }, { "version": "v2", "created": "Mon, 30 May 2022 04:29:58 GMT" } ]
2022-05-31T00:00:00
[ [ "Khajenezhad", "Ahmad", "" ], [ "Osia", "Seyed Ali", "" ], [ "Karimian", "Mahmood", "" ], [ "Beigy", "Hamid", "" ] ]
new_dataset
0.958502
2204.09903
Chunbo Lang
Chunbo Lang, Binfei Tu, Gong Cheng, Junwei Han
Beyond the Prototype: Divide-and-conquer Proxies for Few-shot Segmentation
accepted to IJCAI 2022 Long Oral
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Few-shot segmentation, which aims to segment unseen-class objects given only a handful of densely labeled samples, has received widespread attention from the community. Existing approaches typically follow the prototype learning paradigm to perform meta-inference, which fails to fully exploit the underlying information from support image-mask pairs, resulting in various segmentation failures, e.g., incomplete objects, ambiguous boundaries, and distractor activation. To this end, we propose a simple yet versatile framework in the spirit of divide-and-conquer. Specifically, a novel self-reasoning scheme is first implemented on the annotated support image, and then the coarse segmentation mask is divided into multiple regions with different properties. Leveraging effective masked average pooling operations, a series of support-induced proxies are thus derived, each playing a specific role in conquering the above challenges. Moreover, we devise a unique parallel decoder structure that integrates proxies with similar attributes to boost the discrimination power. Our proposed approach, named divide-and-conquer proxies (DCP), allows for the development of appropriate and reliable information as a guide at the "episode" level, not just about the object cues themselves. Extensive experiments on PASCAL-5i and COCO-20i demonstrate the superiority of DCP over conventional prototype-based approaches (up to 5~10% on average), which also establishes a new state-of-the-art. Code is available at github.com/chunbolang/DCP.
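Masked average pooling, which the abstract uses to derive support-induced proxies from the divided regions, is a small and standard operation. The NumPy sketch below shows it in isolation; the feature and mask shapes are illustrative, and the surrounding division and decoding pipeline is not reproduced.

```python
import numpy as np

def masked_average_pooling(features, mask, eps=1e-6):
    """Average C-dimensional features over the spatial positions selected by a
    binary mask, producing one prototype/proxy vector for that region."""
    # features: (C, H, W), mask: (H, W) with values in {0, 1}
    weighted = features * mask[None]                 # zero out background positions
    return weighted.sum(axis=(1, 2)) / (mask.sum() + eps)

rng = np.random.default_rng(0)
feat = rng.normal(size=(64, 32, 32))
mask = (rng.random((32, 32)) > 0.7).astype(feat.dtype)
proxy = masked_average_pooling(feat, mask)
print(proxy.shape)  # (64,)
```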
[ { "version": "v1", "created": "Thu, 21 Apr 2022 06:21:14 GMT" }, { "version": "v2", "created": "Mon, 30 May 2022 12:28:14 GMT" } ]
2022-05-31T00:00:00
[ [ "Lang", "Chunbo", "" ], [ "Tu", "Binfei", "" ], [ "Cheng", "Gong", "" ], [ "Han", "Junwei", "" ] ]
new_dataset
0.984872
2204.10050
Harish Tayyar Madabushi PhD
Harish Tayyar Madabushi, Edward Gow-Smith, Marcos Garcia, Carolina Scarton, Marco Idiart, Aline Villavicencio
SemEval-2022 Task 2: Multilingual Idiomaticity Detection and Sentence Embedding
Data available at https://github.com/H-TayyarMadabushi/SemEval_2022_Task2-idiomaticity and competition website at https://sites.google.com/view/semeval2022task2-idiomaticity
null
null
null
cs.CL
http://creativecommons.org/licenses/by-sa/4.0/
This paper presents the shared task on Multilingual Idiomaticity Detection and Sentence Embedding, which consists of two subtasks: (a) a binary classification task aimed at identifying whether a sentence contains an idiomatic expression, and (b) a task based on semantic text similarity which requires the model to adequately represent potentially idiomatic expressions in context. Each subtask includes different settings regarding the amount of training data. Besides the task description, this paper introduces the datasets in English, Portuguese, and Galician and their annotation procedure, the evaluation metrics, and a summary of the participant systems and their results. The task had close to 100 registered participants organised into twenty five teams making over 650 and 150 submissions in the practice and evaluation phases respectively.
[ { "version": "v1", "created": "Thu, 21 Apr 2022 12:20:52 GMT" }, { "version": "v2", "created": "Mon, 30 May 2022 14:35:24 GMT" } ]
2022-05-31T00:00:00
[ [ "Madabushi", "Harish Tayyar", "" ], [ "Gow-Smith", "Edward", "" ], [ "Garcia", "Marcos", "" ], [ "Scarton", "Carolina", "" ], [ "Idiart", "Marco", "" ], [ "Villavicencio", "Aline", "" ] ]
new_dataset
0.999762
2204.14095
Yuting Gao
Yuting Gao, Jinfeng Liu, Zihan Xu, Jun Zhang, Ke Li, Rongrong Ji, Chunhua Shen
PyramidCLIP: Hierarchical Feature Alignment for Vision-language Model Pretraining
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Large-scale vision-language pre-training has achieved promising results on downstream tasks. Existing methods highly rely on the assumption that the image-text pairs crawled from the Internet are in perfect one-to-one correspondence. However, in real scenarios, this assumption can be hard to satisfy: the text description, obtained by crawling the affiliated metadata of the image, often suffers from semantic mismatch and mutual compatibility issues. To address these issues, we introduce PyramidCLIP, which constructs an input pyramid with different semantic levels for each modality, and aligns visual elements and linguistic elements in the form of hierarchy via peer-level semantics alignment and cross-level relation alignment. Furthermore, we soften the loss of negative samples (unpaired samples) so as to weaken the strict constraint during the pre-training stage, thus mitigating the risk of forcing the model to distinguish compatible negative pairs. Experiments on five downstream tasks demonstrate the effectiveness of the proposed PyramidCLIP. In particular, with the same amount of 15 million pre-training image-text pairs, PyramidCLIP exceeds CLIP on ImageNet zero-shot classification top-1 accuracy by 10.6%/13.2%/10.0% with ResNet50/ViT-B32/ViT-B16 based image encoders respectively. When scaling to larger datasets, PyramidCLIP achieves state-of-the-art results on several downstream tasks. In particular, the results of PyramidCLIP-ResNet50 trained on 143M image-text pairs surpass that of CLIP using 400M data on the ImageNet zero-shot classification task, significantly improving the data efficiency of CLIP.
[ { "version": "v1", "created": "Fri, 29 Apr 2022 13:38:42 GMT" }, { "version": "v2", "created": "Sat, 28 May 2022 08:52:58 GMT" } ]
2022-05-31T00:00:00
[ [ "Gao", "Yuting", "" ], [ "Liu", "Jinfeng", "" ], [ "Xu", "Zihan", "" ], [ "Zhang", "Jun", "" ], [ "Li", "Ke", "" ], [ "Ji", "Rongrong", "" ], [ "Shen", "Chunhua", "" ] ]
new_dataset
0.995449
2205.06985
Jingya Zang
Jingya Zang, Cuiyun Gao, Yupan Chen, Ruifeng Xu, Lanjun Zhou, Xuan Wang
Generating Tips from Song Reviews: A New Dataset and Framework
null
null
null
null
cs.IR cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Reviews of songs play an important role in online music service platforms. Prior research shows that users can make quicker and more informed decisions when presented with meaningful song reviews. However, reviews of music songs are generally long in length and most of them are non-informative for users. It is difficult for users to efficiently grasp meaningful messages for making decisions. To solve this problem, one practical strategy is to provide tips, i.e., short, concise, empathetic, and self-contained descriptions about songs. Tips are produced from song reviews and should express non-trivial insights about the songs. To the best of our knowledge, no prior studies have explored the tip generation task in music domain. In this paper, we create a dataset named MTips for the task and propose a framework named GENTMS for automatically generating tips from song reviews. The dataset involves 8,003 Chinese tips/non-tips from 128 songs which are distributed in five different song genres. Experimental results show that GENTMS achieves top-10 precision at 85.56%, outperforming the baseline models by at least 3.34%. Besides, to simulate the practical usage of our proposed framework, we also experiment with previously-unseen songs, during which GENTMS also achieves the best performance with top-10 precision at 78.89% on average. The results demonstrate the effectiveness of the proposed framework in tip generation of the music domain.
[ { "version": "v1", "created": "Sat, 14 May 2022 06:40:49 GMT" }, { "version": "v2", "created": "Mon, 30 May 2022 07:13:52 GMT" } ]
2022-05-31T00:00:00
[ [ "Zang", "Jingya", "" ], [ "Gao", "Cuiyun", "" ], [ "Chen", "Yupan", "" ], [ "Xu", "Ruifeng", "" ], [ "Zhou", "Lanjun", "" ], [ "Wang", "Xuan", "" ] ]
new_dataset
0.998972
2205.10929
Alain Tchana
Alain Tchana, Raphael Colin, Adrien Le Berre, Vincent Berger, Benoit Combemale, Natacha Crooks, Ludovic Pailler
rgpdOS: GDPR Enforcement By The Operating System
null
null
null
null
cs.OS cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The General Data Protection Regulation (GDPR) forces IT companies to comply with a number of principles when dealing with European citizens' personal data. Non-compliant companies are exposed to penalties which may represent up to 4% of their turnover. Currently, it is very hard for companies driven by personal data to make their applications GDPR-compliant, especially if those applications were developed before the GDPR was established. We present rgpdOS, a GDPR-aware operating system that aims to bring GDPR-compliance to every application, while requiring minimal changes to application code.
[ { "version": "v1", "created": "Sun, 22 May 2022 20:50:20 GMT" }, { "version": "v2", "created": "Mon, 30 May 2022 11:36:09 GMT" } ]
2022-05-31T00:00:00
[ [ "Tchana", "Alain", "" ], [ "Colin", "Raphael", "" ], [ "Berre", "Adrien Le", "" ], [ "Berger", "Vincent", "" ], [ "Combemale", "Benoit", "" ], [ "Crooks", "Natacha", "" ], [ "Pailler", "Ludovic", "" ] ]
new_dataset
0.988467
2205.14156
Andrea Passarella
Marco Conti, Andrea Passarella, Sajal K. Das
The Internet of People (IoP): A New Wave in Pervasive Mobile Computing
arXiv admin note: text overlap with arXiv:2205.13970
Pervasive and Mobile Computing, Volume 41, 2017, Pages 1-27, ISSN 1574-1192
10.1016/j.pmcj.2017.07.009
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cyber-Physical convergence, the fast expansion of the Internet at its edge, and tighter interactions between human users and their personal mobile devices push towards an Internet where the human user becomes more central than ever, and where their personal devices become their proxies in the cyber world, in addition to acting as a fundamental tool to sense the physical world. The current Internet paradigm, which is infrastructure-centric, is not the right one to cope with such an emerging scenario with a wider range of applications. This calls for a radically new Internet paradigm, that we name the Internet of People (IoP), where the humans and their personal devices are not seen merely as end users of applications, but become active elements of the Internet. Note that IoP is not a replacement of the current Internet infrastructure, but it exploits legacy Internet services as (reliable) primitives to achieve end-to-end connectivity on a global scale. In this visionary paper, we first discuss the key features of the IoP paradigm along with the underlying research issues and challenges. Then we present emerging networking and computing paradigms that are anticipating IoP.
[ { "version": "v1", "created": "Fri, 27 May 2022 17:07:28 GMT" } ]
2022-05-31T00:00:00
[ [ "Conti", "Marco", "" ], [ "Passarella", "Andrea", "" ], [ "Das", "Sajal K.", "" ] ]
new_dataset
0.998729
2205.14212
Viresh Ranjan
Viresh Ranjan and Minh Hoai
Exemplar Free Class Agnostic Counting
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
We tackle the task of Class Agnostic Counting, which aims to count objects in a novel object category at test time without any access to labeled training data for that category. All previous class agnostic counting methods cannot work in a fully automated setting, and require computationally expensive test time adaptation. To address these challenges, we propose a visual counter which operates in a fully automated setting and does not require any test time adaptation. Our proposed approach first identifies exemplars from repeating objects in an image, and then counts the repeating objects. We propose a novel region proposal network for identifying the exemplars. After identifying the exemplars, we obtain the corresponding count by using a density estimation based Visual Counter. We evaluate our proposed approach on FSC-147 dataset, and show that it achieves superior performance compared to the existing approaches.
[ { "version": "v1", "created": "Fri, 27 May 2022 19:44:39 GMT" } ]
2022-05-31T00:00:00
[ [ "Ranjan", "Viresh", "" ], [ "Hoai", "Minh", "" ] ]
new_dataset
0.968045
2205.14269
Amir Pouya Aghasadeghi
Amir Pouya Aghasadeghi, Jan Van den Bussche, Julia Stoyanovich
Temporal graph patterns by timed automata
null
null
null
null
cs.DB
http://creativecommons.org/licenses/by/4.0/
Temporal graphs represent graph evolution over time, and have been receiving considerable research attention. Work on expressing temporal graph patterns or discovering temporal motifs typically assumes relatively simple temporal constraints, such as journeys or, more generally, existential constraints, possibly with finite delays. In this paper we propose to use timed automata to express temporal constraints, leading to a general and powerful notion of temporal basic graph pattern (BGP). The new difficulty is the evaluation of the temporal constraint on a large set of matchings. An important benefit of timed automata is that they support an iterative state assignment, which can be useful for early detection of matches and pruning of non-matches. We introduce algorithms to retrieve all instances of a temporal BGP match in a graph, and present results of an extensive experimental evaluation, demonstrating interesting performance trade-offs. We show that an on-demand algorithm that processes total matchings incrementally over time is preferable when dealing with cyclic patterns on sparse graphs. On acyclic patterns or dense graphs, and when connectivity of partial matchings can be guaranteed, the best performance is achieved by maintaining partial matchings over time and allowing automaton evaluation to be fully incremental.
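To make the "iterative state assignment" point concrete, here is a toy timed automaton, written as plain Python, that consumes a time-ordered stream of labelled edges and decides the pattern "edge a, then edge b within 10 time units". The pattern, states, and guard are invented for illustration; the paper's automata and matching algorithms are far more general.

```python
# States: 0 (start) -> 1 (saw a, clock reset) -> accept. Each transition carries a
# guard on the clock; feeding one timestamped edge at a time means a match (or a
# definite non-match of the current attempt) is reported as soon as it is decided.

def run(events):
    state, clock_start = 0, None
    for label, t in events:               # events are (edge label, timestamp), time-ordered
        if state == 0 and label == "a":
            state, clock_start = 1, t     # reset the clock on entering state 1
        elif state == 1 and label == "b":
            if t - clock_start <= 10:     # guard: b must follow a within 10 time units
                return True               # early detection: accept immediately
            state, clock_start = 0, None  # guard violated: this attempt is dead
    return False

print(run([("a", 3), ("c", 5), ("b", 9)]))   # True  (b at 9 is within 10 of a at 3)
print(run([("a", 3), ("b", 20)]))            # False (guard 20 - 3 <= 10 fails)
```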
[ { "version": "v1", "created": "Fri, 27 May 2022 23:09:09 GMT" } ]
2022-05-31T00:00:00
[ [ "Aghasadeghi", "Amir Pouya", "" ], [ "Bussche", "Jan Van den", "" ], [ "Stoyanovich", "Julia", "" ] ]
new_dataset
0.990286
2205.14290
Joshua Tan
Joshua Z. Tan and Luke V. Miller
Building net-native agreement systems
null
null
null
null
cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Agreements and contracts are everywhere, but they are built on layers and layers of legal and social institutions. Software is slowly entering into this stack. In this article, we introduce agreement paths, a general model for understanding and decomposing digital agreement systems, and Agreement Engine, an open-source software service for building net-native agreement systems. We demonstrate Agreement Engine by building two example agreement systems: Scarce Knowledge, an app for crowdfunding essays, and Twitter Social Capital, a bot that allows users to form and enforce Twitter agreements.
[ { "version": "v1", "created": "Sat, 28 May 2022 01:12:05 GMT" } ]
2022-05-31T00:00:00
[ [ "Tan", "Joshua Z.", "" ], [ "Miller", "Luke V.", "" ] ]
new_dataset
0.998283
2205.14319
Jinli Liao
Jinli Liao, Yikang Ding, Yoli Shavit, Dihe Huang, Shihao Ren, Jia Guo, Wensen Feng, Kai Zhang
WT-MVSNet: Window-based Transformers for Multi-view Stereo
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, Transformers were shown to enhance the performance of multi-view stereo by enabling long-range feature interaction. In this work, we propose Window-based Transformers (WT) for local feature matching and global feature aggregation in multi-view stereo. We introduce a Window-based Epipolar Transformer (WET) which reduces matching redundancy by using epipolar constraints. Since point-to-line matching is sensitive to erroneous camera pose and calibration, we match windows near the epipolar lines. A second Shifted WT is employed for aggregating global information within the cost volume. We present a novel Cost Transformer (CT) to replace 3D convolutions for cost volume regularization. To better constrain the estimated depth maps from multiple views, we further design a novel geometric consistency loss (Geo Loss) which penalizes unreliable areas where multi-view consistency is not satisfied. Our WT multi-view stereo method (WT-MVSNet) achieves state-of-the-art performance across multiple datasets and ranks $1^{st}$ on the Tanks and Temples benchmark.
[ { "version": "v1", "created": "Sat, 28 May 2022 03:32:09 GMT" } ]
2022-05-31T00:00:00
[ [ "Liao", "Jinli", "" ], [ "Ding", "Yikang", "" ], [ "Shavit", "Yoli", "" ], [ "Huang", "Dihe", "" ], [ "Ren", "Shihao", "" ], [ "Guo", "Jia", "" ], [ "Feng", "Wensen", "" ], [ "Zhang", "Kai", "" ] ]
new_dataset
0.993372
2205.14376
Ashkan Nikseresht
Marziyeh Beygi Khormaei, Ashkan Nikseresht and Shohreh Namazi
One-Sided Repeated-Root Two-Dimensional Cyclic and Constacyclic Codes
null
null
null
null
cs.IT math.AC math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we study some repeated-root two-dimensional cyclic and constacyclic codes over a finite field $F=\mathbb{F}_q$. We obtain the generator matrices and generator polynomials of these codes and their duals, and we investigate when such codes are self-dual. Moreover, we prove that if there exists an asymptotically good family of one-sided repeated-root two-dimensional cyclic or constacyclic codes, then there exists an asymptotically good family of simple-root two-dimensional cyclic or constacyclic codes with parameters at least as good as those of the first family. Furthermore, we show that several of the main results of the papers by Rajabi and Khashyarmanesh (2018) and Sepasdar and Khashyarmanesh (2016) are not accurate, and we find additional conditions needed for them to hold.
[ { "version": "v1", "created": "Sat, 28 May 2022 09:21:01 GMT" } ]
2022-05-31T00:00:00
[ [ "Khormaei", "Marziyeh Beygi", "" ], [ "Nikseresht", "Ashkan", "" ], [ "Namazi", "Shohreh", "" ] ]
new_dataset
0.955204
2205.14409
Haoran Xie
Qi Zhou, Jiahao Weng, Haoran Xie
Find Your ASMR: A Perceptual Retrieval Interface for Autonomous Sensory Meridian Response Videos
12 pages, 8 figures, in proceedings of HCII2022
null
null
null
cs.HC
http://creativecommons.org/licenses/by/4.0/
Autonomous sensory meridian response (ASMR) is a type of video content designed to help people relax and feel comfortable. Users usually retrieve ASMR content from various video websites using only keywords. However, it is challenging to find content that satisfies users' needs for ASMR videos through keyword- or content-based retrieval alone. To solve this issue, we propose a perceptual video retrieval system for ASMR videos and provide a novel retrieval user interface that allows users to retrieve content according to their watching purpose and anticipated expectations, such as excitement, calmness, stress and sadness. An ASMR video perception dataset is constructed with annotations on affective responses after watching the videos. To verify the proposed video retrieval system, a user study is conducted, showing that users can retrieve satisfactory ASMR content more easily and efficiently than with conventional keyword-based retrieval systems.
[ { "version": "v1", "created": "Sat, 28 May 2022 12:03:21 GMT" } ]
2022-05-31T00:00:00
[ [ "Zhou", "Qi", "" ], [ "Weng", "Jiahao", "" ], [ "Xie", "Haoran", "" ] ]
new_dataset
0.99927
2205.14434
Raveena Raveena
Raveena and Krishnendra Shekhawat
A Theory of L-shaped Floor-plans
35 pages, 61 figures
null
null
null
cs.DM cs.CG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing graph-theoretic approaches are mainly restricted to floor-plans with a rectangular boundary. In this paper, we introduce floor-plans with an $L$-shaped boundary (a boundary with only one concave corner). To ensure the $L$-shaped boundary, we introduce the concept of non-triviality of a floor-plan: a floor-plan with a rectilinear boundary having at least one concave corner is non-trivial if the number of concave corners cannot be reduced without affecting the module adjacencies within it. Further, we present necessary and sufficient conditions for the existence of a non-trivial $L$-shaped floor-plan corresponding to a properly triangulated planar graph (PTPG) $G$, and develop an $O(n^2)$ algorithm for its construction whenever it exists.
[ { "version": "v1", "created": "Sat, 28 May 2022 13:46:48 GMT" } ]
2022-05-31T00:00:00
[ [ "Raveena", "", "" ], [ "Shekhawat", "Krishnendra", "" ] ]
new_dataset
0.996598