Dataset schema (field: type, observed value lengths or classes):

  id              string (length 9-10)
  submitter       string (length 2-52)
  authors         string (length 4-6.51k)
  title           string (length 4-246)
  comments        string (length 1-523)
  journal-ref     string (length 4-345)
  doi             string (length 11-120)
  report-no       string (length 2-243)
  categories      string (length 5-98)
  license         string (9 distinct values)
  abstract        string (length 33-3.33k)
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      string (1 distinct value)
  probability     float64 (range 0.95-1)
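The records below follow this schema. As a minimal illustration of consuming them, here is a Python sketch that filters records by the classifier fields. It assumes the records are stored as one JSON object per line; the file name is a placeholder, not part of any release:

```python
import json

def load_records(path):
    # Yield one metadata record (a dict with the fields listed above) per line.
    with open(path, encoding="utf-8") as f:
        for line in f:
            yield json.loads(line)

# Keep only records confidently classified as introducing a new dataset.
high_confidence = [
    r for r in load_records("arxiv_records.jsonl")   # placeholder file name
    if r["prediction"] == "new_dataset" and r["probability"] >= 0.99
]
for r in high_confidence:
    print(r["id"], r["title"])
```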
2211.15521
Grace Luo
Grace Luo, Giscard Biamby, Trevor Darrell, Daniel Fried, Anna Rohrbach
G^3: Geolocation via Guidebook Grounding
Findings of EMNLP 2022
null
null
null
cs.CV cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We demonstrate how language can improve geolocation: the task of predicting the location where an image was taken. Here we study explicit knowledge from human-written guidebooks that describe the salient and class-discriminative visual features humans use for geolocation. We propose the task of Geolocation via Guidebook Grounding that uses a dataset of StreetView images from a diverse set of locations and an associated textual guidebook for GeoGuessr, a popular interactive geolocation game. Our approach predicts a country for each image by attending over the clues automatically extracted from the guidebook. Supervising attention with country-level pseudo labels achieves the best performance. Our approach substantially outperforms a state-of-the-art image-only geolocation method, with an improvement of over 5% in Top-1 accuracy. Our dataset and code can be found at https://github.com/g-luo/geolocation_via_guidebook_grounding.
[ { "version": "v1", "created": "Mon, 28 Nov 2022 16:34:40 GMT" } ]
2022-11-29T00:00:00
[ [ "Luo", "Grace", "" ], [ "Biamby", "Giscard", "" ], [ "Darrell", "Trevor", "" ], [ "Fried", "Daniel", "" ], [ "Rohrbach", "Anna", "" ] ]
new_dataset
0.99945
2211.15533
Harm de Vries
Denis Kocetkov, Raymond Li, Loubna Ben Allal, Jia Li, Chenghao Mou, Carlos Muñoz Ferrandis, Yacine Jernite, Margaret Mitchell, Sean Hughes, Thomas Wolf, Dzmitry Bahdanau, Leandro von Werra, Harm de Vries
The Stack: 3 TB of permissively licensed source code
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-sa/4.0/
Large Language Models (LLMs) play an ever-increasing role in the field of Artificial Intelligence (AI)--not only for natural language processing but also for code understanding and generation. To stimulate open and responsible research on LLMs for code, we introduce The Stack, a 3.1 TB dataset consisting of permissively licensed source code in 30 programming languages. We describe how we collect the full dataset, construct a permissively licensed subset, present a data governance plan, discuss limitations, and show promising results on text2code benchmarks by training 350M-parameter decoders on different Python subsets. We find that (1) near-deduplicating the data significantly boosts performance across all experiments, and (2) it is possible to match previously reported HumanEval and MBPP performance using only permissively licensed data. We make the dataset available at https://hf.co/BigCode, provide a tool called "Am I in The Stack" (https://hf.co/spaces/bigcode/in-the-stack) for developers to search The Stack for copies of their code, and provide a process for code to be removed from the dataset by following the instructions at https://www.bigcode-project.org/docs/about/the-stack/.
[ { "version": "v1", "created": "Sun, 20 Nov 2022 18:15:30 GMT" } ]
2022-11-29T00:00:00
[ [ "Kocetkov", "Denis", "" ], [ "Li", "Raymond", "" ], [ "Allal", "Loubna Ben", "" ], [ "Li", "Jia", "" ], [ "Mou", "Chenghao", "" ], [ "Ferrandis", "Carlos Muñoz", "" ], [ "Jernite", "Yacine", "" ], [ "Mitchell", "Margaret", "" ], [ "Hughes", "Sean", "" ], [ "Wolf", "Thomas", "" ], [ "Bahdanau", "Dzmitry", "" ], [ "von Werra", "Leandro", "" ], [ "de Vries", "Harm", "" ] ]
new_dataset
0.995679
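The Stack is distributed through the Hugging Face Hub, so a subset can be streamed without downloading all 3 TB. A hedged sketch using the `datasets` library; the `data_dir` layout is an assumption based on the dataset card, and accessing the dataset may require accepting its terms on the Hub:

```python
from datasets import load_dataset

# Stream the Python subset of The Stack rather than materializing it locally.
# The "data/python" directory layout is an assumption; adjust to the actual
# repository structure if it differs.
ds = load_dataset("bigcode/the-stack", data_dir="data/python",
                  split="train", streaming=True)

for example in ds.take(3):
    print(example["content"][:200])   # first 200 characters of each file
```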
1709.09480
Daniel Hein
Daniel Hein, Stefan Depeweg, Michel Tokic, Steffen Udluft, Alexander Hentschel, Thomas A. Runkler, Volkmar Sterzing
A Benchmark Environment Motivated by Industrial Control Problems
null
2017 IEEE Symposium Series on Computational Intelligence (SSCI)
10.1109/SSCI.2017.8280935
null
cs.AI cs.LG cs.SY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the research area of reinforcement learning (RL), novel and promising methods are frequently developed and introduced to the RL community. However, although many researchers are keen to apply their methods to real-world problems, implementing such methods in real industry environments is often a frustrating and tedious process. Generally, academic research groups have only limited access to real industrial data and applications. For this reason, new methods are usually developed, evaluated, and compared using artificial software benchmarks. On the one hand, these benchmarks are designed to provide interpretable RL training scenarios and detailed insight into the learning process of the method at hand. On the other hand, they usually do not share much similarity with industrial real-world applications. For this reason, we used our industry experience to design a benchmark which bridges the gap between freely available, documented, and motivated artificial benchmarks and the properties of real industrial problems. The resulting industrial benchmark (IB) has been made publicly available to the RL community by publishing its Java and Python code, including an OpenAI Gym wrapper, on GitHub. In this paper we motivate and describe in detail the IB's dynamics and identify prototypic experimental settings that capture common situations in real-world industry control problems.
[ { "version": "v1", "created": "Wed, 27 Sep 2017 13:03:52 GMT" }, { "version": "v2", "created": "Tue, 6 Feb 2018 10:59:19 GMT" }, { "version": "v3", "created": "Thu, 24 Nov 2022 13:27:53 GMT" } ]
2022-11-28T00:00:00
[ [ "Hein", "Daniel", "" ], [ "Depeweg", "Stefan", "" ], [ "Tokic", "Michel", "" ], [ "Udluft", "Steffen", "" ], [ "Hentschel", "Alexander", "" ], [ "Runkler", "Thomas A.", "" ], [ "Sterzing", "Volkmar", "" ] ]
new_dataset
0.99898
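The abstract mentions an OpenAI Gym wrapper. A generic interaction loop under the classic Gym API is sketched below; the module name and environment id are assumptions and should be checked against the GitHub repository:

```python
import gym
import industrial_benchmark_python  # registers the env with Gym (name is an assumption)

# Classic (pre-0.26) Gym API, matching the era of the published wrapper.
# "IndustrialBenchmark-v0" is an illustrative id; see the repo for the real one.
env = gym.make("IndustrialBenchmark-v0")
obs = env.reset()
total_reward = 0.0
for _ in range(100):
    action = env.action_space.sample()        # random policy as a placeholder
    obs, reward, done, info = env.step(action)
    total_reward += reward
    if done:
        obs = env.reset()
print("return over 100 steps:", total_reward)
```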
1901.04056
Anjany Kumar Sekuboyina
Patrick Bilic, Patrick Christ, Hongwei Bran Li, Eugene Vorontsov, Avi Ben-Cohen, Georgios Kaissis, Adi Szeskin, Colin Jacobs, Gabriel Efrain Humpire Mamani, Gabriel Chartrand, Fabian Lohöfer, Julian Walter Holch, Wieland Sommer, Felix Hofmann, Alexandre Hostettler, Naama Lev-Cohain, Michal Drozdzal, Michal Marianne Amitai, Refael Vivantik, Jacob Sosna, Ivan Ezhov, Anjany Sekuboyina, Fernando Navarro, Florian Kofler, Johannes C. Paetzold, Suprosanna Shit, Xiaobin Hu, Jana Lipková, Markus Rempfler, Marie Piraud, Jan Kirschke, Benedikt Wiestler, Zhiheng Zhang, Christian Hülsemeyer, Marcel Beetz, Florian Ettlinger, Michela Antonelli, Woong Bae, Míriam Bellver, Lei Bi, Hao Chen, Grzegorz Chlebus, Erik B. Dam, Qi Dou, Chi-Wing Fu, Bogdan Georgescu, Xavier Giró-i-Nieto, Felix Gruen, Xu Han, Pheng-Ann Heng, Jürgen Hesser, Jan Hendrik Moltz, Christian Igel, Fabian Isensee, Paul Jäger, Fucang Jia, Krishna Chaitanya Kaluva, Mahendra Khened, Ildoo Kim, Jae-Hun Kim, Sungwoong Kim, Simon Kohl, Tomasz Konopczynski, Avinash Kori, Ganapathy Krishnamurthi, Fan Li, Hongchao Li, Junbo Li, Xiaomeng Li, John Lowengrub, Jun Ma, Klaus Maier-Hein, Kevis-Kokitsi Maninis, Hans Meine, Dorit Merhof, Akshay Pai, Mathias Perslev, Jens Petersen, Jordi Pont-Tuset, Jin Qi, Xiaojuan Qi, Oliver Rippel, Karsten Roth, Ignacio Sarasua, Andrea Schenk, Zengming Shen, Jordi Torres, Christian Wachinger, Chunliang Wang, Leon Weninger, Jianrong Wu, Daguang Xu, Xiaoping Yang, Simon Chun-Ho Yu, Yading Yuan, Miao Yu, Liping Zhang, Jorge Cardoso, Spyridon Bakas, Rickmer Braren, Volker Heinemann, Christopher Pal, An Tang, Samuel Kadoury, Luc Soler, Bram van Ginneken, Hayit Greenspan, Leo Joskowicz, Bjoern Menze
The Liver Tumor Segmentation Benchmark (LiTS)
Patrick Bilic, Patrick Christ, Hongwei Bran Li, and Eugene Vorontsov made equal contributions to this work. Published in Medical Image Analysis
Medical Image Analysis (2022) Pg. 102680
10.1016/j.media.2022.102680
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this work, we report the set-up and results of the Liver Tumor Segmentation Benchmark (LiTS), which was organized in conjunction with the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 and the International Conferences on Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2017 and 2018. The image dataset is diverse and contains primary and secondary tumors with varied sizes and appearances and various lesion-to-background levels (hyper-/hypo-dense), created in collaboration with seven hospitals and research institutions. Seventy-five submitted liver and liver tumor segmentation algorithms were trained on a set of 131 computed tomography (CT) volumes and were tested on 70 unseen test images acquired from different patients. We found that not a single algorithm performed best for both liver and liver tumors in the three events. The best liver segmentation algorithm achieved a Dice score of 0.963, whereas, for tumor segmentation, the best algorithms achieved Dice scores of 0.674 (ISBI 2017), 0.702 (MICCAI 2017), and 0.739 (MICCAI 2018). Retrospectively, we performed additional analysis on liver tumor detection and revealed that not all top-performing segmentation algorithms worked well for tumor detection. The best liver tumor detection method achieved a lesion-wise recall of 0.458 (ISBI 2017), 0.515 (MICCAI 2017), and 0.554 (MICCAI 2018), indicating the need for further research. LiTS remains an active benchmark and resource for research, e.g., contributing the liver-related segmentation tasks to \url{http://medicaldecathlon.com/}. In addition, both data and online evaluation are accessible via \url{www.lits-challenge.com}.
[ { "version": "v1", "created": "Sun, 13 Jan 2019 20:38:16 GMT" }, { "version": "v2", "created": "Fri, 25 Nov 2022 09:24:35 GMT" } ]
2022-11-28T00:00:00
[ [ "Bilic", "Patrick", "" ], [ "Christ", "Patrick", "" ], [ "Li", "Hongwei Bran", "" ], [ "Vorontsov", "Eugene", "" ], [ "Ben-Cohen", "Avi", "" ], [ "Kaissis", "Georgios", "" ], [ "Szeskin", "Adi", "" ], [ "Jacobs", "Colin", "" ], [ "Mamani", "Gabriel Efrain Humpire", "" ], [ "Chartrand", "Gabriel", "" ], [ "Lohöfer", "Fabian", "" ], [ "Holch", "Julian Walter", "" ], [ "Sommer", "Wieland", "" ], [ "Hofmann", "Felix", "" ], [ "Hostettler", "Alexandre", "" ], [ "Lev-Cohain", "Naama", "" ], [ "Drozdzal", "Michal", "" ], [ "Amitai", "Michal Marianne", "" ], [ "Vivantik", "Refael", "" ], [ "Sosna", "Jacob", "" ], [ "Ezhov", "Ivan", "" ], [ "Sekuboyina", "Anjany", "" ], [ "Navarro", "Fernando", "" ], [ "Kofler", "Florian", "" ], [ "Paetzold", "Johannes C.", "" ], [ "Shit", "Suprosanna", "" ], [ "Hu", "Xiaobin", "" ], [ "Lipková", "Jana", "" ], [ "Rempfler", "Markus", "" ], [ "Piraud", "Marie", "" ], [ "Kirschke", "Jan", "" ], [ "Wiestler", "Benedikt", "" ], [ "Zhang", "Zhiheng", "" ], [ "Hülsemeyer", "Christian", "" ], [ "Beetz", "Marcel", "" ], [ "Ettlinger", "Florian", "" ], [ "Antonelli", "Michela", "" ], [ "Bae", "Woong", "" ], [ "Bellver", "Míriam", "" ], [ "Bi", "Lei", "" ], [ "Chen", "Hao", "" ], [ "Chlebus", "Grzegorz", "" ], [ "Dam", "Erik B.", "" ], [ "Dou", "Qi", "" ], [ "Fu", "Chi-Wing", "" ], [ "Georgescu", "Bogdan", "" ], [ "Giró-i-Nieto", "Xavier", "" ], [ "Gruen", "Felix", "" ], [ "Han", "Xu", "" ], [ "Heng", "Pheng-Ann", "" ], [ "Hesser", "Jürgen", "" ], [ "Moltz", "Jan Hendrik", "" ], [ "Igel", "Christian", "" ], [ "Isensee", "Fabian", "" ], [ "Jäger", "Paul", "" ], [ "Jia", "Fucang", "" ], [ "Kaluva", "Krishna Chaitanya", "" ], [ "Khened", "Mahendra", "" ], [ "Kim", "Ildoo", "" ], [ "Kim", "Jae-Hun", "" ], [ "Kim", "Sungwoong", "" ], [ "Kohl", "Simon", "" ], [ "Konopczynski", "Tomasz", "" ], [ "Kori", "Avinash", "" ], [ "Krishnamurthi", "Ganapathy", "" ], [ "Li", "Fan", "" ], [ "Li", "Hongchao", "" ], [ "Li", "Junbo", "" ], [ "Li", "Xiaomeng", "" ], [ "Lowengrub", "John", "" ], [ "Ma", "Jun", "" ], [ "Maier-Hein", "Klaus", "" ], [ "Maninis", "Kevis-Kokitsi", "" ], [ "Meine", "Hans", "" ], [ "Merhof", "Dorit", "" ], [ "Pai", "Akshay", "" ], [ "Perslev", "Mathias", "" ], [ "Petersen", "Jens", "" ], [ "Pont-Tuset", "Jordi", "" ], [ "Qi", "Jin", "" ], [ "Qi", "Xiaojuan", "" ], [ "Rippel", "Oliver", "" ], [ "Roth", "Karsten", "" ], [ "Sarasua", "Ignacio", "" ], [ "Schenk", "Andrea", "" ], [ "Shen", "Zengming", "" ], [ "Torres", "Jordi", "" ], [ "Wachinger", "Christian", "" ], [ "Wang", "Chunliang", "" ], [ "Weninger", "Leon", "" ], [ "Wu", "Jianrong", "" ], [ "Xu", "Daguang", "" ], [ "Yang", "Xiaoping", "" ], [ "Yu", "Simon Chun-Ho", "" ], [ "Yuan", "Yading", "" ], [ "Yu", "Miao", "" ], [ "Zhang", "Liping", "" ], [ "Cardoso", "Jorge", "" ], [ "Bakas", "Spyridon", "" ], [ "Braren", "Rickmer", "" ], [ "Heinemann", "Volker", "" ], [ "Pal", "Christopher", "" ], [ "Tang", "An", "" ], [ "Kadoury", "Samuel", "" ], [ "Soler", "Luc", "" ], [ "van Ginneken", "Bram", "" ], [ "Greenspan", "Hayit", "" ], [ "Joskowicz", "Leo", "" ], [ "Menze", "Bjoern", "" ] ]
new_dataset
0.999865
1902.04419
Krishna Gopal Benerjee
Krishna Gopal Benerjee and Sourav Deb and Manish K Gupta
On Conflict Free DNA Codes
12 pages, Draft (Table VI and Table VII are updated)
null
10.1007/s12095-020-00459-7
null
cs.IT cs.ET math.CO math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
DNA storage has emerged as an important area of research. The reliability of a DNA storage system depends on designing DNA strings (called DNA codes) that are sufficiently dissimilar. In this work, we introduce DNA codes that satisfy a special constraint. Each codeword of the DNA code has the property that no two consecutive sub-strings of the codeword are the same (a generalization of the homo-polymers constraint). This is in addition to the usual constraints such as Hamming, reverse, reverse-complement and $GC$-content. We believe that the new constraint will further help reduce errors while reading and writing data into synthetic DNA strings. We also present a construction (based on a variant of a stochastic local search algorithm) to calculate the size of DNA codes with all the above constraints, which improves the lower bounds from the existing literature for some specific cases. Moreover, a recursive isometric map between binary vectors and DNA strings is proposed. Using this map and well-known binary codes, we obtain a few classes of DNA codes satisfying all the constraints, including the property that the constructed DNA codewords are free from hairpin-like secondary structures.
[ { "version": "v1", "created": "Tue, 12 Feb 2019 14:49:49 GMT" }, { "version": "v2", "created": "Fri, 1 Mar 2019 12:42:39 GMT" }, { "version": "v3", "created": "Tue, 9 Jul 2019 03:31:03 GMT" } ]
2022-11-28T00:00:00
[ [ "Benerjee", "Krishna Gopal", "" ], [ "Deb", "Sourav", "" ], [ "Gupta", "Manish K", "" ] ]
new_dataset
0.999643
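For illustration, the usual constraints named in this abstract (GC-content, reverse-complement, Hamming distance) and a naive check of the new "no two equal consecutive sub-strings" property can be sketched in a few lines of Python. This is an illustrative check, not the paper's construction:

```python
COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def gc_content(word):
    # Fraction of G/C bases in the codeword.
    return sum(c in "GC" for c in word) / len(word)

def reverse_complement(word):
    return "".join(COMPLEMENT[c] for c in reversed(word))

def hamming(u, v):
    return sum(a != b for a, b in zip(u, v))

def has_equal_adjacent_substrings(word):
    # True if some two consecutive sub-strings of equal length are identical,
    # i.e. the word violates the paper's "conflict free" constraint.
    n = len(word)
    for length in range(1, n // 2 + 1):
        for i in range(n - 2 * length + 1):
            if word[i:i + length] == word[i + length:i + 2 * length]:
                return True
    return False

w = "ACGTACGG"
print(gc_content(w), reverse_complement(w),
      hamming(w, "ACGTTCGG"), has_equal_adjacent_substrings("AATG"))
```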
2105.05796
Tomasz Stanisławek
Tomasz Stanisławek and Filip Graliński and Anna Wróblewska and Dawid Lipiński and Agnieszka Kaliska and Paulina Rosalska and Bartosz Topolski and Przemysław Biecek
Kleister: Key Information Extraction Datasets Involving Long Documents with Complex Layouts
accepted to ICDAR 2021
International Conference on Document Analysis and Recognition ICDAR 2021
10.1007/978-3-030-86549-8_36
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Key Information Extraction (KIE) task is of growing importance in natural language processing. However, there are still only a few well-defined problems that serve as benchmarks for solutions in this area. To bridge this gap, we introduce two new datasets (Kleister NDA and Kleister Charity). They involve a mix of scanned and born-digital long formal English-language documents. In these datasets, an NLP system is expected to find or infer various types of entities by employing both textual and structural layout features. The Kleister Charity dataset consists of 2,788 annual financial reports of charity organizations, with 61,643 unique pages and 21,612 entities to extract. The Kleister NDA dataset has 540 Non-disclosure Agreements, with 3,229 unique pages and 2,160 entities to extract. We provide several state-of-the-art baseline systems from the KIE domain (Flair, BERT, RoBERTa, LayoutLM, LAMBERT), which show that our datasets pose a strong challenge to existing models. The best model achieved F1-scores of 81.77% and 83.57% on the Kleister NDA and Kleister Charity datasets, respectively. We share the datasets to encourage progress on more in-depth and complex information extraction tasks.
[ { "version": "v1", "created": "Wed, 12 May 2021 17:08:01 GMT" } ]
2022-11-28T00:00:00
[ [ "Stanisławek", "Tomasz", "" ], [ "Graliński", "Filip", "" ], [ "Wróblewska", "Anna", "" ], [ "Lipiński", "Dawid", "" ], [ "Kaliska", "Agnieszka", "" ], [ "Rosalska", "Paulina", "" ], [ "Topolski", "Bartosz", "" ], [ "Biecek", "Przemysław", "" ] ]
new_dataset
0.999554
2108.10290
Martin Knoche
Martin Knoche, Stefan Hörmann, Gerhard Rigoll
Cross-Quality LFW: A Database for Analyzing Cross-Resolution Image Face Recognition in Unconstrained Environments
9 pages, 4 figures, 2 tables
null
10.1109/FG52635.2021.9666960
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Real-world face recognition applications often deal with suboptimal image quality or resolution due to different capturing conditions such as various subject-to-camera distances, poor camera settings, or motion blur. This characteristic has a non-negligible effect on performance. Recent cross-resolution face recognition approaches used simple, arbitrary, and unrealistic down- and up-scaling techniques to measure robustness against real-world edge-cases in image quality. Thus, we propose a new standardized benchmark dataset and evaluation protocol derived from the famous Labeled Faces in the Wild (LFW). In contrast to previous derivatives, which focus on pose, age, similarity, and adversarial attacks, our Cross-Quality Labeled Faces in the Wild (XQLFW) maximizes the quality difference. It contains synthetically degraded images only when necessary, and these degradations are more realistic. Our proposed dataset is then used to further investigate the influence of image quality on several state-of-the-art approaches. With XQLFW, we show that these models perform differently in cross-quality cases, and hence, the generalizing capability is not accurately predicted by their performance on LFW. Additionally, we report baseline accuracy with recent deep learning models explicitly trained for cross-resolution applications and evaluate the susceptibility to image quality. To encourage further research in cross-resolution face recognition and incite the assessment of image quality robustness, we publish the database and code for evaluation.
[ { "version": "v1", "created": "Mon, 23 Aug 2021 17:04:32 GMT" }, { "version": "v2", "created": "Thu, 26 Aug 2021 08:05:36 GMT" }, { "version": "v3", "created": "Fri, 25 Nov 2022 11:44:07 GMT" } ]
2022-11-28T00:00:00
[ [ "Knoche", "Martin", "" ], [ "Hörmann", "Stefan", "" ], [ "Rigoll", "Gerhard", "" ] ]
new_dataset
0.952872
2109.03569
Florent Bartoccioni
Florent Bartoccioni, Éloi Zablocki, Patrick Pérez, Matthieu Cord, Karteek Alahari
LiDARTouch: Monocular metric depth estimation with a few-beam LiDAR
null
null
null
null
cs.CV cs.AI cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-based depth estimation is a key feature in autonomous systems, which often relies on a single camera or several independent ones. In such a monocular setup, dense depth is obtained with either additional input from one or several expensive LiDARs, e.g., with 64 beams, or camera-only methods, which suffer from scale-ambiguity and infinite-depth problems. In this paper, we propose a new alternative of densely estimating metric depth by combining a monocular camera with a lightweight LiDAR, e.g., with 4 beams, typical of today's automotive-grade mass-produced laser scanners. Inspired by recent self-supervised methods, we introduce a novel framework, called LiDARTouch, to estimate dense depth maps from monocular images with the help of ``touches'' of LiDAR, i.e., without the need for dense ground-truth depth. In our setup, the minimal LiDAR input contributes on three different levels: as an additional input to the model, in a self-supervised LiDAR reconstruction objective function, and to estimate changes of pose (a key component of self-supervised depth estimation architectures). Our LiDARTouch framework achieves new state of the art in self-supervised depth estimation on the KITTI dataset, thus supporting our choices of integrating the very sparse LiDAR signal with other visual features. Moreover, we show that the use of a few-beam LiDAR alleviates scale ambiguity and infinite-depth issues that camera-only methods suffer from. We also demonstrate that methods from the fully-supervised depth-completion literature can be adapted to a self-supervised regime with a minimal LiDAR signal.
[ { "version": "v1", "created": "Wed, 8 Sep 2021 12:06:31 GMT" }, { "version": "v2", "created": "Fri, 25 Nov 2022 13:12:08 GMT" } ]
2022-11-28T00:00:00
[ [ "Bartoccioni", "Florent", "" ], [ "Zablocki", "Éloi", "" ], [ "Pérez", "Patrick", "" ], [ "Cord", "Matthieu", "" ], [ "Alahari", "Karteek", "" ] ]
new_dataset
0.99277
2112.08544
Revanth Reddy
Revanth Gangi Reddy, Sai Chetan, Zhenhailong Wang, Yi R. Fung, Kathryn Conger, Ahmed Elsayed, Martha Palmer, Preslav Nakov, Eduard Hovy, Kevin Small, Heng Ji
NewsClaims: A New Benchmark for Claim Detection from News with Attribute Knowledge
Accepted at EMNLP 2022
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Claim detection and verification are crucial for news understanding and have emerged as promising technologies for mitigating misinformation and disinformation in the news. However, most existing work has focused on claim sentence analysis while overlooking additional crucial attributes (e.g., the claimer and the main object associated with the claim). In this work, we present NewsClaims, a new benchmark for attribute-aware claim detection in the news domain. We extend the claim detection problem to include extraction of additional attributes related to each claim and release 889 claims annotated over 143 news articles. NewsClaims aims to benchmark claim detection systems in emerging scenarios, comprising unseen topics with little or no training data. To this end, we see that zero-shot and prompt-based baselines show promising performance on this benchmark, while still lagging considerably behind human performance.
[ { "version": "v1", "created": "Thu, 16 Dec 2021 00:50:24 GMT" }, { "version": "v2", "created": "Mon, 14 Mar 2022 23:19:29 GMT" }, { "version": "v3", "created": "Tue, 24 May 2022 13:19:20 GMT" }, { "version": "v4", "created": "Wed, 23 Nov 2022 20:43:56 GMT" } ]
2022-11-28T00:00:00
[ [ "Reddy", "Revanth Gangi", "" ], [ "Chetan", "Sai", "" ], [ "Wang", "Zhenhailong", "" ], [ "Fung", "Yi R.", "" ], [ "Conger", "Kathryn", "" ], [ "Elsayed", "Ahmed", "" ], [ "Palmer", "Martha", "" ], [ "Nakov", "Preslav", "" ], [ "Hovy", "Eduard", "" ], [ "Small", "Kevin", "" ], [ "Ji", "Heng", "" ] ]
new_dataset
0.98433
2112.15093
Jingye Chen
Haiyang Yu, Jingye Chen, Bin Li, Jianqi Ma, Mengnan Guan, Xixi Xu, Xiaocong Wang, Shaobo Qu, Xiangyang Xue
Benchmarking Chinese Text Recognition: Datasets, Baselines, and an Empirical Study
Code is available at https://github.com/FudanVI/benchmarking-chinese-text-recognition
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The flourishing of deep learning has driven the rapid development of text recognition in recent years. However, existing text recognition methods are mainly proposed for English texts. As another widely spoken language, Chinese offers extensive application markets for Chinese text recognition (CTR). Based on our observations, we attribute the scarce attention on CTR to the lack of reasonable dataset construction standards, unified evaluation protocols, and results of the existing baselines. To fill this gap, we manually collect CTR datasets from publicly available competitions, projects, and papers. According to application scenarios, we divide the collected datasets into four categories: scene, web, document, and handwriting datasets. Besides, we standardize the evaluation protocols in CTR. With unified evaluation protocols, we evaluate a series of representative text recognition methods on the collected datasets to provide baselines. The experimental results indicate that the performance of baselines on CTR datasets is not as good as that on English datasets, owing to the characteristics of Chinese texts, which are quite different from the Latin alphabet. Moreover, we observe that by introducing radical-level supervision as an auxiliary task, the performance of baselines can be further boosted. The code and datasets are made publicly available at https://github.com/FudanVI/benchmarking-chinese-text-recognition.
[ { "version": "v1", "created": "Thu, 30 Dec 2021 15:30:52 GMT" }, { "version": "v2", "created": "Fri, 25 Nov 2022 12:03:17 GMT" } ]
2022-11-28T00:00:00
[ [ "Yu", "Haiyang", "" ], [ "Chen", "Jingye", "" ], [ "Li", "Bin", "" ], [ "Ma", "Jianqi", "" ], [ "Guan", "Mengnan", "" ], [ "Xu", "Xixi", "" ], [ "Wang", "Xiaocong", "" ], [ "Qu", "Shaobo", "" ], [ "Xue", "Xiangyang", "" ] ]
new_dataset
0.999544
2204.06979
Daniel Coquelin
Daniel Coquelin, Behnood Rasti, Markus Götz, Pedram Ghamisi, Richard Gloaguen, and Achim Streit
HyDe: The First Open-Source, Python-Based, GPU-Accelerated Hyperspectral Denoising Package
5 pages
null
10.1109/WHISPERS56178.2022.9955088
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
As with any physical instrument, hyperspectral cameras induce different kinds of noise in the acquired data. Therefore, hyperspectral denoising is a crucial step for analyzing hyperspectral images (HSIs). Conventional computational methods rarely use GPUs to improve efficiency and are not fully open-source. Alternatively, deep learning-based methods are often open-source and use GPUs, but their training and utilization for real-world applications remain non-trivial for many researchers. Consequently, we propose HyDe: the first open-source, GPU-accelerated, Python-based hyperspectral image denoising toolbox, which aims to provide a large set of methods with an easy-to-use environment. HyDe includes a variety of methods ranging from low-rank wavelet-based methods to deep neural network (DNN) models. HyDe's interface dramatically improves the interoperability of these methods and the performance of the underlying functions. In fact, these methods maintain similar HSI denoising performance to their original implementations while consuming nearly ten times less energy. Furthermore, we present a method for training DNNs for denoising HSIs which are not spatially related to the training dataset, i.e., training on ground-level HSIs for denoising HSIs with other perspectives including airborne, drone-borne, and space-borne. To utilize the trained DNNs, we show a sliding window method to effectively denoise HSIs which would otherwise require more than 40 GB of memory. The package can be found at: \url{https://github.com/Helmholtz-AI-Energy/HyDe}.
[ { "version": "v1", "created": "Thu, 14 Apr 2022 14:08:55 GMT" } ]
2022-11-28T00:00:00
[ [ "Coquelin", "Daniel", "" ], [ "Rasti", "Behnood", "" ], [ "Götz", "Markus", "" ], [ "Ghamisi", "Pedram", "" ], [ "Gloaguen", "Richard", "" ], [ "Streit", "Achim", "" ] ]
new_dataset
0.993399
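The sliding-window inference mentioned at the end of the abstract can be sketched independently of HyDe's actual API: tile the hyperspectral cube spatially so the full volume never has to be resident at once. Overlap and blending between tiles are omitted for brevity, and `denoise_fn` is a stand-in for any denoising model:

```python
import numpy as np

def denoise_sliding_window(cube, denoise_fn, tile=256):
    # Denoise a (H, W, bands) cube tile by tile to bound peak memory use.
    h, w, bands = cube.shape
    out = np.empty_like(cube)
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            block = cube[y:y + tile, x:x + tile, :]
            out[y:y + tile, x:x + tile, :] = denoise_fn(block)
    return out

# Example with an identity "model" on random data:
cube = np.random.rand(512, 512, 100).astype(np.float32)
restored = denoise_sliding_window(cube, lambda b: b, tile=256)
assert restored.shape == cube.shape
```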
2204.07719
Amnon Drory
Amnon Drory, Shai Avidan and Raja Giryes
Stress-Testing Point Cloud Registration on Automotive LiDAR
Accepted to the NeurIPS 2022 workshop on Machine Learning for Autonomous Driving. Project Page: https://github.com/AmnonDrory/LidarRegistration
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
Rigid Point Cloud Registration (PCR) algorithms aim to estimate the 6-DOF relative motion between two point clouds, which is important in various fields, including autonomous driving. Recent years have seen a significant improvement in global PCR algorithms, i.e. algorithms that can handle a large relative motion. This has been demonstrated in various scenarios, including indoor scenes, but has only been minimally tested in the Automotive setting, where point clouds are produced by vehicle-mounted LiDAR sensors. In this work, we aim to answer questions that are important for automotive applications, including: which of the new algorithms is the most accurate, and which is fastest? How transferable are deep-learning approaches, e.g. what happens when you train a network with data from Boston, and run it in a vehicle in Singapore? How small can the overlap between point clouds be before the algorithms start to deteriorate? To what extent are the algorithms rotation invariant? Our results are at times surprising. When comparing robust parameter estimation methods for registration, we find that the fastest and most accurate is not one of the newest approaches. Instead, it is a modern variant of the well known RANSAC technique. We also suggest a new outlier filtering method, Grid-Prioritized Filtering (GPF), to further improve it. An additional contribution of this work is an algorithm for selecting challenging sets of frame-pairs from automotive LiDAR datasets. This enables meaningful benchmarking in the Automotive LiDAR setting, and can also improve training for learning algorithms.
[ { "version": "v1", "created": "Sat, 16 Apr 2022 05:10:55 GMT" }, { "version": "v2", "created": "Fri, 25 Nov 2022 13:20:27 GMT" } ]
2022-11-28T00:00:00
[ [ "Drory", "Amnon", "" ], [ "Avidan", "Shai", "" ], [ "Giryes", "Raja", "" ] ]
new_dataset
0.997627
2205.06887
Pritam Sarkar
Pritam Sarkar, Aaron Posen, Ali Etemad
AVCAffe: A Large Scale Audio-Visual Dataset of Cognitive Load and Affect for Remote Work
Accepted in AAAI 2023
null
null
null
cs.HC cs.CV cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
We introduce AVCAffe, the first Audio-Visual dataset consisting of Cognitive load and Affect attributes. We record AVCAffe by simulating remote work scenarios over a video-conferencing platform, where subjects collaborate to complete a number of cognitively engaging tasks. AVCAffe is the largest originally collected (not collected from the Internet) affective dataset in the English language. We recruit 106 participants from 18 different countries of origin, spanning an age range of 18 to 57 years, with a balanced male-female ratio. AVCAffe comprises a total of 108 hours of video, equivalent to more than 58,000 clips, along with task-based self-reported ground-truth labels for arousal, valence, and cognitive load attributes such as mental demand, temporal demand, effort, and a few others. We believe AVCAffe will be a challenging benchmark for the deep learning research community, given the inherent difficulty of classifying affect and cognitive load in particular. Moreover, our dataset fills an existing timely gap by facilitating the creation of learning systems for better self-management of remote work meetings, and further study of hypotheses regarding the impact of remote work on cognitive load and affective states.
[ { "version": "v1", "created": "Fri, 13 May 2022 20:55:25 GMT" }, { "version": "v2", "created": "Fri, 25 Nov 2022 04:37:15 GMT" } ]
2022-11-28T00:00:00
[ [ "Sarkar", "Pritam", "" ], [ "Posen", "Aaron", "" ], [ "Etemad", "Ali", "" ] ]
new_dataset
0.999846
2205.11108
D. Murugan
Petchiammal A, Briskline Kiruba S, D. Murugan, Pandarasamy A
Paddy Doctor: A Visual Image Dataset for Automated Paddy Disease Classification and Benchmarking
null
null
10.1145/3570991.3570994
null
cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
One of the critical biotic stress factors paddy farmers face is diseases caused by bacteria, fungi, and other organisms. These diseases affect plants' health severely and lead to significant crop loss. Most of these diseases can be identified by regularly observing the leaves and stems under expert supervision. In a country with vast agricultural regions and limited crop protection experts, manual identification of paddy diseases is challenging. Thus, to address this problem, it is necessary to automate the disease identification process and provide easily accessible decision support tools to enable effective crop protection measures. However, the lack of public datasets with detailed disease information limits the practical implementation of accurate disease detection systems. This paper presents \emph{Paddy Doctor}, a visual image dataset for identifying paddy diseases. Our dataset contains 16,225 annotated paddy leaf images across 13 classes (12 diseases and normal leaf). We benchmarked the \emph{Paddy Doctor} dataset using a Convolutional Neural Network (CNN) and four transfer-learning-based models (VGG16, MobileNet, Xception, and ResNet34). The experimental results showed that ResNet34 achieved the highest F1-score of 97.50%. We publicly release our dataset and reproducible code for community use.
[ { "version": "v1", "created": "Mon, 23 May 2022 07:57:40 GMT" }, { "version": "v2", "created": "Fri, 25 Nov 2022 11:23:28 GMT" } ]
2022-11-28T00:00:00
[ [ "A", "Petchiammal", "" ], [ "S", "Briskline Kiruba", "" ], [ "Murugan", "D.", "" ], [ "A", "Pandarasamy", "" ] ]
new_dataset
0.999763
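As a hedged sketch of the strongest baseline, the ResNet34 transfer-learning setup reduces to loading ImageNet weights and replacing the classification head with 13 outputs (12 diseases plus normal leaf); the paper's training specifics are not reproduced here:

```python
import torch
import torchvision

# Load ImageNet-pretrained ResNet34 and swap the head for 13 paddy classes.
model = torchvision.models.resnet34(
    weights=torchvision.models.ResNet34_Weights.IMAGENET1K_V1)
model.fc = torch.nn.Linear(model.fc.in_features, 13)

x = torch.randn(4, 3, 224, 224)   # a dummy batch standing in for leaf images
logits = model(x)                 # shape: (4, 13)
print(logits.shape)
```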
2206.04510
Si Shen
Si Shen, Jiangfeng Liu, Litao Lin, Ying Huang, Lin Zhang, Chang Liu, Yutong Feng, Dongbo Wang
SsciBERT: A Pre-trained Language Model for Social Science Texts
24 pages,2 figures
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The academic literature of the social sciences records human civilization and studies human social problems. As this literature grows at a large scale, ways to quickly find existing research on relevant issues have become an urgent demand for researchers. Previous studies, such as SciBERT, have shown that pre-training on domain-specific texts can improve the performance of natural language processing tasks. However, no pre-trained language model for the social sciences has been available so far. In light of this, the present research proposes a pre-trained model based on the abstracts published in Social Science Citation Index (SSCI) journals. The models, which are available on GitHub (https://github.com/S-T-Full-Text-Knowledge-Mining/SSCI-BERT), show excellent performance on discipline classification, abstract structure-function recognition, and named entity recognition tasks with the social sciences literature.
[ { "version": "v1", "created": "Thu, 9 Jun 2022 13:49:04 GMT" }, { "version": "v2", "created": "Sat, 11 Jun 2022 14:47:38 GMT" }, { "version": "v3", "created": "Fri, 25 Nov 2022 03:28:20 GMT" } ]
2022-11-28T00:00:00
[ [ "Shen", "Si", "" ], [ "Liu", "Jiangfeng", "" ], [ "Lin", "Litao", "" ], [ "Huang", "Ying", "" ], [ "Zhang", "Lin", "" ], [ "Liu", "Chang", "" ], [ "Feng", "Yutong", "" ], [ "Wang", "Dongbo", "" ] ]
new_dataset
0.996114
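The released checkpoints can presumably be loaded with Hugging Face transformers; the model id below is hypothetical and should be verified against the GitHub page linked in the abstract:

```python
from transformers import AutoTokenizer, AutoModel

# Hypothetical checkpoint id; the GitHub README lists the exact names.
name = "KM4STfulltext/SSCI-BERT-e2"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("Social capital affects civic participation.",
                   return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)   # (1, seq_len, hidden_size)
```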
2206.09059
Tejas Srinivasan
Tejas Srinivasan, Ting-Yun Chang, Leticia Leonor Pinto Alva, Georgios Chochlakis, Mohammad Rostami, Jesse Thomason
CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks
Accepted to NeurIPS 2022 Datasets and Benchmarks track
null
null
null
cs.CL cs.AI cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Current state-of-the-art vision-and-language models are evaluated on tasks either individually or in a multi-task setting, overlooking the challenges of continually learning (CL) tasks as they arrive. Existing CL benchmarks have facilitated research on task adaptation and mitigating "catastrophic forgetting", but are limited to vision-only and language-only tasks. We present CLiMB, a benchmark to study the challenge of learning multimodal tasks in a CL setting, and to systematically evaluate how upstream continual learning can rapidly generalize to new multimodal and unimodal tasks. CLiMB includes implementations of several CL algorithms and a modified Vision-Language Transformer (ViLT) model that can be deployed on both multimodal and unimodal tasks. We find that common CL methods can help mitigate forgetting during multimodal task learning, but do not enable cross-task knowledge transfer. We envision that CLiMB will facilitate research on a new class of CL algorithms for this challenging multimodal setting.
[ { "version": "v1", "created": "Sat, 18 Jun 2022 00:16:37 GMT" }, { "version": "v2", "created": "Thu, 24 Nov 2022 21:40:45 GMT" } ]
2022-11-28T00:00:00
[ [ "Srinivasan", "Tejas", "" ], [ "Chang", "Ting-Yun", "" ], [ "Alva", "Leticia Leonor Pinto", "" ], [ "Chochlakis", "Georgios", "" ], [ "Rostami", "Mohammad", "" ], [ "Thomason", "Jesse", "" ] ]
new_dataset
0.994955
2206.15153
Can Xiang
Can Xiang, Chunming Tang
Some $3$-designs and shortened codes from binary cyclic codes with three zeros
20 pages. arXiv admin note: text overlap with arXiv:2110.03881, arXiv:2007.05923
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Linear codes and $t$-designs interact closely with each other. It is well known that some $t$-designs have been constructed from certain linear codes in recent years. However, only a small number of infinite families of extended codes of linear codes holding an infinite family of $t$-designs with $t\geq 3$ have been reported in the literature. In this paper, we study the extended codes of the augmented codes of a class of binary cyclic codes with three zeros and their dual codes, and show that those codes hold $3$-designs. Furthermore, we obtain some shortened codes from the studied cyclic codes and explicitly determine their parameters. Some of those shortened codes are optimal or almost optimal.
[ { "version": "v1", "created": "Thu, 30 Jun 2022 09:36:38 GMT" }, { "version": "v2", "created": "Thu, 24 Nov 2022 12:07:54 GMT" } ]
2022-11-28T00:00:00
[ [ "Xiang", "Can", "" ], [ "Tang", "Chunming", "" ] ]
new_dataset
0.996548
2207.13345
Tomasz Kryjak
Piotr Wzorek and Tomasz Kryjak
Traffic Sign Detection With Event Cameras and DCNN
Accepted for the SPA 2022 conference, Poznan, Poland
null
10.23919/SPA53010.2022.9927864
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
In recent years, event cameras (DVS - Dynamic Vision Sensors) have been used in vision systems as an alternative or supplement to traditional cameras. They are characterised by high dynamic range, high temporal resolution, low latency, and reliable performance in limited lighting conditions -- parameters that are particularly important in the context of advanced driver assistance systems (ADAS) and self-driving cars. In this work, we test whether these rather novel sensors can be applied to the popular task of traffic sign detection. To this end, we analyse different representations of the event data: event frame, event frequency, and the exponentially decaying time surface, and apply video frame reconstruction using a deep neural network called FireNet. We use the deep convolutional neural network YOLOv4 as a detector. For particular representations, we obtain a detection accuracy in the range of 86.9-88.9% mAP@0.5. The use of a fusion of the considered representations allows us to obtain a detector with higher accuracy of 89.9% mAP@0.5. In comparison, the detector for the frames reconstructed with FireNet is characterised by an accuracy of 72.67% mAP@0.5. The results obtained illustrate the potential of event cameras in automotive applications, either as standalone sensors or in close cooperation with typical frame-based cameras.
[ { "version": "v1", "created": "Wed, 27 Jul 2022 08:01:54 GMT" } ]
2022-11-28T00:00:00
[ [ "Wzorek", "Piotr", "" ], [ "Kryjak", "Tomasz", "" ] ]
new_dataset
0.999072
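The "event frame" representation named in the abstract can be sketched directly: accumulate DVS events (x, y, timestamp, polarity) within a time window into a two-channel image. A minimal NumPy version, with synthetic events and an assumed 346x260 sensor resolution:

```python
import numpy as np

def events_to_frame(x, y, t, p, t0, t1, height, width):
    # Count events per pixel in [t0, t1); channel 0: negative, 1: positive.
    frame = np.zeros((2, height, width), dtype=np.float32)
    mask = (t >= t0) & (t < t1)
    for xi, yi, pi in zip(x[mask], y[mask], p[mask]):
        frame[1 if pi > 0 else 0, yi, xi] += 1.0
    return frame

# Synthetic events for illustration:
n = 1000
rng = np.random.default_rng(0)
x = rng.integers(0, 346, n); y = rng.integers(0, 260, n)
t = rng.uniform(0.0, 0.05, n); p = rng.choice([-1, 1], n)
frame = events_to_frame(x, y, t, p, 0.0, 0.01, 260, 346)
print(frame.shape, frame.sum())
```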
2208.00001
Muhammad Asad
Muhammad Asad, Reuben Dorent, Tom Vercauteren
FastGeodis: Fast Generalised Geodesic Distance Transform
Accepted at Journal of Open Source Software (JOSS)
null
10.21105/joss.04532
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
The FastGeodis package provides an efficient implementation for computing Geodesic and Euclidean distance transforms (or a mixture of both), targeting efficient utilisation of CPU and GPU hardware. In particular, it implements the parallelisable raster scan method from Criminisi et al. (2009), where elements in a row (2D) or plane (3D) can be computed with parallel threads. This package is able to handle 2D as well as 3D data, where it achieves up to a 20x speedup on a CPU and up to a 74x speedup on a GPU as compared to an existing open-source library (Wang, 2020) that uses a non-parallelisable single-thread CPU implementation. The performance speedups reported here were evaluated using 3D volume data on an Nvidia GeForce Titan X (12 GB) with a 6-Core Intel Xeon E5-1650 CPU. Further in-depth comparisons of performance improvements are discussed in the FastGeodis documentation: https://fastgeodis.readthedocs.io
[ { "version": "v1", "created": "Tue, 26 Jul 2022 15:01:37 GMT" }, { "version": "v2", "created": "Wed, 23 Nov 2022 23:33:37 GMT" } ]
2022-11-28T00:00:00
[ [ "Asad", "Muhammad", "" ], [ "Dorent", "Reuben", "" ], [ "Vercauteren", "Tom", "" ] ]
new_dataset
0.998069
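For reference, the raster-scan transform that FastGeodis parallelises can be written in plain (slow, sequential) NumPy: forward and backward sweeps relax each pixel's distance using already-visited neighbours, with a local cost mixing spatial distance and intensity difference. This is an illustrative re-derivation following Criminisi et al. (2009), not the package's implementation:

```python
import numpy as np

def geodesic_raster_scan(image, seeds, v=1e10, lamb=1.0, passes=2):
    # Generalised geodesic distance on a 2D image from a boolean seed mask.
    dist = np.where(seeds, 0.0, v).astype(np.float64)
    h, w = image.shape
    fwd = [(-1, -1), (-1, 0), (-1, 1), (0, -1)]   # neighbours seen before a pixel
    for _ in range(passes):
        for ys, xs, offs in [(range(h), range(w), fwd),
                             (range(h - 1, -1, -1), range(w - 1, -1, -1),
                              [(-dy, -dx) for dy, dx in fwd])]:
            for yy in ys:
                for xx in xs:
                    for dy, dx in offs:
                        ny, nx = yy + dy, xx + dx
                        if 0 <= ny < h and 0 <= nx < w:
                            spatial = (dy * dy + dx * dx) ** 0.5
                            di = image[yy, xx] - image[ny, nx]
                            cost = (spatial ** 2 + (lamb * di) ** 2) ** 0.5
                            dist[yy, xx] = min(dist[yy, xx], dist[ny, nx] + cost)
    return dist

img = np.random.rand(32, 32)
seeds = np.zeros_like(img, dtype=bool); seeds[16, 16] = True
print(geodesic_raster_scan(img, seeds).max())
```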
2208.03196
Vinod Kumar Chauhan
Vinod Kumar Chauhan, Anshul Thakur, Odhran O'Donoghue and David A. Clifton
COPER: Continuous Patient State Perceiver
2 figures; presented in IEEE International Conference on Biomedical and Health Informatics (IEEE BHI-2022)
null
10.1109/BHI56158.2022.9926807
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In electronic health records (EHRs), irregular time-series (ITS) occur naturally due to patient health dynamics, reflected by irregular hospital visits, diseases/conditions, and the necessity to measure different vital signs at each visit. ITS present challenges for training machine learning algorithms, which are mostly built on the assumption of a coherent, fixed-dimensional feature space. In this paper, we propose a novel COntinuous patient state PERceiver model, called COPER, to cope with ITS in EHRs. COPER uses the Perceiver model and the concept of neural ordinary differential equations (ODEs) to learn the continuous-time dynamics of the patient state, i.e., continuity of the input space and continuity of the output space. The neural ODEs help COPER generate regular time-series to feed to the Perceiver model, which has the capability to handle multi-modality large-scale inputs. To evaluate the performance of the proposed model, we use the in-hospital mortality prediction task on the MIMIC-III dataset and carefully design experiments to study irregularity. Comparisons with baselines demonstrate the efficacy of the proposed model.
[ { "version": "v1", "created": "Fri, 5 Aug 2022 14:32:57 GMT" }, { "version": "v2", "created": "Thu, 24 Nov 2022 13:46:38 GMT" } ]
2022-11-28T00:00:00
[ [ "Chauhan", "Vinod Kumar", "" ], [ "Thakur", "Anshul", "" ], [ "O'Donoghue", "Odhran", "" ], [ "Clifton", "David A.", "" ] ]
new_dataset
0.993065
2210.08836
Peirong Zhang
Peirong Zhang, Jiajia Jiang, Yuliang Liu, Lianwen Jin
MSDS: A Large-Scale Chinese Signature and Token Digit String Dataset for Handwriting Verification
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
Although online handwriting verification has made great progress recently, verification performance still falls far short of real-world requirements, owing to the small scale of the datasets as well as the limited biometric mediums. Therefore, this paper proposes a new handwriting verification benchmark dataset named Multimodal Signature and Digit String (MSDS), which consists of two subsets: MSDS-ChS (Chinese Signatures) and MSDS-TDS (Token Digit Strings), contributed by 402 users, with 20 genuine samples and 20 skilled forgeries per user per subset. MSDS-ChS consists of handwritten Chinese signatures, which, to the best of our knowledge, is the largest publicly available Chinese signature dataset for handwriting verification, at least eight times larger than existing online datasets. Meanwhile, MSDS-TDS consists of handwritten Token Digit Strings, i.e., the actual phone numbers of users, which have not been explored yet. Extensive experiments with different baselines are respectively conducted for MSDS-ChS and MSDS-TDS. Surprisingly, verification performances of state-of-the-art methods on MSDS-TDS are generally better than those on MSDS-ChS, which indicates that the handwritten Token Digit String could be a more effective biometric than the handwritten Chinese signature. This is a promising discovery that could inspire us to explore new biometric traits. The MSDS dataset is available at https://github.com/HCIILAB/MSDS.
[ { "version": "v1", "created": "Mon, 17 Oct 2022 08:23:12 GMT" }, { "version": "v2", "created": "Fri, 21 Oct 2022 04:57:21 GMT" }, { "version": "v3", "created": "Thu, 17 Nov 2022 14:18:15 GMT" }, { "version": "v4", "created": "Thu, 24 Nov 2022 13:25:00 GMT" } ]
2022-11-28T00:00:00
[ [ "Zhang", "Peirong", "" ], [ "Jiang", "Jiajia", "" ], [ "Liu", "Yuliang", "" ], [ "Jin", "Lianwen", "" ] ]
new_dataset
0.99978
2210.15871
Henghui Ding
Henghui Ding, Chang Liu, Suchen Wang, Xudong Jiang
VLT: Vision-Language Transformer and Query Generation for Referring Segmentation
TPAMI
null
10.1109/TPAMI.2022.3217852
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a Vision-Language Transformer (VLT) framework for referring segmentation to facilitate deep interactions among multi-modal information and enhance the holistic understanding of vision-language features. There are different ways to understand the dynamic emphasis of a language expression, especially when interacting with the image. However, the learned queries in existing transformer works are fixed after training, which cannot cope with the randomness and huge diversity of language expressions. To address this issue, we propose a Query Generation Module, which dynamically produces multiple sets of input-specific queries to represent the diverse comprehensions of a language expression. To find the best among these diverse comprehensions, so as to generate a better mask, we propose a Query Balance Module to selectively fuse the corresponding responses of the set of queries. Furthermore, to enhance the model's ability to deal with diverse language expressions, we consider inter-sample learning to explicitly endow the model with knowledge of understanding different language expressions for the same object. We introduce masked contrastive learning to narrow down the features of different expressions for the same target object while distinguishing the features of different objects. The proposed approach is lightweight and achieves new state-of-the-art referring segmentation results consistently on five datasets.
[ { "version": "v1", "created": "Fri, 28 Oct 2022 03:36:07 GMT" } ]
2022-11-28T00:00:00
[ [ "Ding", "Henghui", "" ], [ "Liu", "Chang", "" ], [ "Wang", "Suchen", "" ], [ "Jiang", "Xudong", "" ] ]
new_dataset
0.991777
2211.07521
Eslam Bakr
Eslam Mohamed Bakr, Ahmad El Sallab, Mohsen A. Rashwan
PKCAM: Previous Knowledge Channel Attention Module
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recently, attention mechanisms have been explored with ConvNets, both across the spatial and channel dimensions. However, to our knowledge, all existing methods devote their attention modules to capturing local interactions at a single scale. In this paper, we propose a Previous Knowledge Channel Attention Module (PKCAM), which captures channel-wise relations across different layers to model the global context. Our proposed module PKCAM is easily integrated into any feed-forward CNN architecture and trained in an end-to-end fashion with a negligible footprint due to its lightweight property. We validate our novel architecture through extensive experiments on image classification and object detection tasks with different backbones. Our experiments show consistent improvements in performance over their counterparts. Our code is published at https://github.com/eslambakr/EMCA.
[ { "version": "v1", "created": "Mon, 14 Nov 2022 16:49:11 GMT" }, { "version": "v2", "created": "Fri, 25 Nov 2022 16:03:20 GMT" } ]
2022-11-28T00:00:00
[ [ "Bakr", "Eslam Mohamed", "" ], [ "Sallab", "Ahmad El", "" ], [ "Rashwan", "Mohsen A.", "" ] ]
new_dataset
0.987758
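For orientation, the single-layer channel attention that modules like PKCAM generalise is the standard squeeze-and-excitation block; the sketch below is that standard block in PyTorch, not PKCAM itself, which additionally aggregates context across layers:

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    # Squeeze-and-excitation style channel attention over one feature map.
    def __init__(self, channels, reduction=16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                        # x: (B, C, H, W)
        w = x.mean(dim=(2, 3))                   # squeeze: global average pool
        w = self.fc(w).unsqueeze(-1).unsqueeze(-1)
        return x * w                             # excite: reweight channels

x = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(x).shape)             # (2, 64, 32, 32)
```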
2211.08788
Yong Hu
Yong Hu, Fandong Meng, Jie Zhou
CSCD-IME: Correcting Spelling Errors Generated by Pinyin IME
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Chinese Spelling Correction (CSC) is a task to detect and correct spelling mistakes in texts. In fact, most Chinese input is produced with the pinyin input method, so the study of spelling errors in this process is especially practical and valuable. However, there is still no research dedicated to this essential scenario. In this paper, we first present a Chinese Spelling Correction Dataset for errors generated by the pinyin IME (CSCD-IME), including 40,000 annotated sentences from real posts of official media on Sina Weibo. Furthermore, we propose a novel method to automatically construct large-scale and high-quality pseudo data by simulating the input through the pinyin IME. A series of analyses and experiments on CSCD-IME show that spelling errors produced by the pinyin IME hold a particular distribution at the pinyin and semantic levels and are challenging enough. Meanwhile, our proposed pseudo-data construction method can better fit this error distribution and improve the performance of CSC systems. Finally, we provide a useful guide to using pseudo data, including the data scale, the data source, and the training strategy.
[ { "version": "v1", "created": "Wed, 16 Nov 2022 09:25:42 GMT" }, { "version": "v2", "created": "Thu, 24 Nov 2022 09:37:41 GMT" } ]
2022-11-28T00:00:00
[ [ "Hu", "Yong", "" ], [ "Meng", "Fandong", "" ], [ "Zhou", "Jie", "" ] ]
new_dataset
0.999835
2211.12139
Emily Muller
Emily Muller, Emily Gemmell, Ishmam Choudhury, Ricky Nathvani, Antje Barbara Metzler, James Bennett, Emily Denton, Seth Flaxman, Majid Ezzati
City-Wide Perceptions of Neighbourhood Quality using Street View Images
null
null
null
null
cs.CV cs.CY
http://creativecommons.org/licenses/by/4.0/
The interactions of individuals with city neighbourhoods is determined, in part, by the perceived quality of urban environments. Perceived neighbourhood quality is a core component of urban vitality, influencing social cohesion, sense of community, safety, activity and mental health of residents. Large-scale assessment of perceptions of neighbourhood quality was pioneered by the Place Pulse projects. Researchers demonstrated the efficacy of crowd-sourcing perception ratings of image pairs across 56 cities and training a model to predict perceptions from street-view images. Variation across cities may limit Place Pulse's usefulness for assessing within-city perceptions. In this paper, we set forth a protocol for city-specific dataset collection for the perception: 'On which street would you prefer to walk?'. This paper describes our methodology, based in London, including collection of images and ratings, web development, model training and mapping. Assessment of within-city perceptions of neighbourhoods can identify inequities, inform planning priorities, and identify temporal dynamics. Code available: https://emilymuller1991.github.io/urban-perceptions/.
[ { "version": "v1", "created": "Tue, 22 Nov 2022 10:16:35 GMT" }, { "version": "v2", "created": "Thu, 24 Nov 2022 11:09:23 GMT" } ]
2022-11-28T00:00:00
[ [ "Muller", "Emily", "" ], [ "Gemmell", "Emily", "" ], [ "Choudhury", "Ishmam", "" ], [ "Nathvani", "Ricky", "" ], [ "Metzler", "Antje Barbara", "" ], [ "Bennett", "James", "" ], [ "Denton", "Emily", "" ], [ "Flaxman", "Seth", "" ], [ "Ezzati", "Majid", "" ] ]
new_dataset
0.99652
2211.13090
Sifeng He
Sifeng He, Yue He, Minlong Lu, Chen Jiang, Xudong Yang, Feng Qian, Xiaobo Zhang, Lei Yang, Jiandong Zhang
TransVCL: Attention-enhanced Video Copy Localization Network with Flexible Supervision
Accepted by the Thirty-Seventh AAAI Conference on Artificial Intelligence(AAAI2023)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Video copy localization aims to precisely localize all the copied segments within a pair of untrimmed videos in video retrieval applications. Previous methods typically start from a frame-to-frame similarity matrix generated by cosine similarity between frame-level features of the input video pair, and then detect and refine the boundaries of copied segments on the similarity matrix under temporal constraints. In this paper, we propose TransVCL: an attention-enhanced video copy localization network, which is optimized directly from initial frame-level features and trained end-to-end with three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for similarity matrix generation, and a temporal alignment module for copied segment localization. In contrast to previous methods that demand a handcrafted similarity matrix, TransVCL incorporates long-range temporal information between the feature sequence pair using self- and cross-attention layers. With the joint design and optimization of the three components, the similarity matrix can be learned to present more discriminative copied patterns, leading to significant improvements over previous methods on segment-level labeled datasets (VCSL and VCDB). Besides the state-of-the-art performance in the fully supervised setting, the attention architecture facilitates TransVCL to further exploit unlabeled or simply video-level labeled data. Additional experiments of supplementing video-level labeled datasets including SVD and FIVR reveal the high flexibility of TransVCL from full supervision to semi-supervision (with or without video-level annotation). Code is publicly available at https://github.com/transvcl/TransVCL.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 16:19:45 GMT" }, { "version": "v2", "created": "Thu, 24 Nov 2022 01:55:14 GMT" } ]
2022-11-28T00:00:00
[ [ "He", "Sifeng", "" ], [ "He", "Yue", "" ], [ "Lu", "Minlong", "" ], [ "Jiang", "Chen", "" ], [ "Yang", "Xudong", "" ], [ "Qian", "Feng", "" ], [ "Zhang", "Xiaobo", "" ], [ "Yang", "Lei", "" ], [ "Zhang", "Jiandong", "" ] ]
new_dataset
0.976836
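The frame-to-frame similarity matrix that copy-localization methods start from is simple to sketch: cosine similarity between L2-normalised frame features, in which copied segments show up as high-similarity diagonal streaks. The feature dimensions below are placeholders:

```python
import torch
import torch.nn.functional as F

# Frame-level features for two videos (placeholder shapes: frames x dims).
feats_a = torch.randn(120, 512)
feats_b = torch.randn(150, 512)

# Cosine similarity = dot product of L2-normalised features.
sim = F.normalize(feats_a, dim=1) @ F.normalize(feats_b, dim=1).T
print(sim.shape)   # (120, 150), values in [-1, 1]
```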
2211.13251
Keqiang Sun
Keqiang Sun, Shangzhe Wu, Ning Zhang, Zhaoyang Huang, Quan Wang, Hongsheng Li
CGOF++: Controllable 3D Face Synthesis with Conditional Generative Occupancy Fields
This article is an extension of the NeurIPS'22 paper arXiv:2206.08361
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Capitalizing on the recent advances in image generation models, existing controllable face image synthesis methods are able to generate high-fidelity images with some levels of controllability, e.g., controlling the shapes, expressions, textures, and poses of the generated face images. However, previous methods focus on controllable 2D image generative models, which are prone to producing inconsistent face images under large expression and pose changes. In this paper, we propose a new NeRF-based conditional 3D face synthesis framework, which enables 3D controllability over the generated face images by imposing explicit 3D conditions from 3D face priors. At its core is a conditional Generative Occupancy Field (cGOF++) that effectively enforces the shape of the generated face to conform to a given 3D Morphable Model (3DMM) mesh, built on top of EG3D [1], a recent tri-plane-based generative model. To achieve accurate control over fine-grained 3D face shapes of the synthesized images, we additionally incorporate a 3D landmark loss as well as a volume warping loss into our synthesis framework. Experiments validate the effectiveness of the proposed method, which is able to generate high-fidelity face images and shows more precise 3D controllability than state-of-the-art 2D-based controllable face synthesis methods.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 19:02:50 GMT" } ]
2022-11-28T00:00:00
[ [ "Sun", "Keqiang", "" ], [ "Wu", "Shangzhe", "" ], [ "Zhang", "Ning", "" ], [ "Huang", "Zhaoyang", "" ], [ "Wang", "Quan", "" ], [ "Li", "Hongsheng", "" ] ]
new_dataset
0.986775
2211.13376
Xinying Qiu
Xinying Qiu, Guofeng Shi
InDEX: Indonesian Idiom and Expression Dataset for Cloze Test
Accepted to "2022 International Conference on Asian Language Processing (IALP)"
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
We propose InDEX, an Indonesian Idiom and Expression dataset for cloze test. The dataset contains 10438 unique sentences for 289 idioms and expressions, for which we generate 15 different types of distractors, resulting in a large cloze-style corpus. Many baseline models of cloze-test reading comprehension apply BERT with random initialization to learn embedding representations. But idioms and fixed expressions differ in that the literal meaning of the phrases may or may not be consistent with their contextual meaning. Therefore, we explore different ways to combine static and contextual representations for a stronger baseline model. Experiments show that combining definitions with random initialization better supports cloze-test model performance for idioms, whether independently or mixed with fixed expressions. For fixed expressions with no special meaning, static embedding with random initialization is sufficient for the cloze-test model.
[ { "version": "v1", "created": "Thu, 24 Nov 2022 02:05:47 GMT" } ]
2022-11-28T00:00:00
[ [ "Qiu", "Xinying", "" ], [ "Shi", "Guofeng", "" ] ]
new_dataset
0.999795
2211.13382
Yao Lai
Yao Lai, Yao Mu, Ping Luo
MaskPlace: Fast Chip Placement via Reinforced Visual Representation Learning
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Placement is an essential task in modern chip design, aiming at placing millions of circuit modules on a 2D chip canvas. Unlike the human-centric solution, which requires months of intense effort by hardware engineers to produce a layout that minimizes delay and energy consumption, deep reinforcement learning has become an emerging autonomous tool. However, the learning-centric method is still in its early stage, impeded by a massive design space whose size is ten raised to the order of a few thousand. This work presents MaskPlace, which automatically generates a valid chip layout design within a few hours, with performance superior or comparable to recent advanced approaches. It has several appealing benefits that prior arts do not have. Firstly, MaskPlace recasts placement as a problem of learning pixel-level visual representations to comprehensively describe millions of modules on a chip, enabling placement in a high-resolution canvas and a large action space. It outperforms recent methods that represent a chip as a hypergraph. Secondly, it enables training the policy network with an intuitive, dense reward function, rather than the complicated sparse reward functions of previous methods. Thirdly, extensive experiments on many public benchmarks show that MaskPlace outperforms existing RL approaches in all key performance metrics, including wirelength, congestion, and density. For example, it achieves 60%-90% wirelength reduction and guarantees zero overlaps. We believe MaskPlace can improve AI-assisted chip layout design. The deliverables are released at https://laiyao1.github.io/maskplace.
[ { "version": "v1", "created": "Thu, 24 Nov 2022 02:22:09 GMT" } ]
2022-11-28T00:00:00
[ [ "Lai", "Yao", "" ], [ "Mu", "Yao", "" ], [ "Luo", "Ping", "" ] ]
new_dataset
0.994925
2211.13391
Zhifeng Zhu
Yue Xin, Kang Zhou, Xuanyao Fong, Yumeng Yang, Shenghua Gao, Zhifeng Zhu
Electrical Tunable Spintronic Neuron with Trainable Activation Function
26 pages, 9 figures
null
null
null
cs.ET cond-mat.dis-nn
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Spintronic devices have been widely studied for the hardware realization of artificial neurons. The stochastic switching of a magnetic tunnel junction driven by spin torque is commonly used to produce the sigmoid activation function. However, in previous studies the shape of the activation function is fixed during the training of the neural network. This restricts the updating of weights and results in limited performance. In this work, we exploit the physics behind spin-torque-induced magnetization switching to enable dynamic changes of the activation function during the training process. Specifically, the pulse width and magnetic anisotropy can be electrically controlled to change the slope of the activation function, which enables the faster or slower change of output required by the backpropagation algorithm. This is also similar to the idea of batch normalization, widely used in machine learning. Thus, this work demonstrates that such algorithms are no longer limited to software implementations; they can in fact be realized in spintronic hardware using a single device. Finally, we show that the accuracy of handwritten digit recognition can be improved from 88% to 91.3% by using these trainable spintronic neurons, without introducing additional energy consumption. Our proposals can stimulate the hardware realization of spintronic neural networks.
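Conceptually, the trainable activation described above behaves like a sigmoid whose slope is itself a learnable parameter. The sketch below is our own software analogue, assuming a slope parameter beta stands in for the electrically tuned pulse width and anisotropy (the device physics is not modeled):

```python
import numpy as np

def sigmoid(x, beta):
    # beta plays the role of the electrically tuned slope
    return 1.0 / (1.0 + np.exp(-beta * x))

def d_sigmoid_d_beta(x, beta):
    # gradient with respect to the slope, so backpropagation can train it
    s = sigmoid(x, beta)
    return x * s * (1.0 - s)

x = np.linspace(-3.0, 3.0, 7)
for beta in (0.5, 1.0, 2.0):  # slower vs. faster change of the output
    print(beta, np.round(sigmoid(x, beta), 3))
```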
[ { "version": "v1", "created": "Thu, 24 Nov 2022 03:11:20 GMT" } ]
2022-11-28T00:00:00
[ [ "Xin", "Yue", "" ], [ "Zhou", "Kang", "" ], [ "Fong", "Xuanyao", "" ], [ "Yang", "Yumeng", "" ], [ "Gao", "Shenghua", "" ], [ "Zhu", "Zhifeng", "" ] ]
new_dataset
0.998082
2211.13432
Keisuke Okumura
Keisuke Okumura
LaCAM: Search-Based Algorithm for Quick Multi-Agent Pathfinding
to be presented at AAAI-23
null
null
null
cs.AI cs.MA cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel complete algorithm for multi-agent pathfinding (MAPF) called lazy constraints addition search for MAPF (LaCAM). MAPF is the problem of finding collision-free paths for multiple agents on graphs and is the foundation of multi-robot coordination. LaCAM uses a two-level search to find solutions quickly, even with hundreds of agents or more. At the low level, it searches constraints on agents' locations. At the high level, it searches a sequence of all agents' locations, following the constraints specified by the low level. Our exhaustive experiments reveal that LaCAM is comparable to or outperforms state-of-the-art sub-optimal MAPF algorithms in a variety of scenarios, in terms of success rate, planning time, and sum-of-costs solution quality.
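To make the two-level idea concrete, here is a deliberately crude toy: joint configurations of all agents form the high-level search nodes, and a greedy collision-avoiding move generator stands in for the constraint-guided low-level search. Everything below is our simplification of the abstract (the real LaCAM lazily enumerates many constrained successors per node), not the paper's algorithm:

```python
from collections import deque

def neighbors(v, grid):
    x, y = v
    cand = [(x, y), (x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in cand
            if 0 <= a < len(grid) and 0 <= b < len(grid[0]) and grid[a][b] == 0]

def successor(config, grid):
    # Greedily move each agent toward its goal, avoiding vertex conflicts.
    # (LaCAM instead generates successors guided by lazily added constraints.)
    new, occupied = [], set()
    for v, goal in config:
        options = sorted(neighbors(v, grid),
                         key=lambda u: abs(u[0] - goal[0]) + abs(u[1] - goal[1]))
        nxt = next((u for u in options if u not in occupied), v)
        occupied.add(nxt)
        new.append((nxt, goal))
    return tuple(new)

def lacam_like(starts, goals, grid, max_steps=200):
    config = tuple(zip(starts, goals))
    frontier, seen = deque([(config, [])]), {config}
    while frontier:
        config, path = frontier.popleft()
        if all(v == g for v, g in config):
            return path
        nxt = successor(config, grid)
        if nxt not in seen and len(path) < max_steps:
            seen.add(nxt)
            frontier.append((nxt, path + [tuple(v for v, _ in nxt)]))
    return None

grid = [[0] * 4 for _ in range(4)]
print(lacam_like([(0, 0), (3, 3)], [(3, 3), (0, 0)], grid))
```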
[ { "version": "v1", "created": "Thu, 24 Nov 2022 06:27:18 GMT" } ]
2022-11-28T00:00:00
[ [ "Okumura", "Keisuke", "" ] ]
new_dataset
0.972694
2211.13536
Nana Obayashi
Andrea Vicari, Nana Obayashi, Francesco Stella, Gaetan Raynaud, Karen Mulleners, Cosimo Della Santina, and Josie Hughes
Proprioceptive Sensing of Soft Tentacles with Model Based Reconstruction for Controller Optimization
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
The success of soft robots in displaying emergent behaviors is tightly linked to the compliant interaction with the environment. However, to exploit such phenomena, proprioceptive sensing methods which do not hinder their softness are needed. In this work we propose a new sensing approach for soft underwater slender structures based on embedded pressure sensors and use a learning-based pipeline to link the sensor readings to the shape of the soft structure. Using two different modeling techniques, we compare the pose reconstruction accuracy and identify the optimal approach. Using the proprioceptive sensing capabilities we show how this information can be used to assess the swimming performance over a number of metrics, namely swimming thrust, tip deflection, and the traveling wave index. We conclude by demonstrating the robustness of the embedded sensor on a free swimming soft robotic squid swimming at a maximum velocity of 9.5 cm/s, with the absolute tip deflection being predicted within an error less than 9% without the aid of external sensors.
[ { "version": "v1", "created": "Thu, 24 Nov 2022 11:11:32 GMT" } ]
2022-11-28T00:00:00
[ [ "Vicari", "Andrea", "" ], [ "Obayashi", "Nana", "" ], [ "Stella", "Francesco", "" ], [ "Raynaud", "Gaetan", "" ], [ "Mulleners", "Karen", "" ], [ "Della Santina", "Cosimo", "" ], [ "Hughes", "Josie", "" ] ]
new_dataset
0.994158
2211.13573
Ehsan Tohidi Dr
Fariba Armandoust, Ehsan Tohidi, Martin Kasparick, Li Wang, Ahmet Hasim Gokceoglu, and Slawomir Stanczak
MIMO Systems with Reconfigurable Antennas: Joint Channel Estimation and Mode Selection
null
null
null
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
Reconfigurable antennas (RAs) are a promising technology to enhance the capacity and coverage of wireless communication systems. However, RA systems have two major challenges: (i) High computational complexity of mode selection, and (ii) High overhead of channel estimation for all modes. In this paper, we develop a low-complexity iterative mode selection algorithm for data transmission in an RA-MIMO system. Furthermore, we study channel estimation of an RA multi-user MIMO system. However, given the coherence time, it is challenging to estimate channels of all modes. We propose a mode selection scheme to select a subset of modes, train channels for the selected subset, and predict channels for the remaining modes. In addition, we propose a prediction scheme based on pattern correlation between modes. Representative simulation results demonstrate the system's channel estimation error and achievable sum-rate for various selected modes and different signal-to-noise ratios (SNRs).
[ { "version": "v1", "created": "Thu, 24 Nov 2022 12:48:49 GMT" } ]
2022-11-28T00:00:00
[ [ "Armandoust", "Fariba", "" ], [ "Tohidi", "Ehsan", "" ], [ "Kasparick", "Martin", "" ], [ "Wang", "Li", "" ], [ "Gokceoglu", "Ahmet Hasim", "" ], [ "Stanczak", "Slawomir", "" ] ]
new_dataset
0.97717
2211.13622
Per Erik Strandberg
Per Erik Strandberg
The Westermo test results data set
null
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
There is a growing body of knowledge in the computer science, software engineering, software testing, and software test automation disciplines. However, researchers face a challenge in evaluating their findings, innovations, and tools due to a lack of realistic data. This paper presents the Westermo test results data set: more than one million verdicts from testing of embedded systems, collected over more than five hundred consecutive days of nightly testing. The data also contain information on code changes in both the software under test and the test framework used for testing. This data set can support the research community in particular with respect to the regression test selection problem, flaky tests, test results visualization, etc.
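As one example of the research such a data set enables, a quick flakiness probe can flag tests that both pass and fail for the same code revision. The column names below are invented for illustration; they are not the data set's actual schema:

```python
import pandas as pd

# toy stand-in for the real verdict table
df = pd.DataFrame({
    "test":     ["t1", "t1", "t2", "t2"],
    "revision": ["r1", "r1", "r1", "r2"],
    "verdict":  ["pass", "fail", "pass", "pass"],
})

# a test with more than one distinct verdict at a single revision is flaky
per_rev = df.groupby(["test", "revision"])["verdict"].nunique()
flaky = sorted({test for (test, _), k in per_rev.items() if k > 1})
print(flaky)  # -> ['t1']
```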
[ { "version": "v1", "created": "Thu, 24 Nov 2022 14:16:56 GMT" } ]
2022-11-28T00:00:00
[ [ "Strandberg", "Per Erik", "" ] ]
new_dataset
0.999503
2211.13680
Federico Benzi
Federico Benzi, Cristian Mancus, Cristian Secchi
Whole-Body Control of a Mobile Manipulator for Passive Collaborative Transportation
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Human-robot collaborative tasks foresee interactions between humans and robots with various degrees of complexity. Specifically, for tasks which involve physical contact among the agents, challenges arise in the modelling and control of such interaction. In this paper we propose a control architecture capable of ensuring a flexible and robustly stable physical human-robot interaction, focusing on a collaborative transportation task. The architecture is deployed onto a mobile manipulator, modelled as a whole-body structure, which aids the operator during the transportation of an unwieldy load. Thanks to passivity techniques, the controller adapts its interaction parameters online while preserving robust stability for the overall system, thus experimentally validating the architecture.
[ { "version": "v1", "created": "Thu, 24 Nov 2022 15:53:34 GMT" } ]
2022-11-28T00:00:00
[ [ "Benzi", "Federico", "" ], [ "Mancus", "Cristian", "" ], [ "Secchi", "Cristian", "" ] ]
new_dataset
0.999322
2211.13776
Dojun Park
Dojun Park and Seohyun Park
German Phoneme Recognition with Text-to-Phoneme Data Augmentation
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
In this study, we examine the effect of adding the n most frequent phoneme bigrams to the basic vocabulary of a German phoneme recognition model, using a text-to-phoneme data augmentation strategy. Compared to the baseline model, the vowel30 and const20 models showed an increase in BLEU score of more than 1 point, while the total30 model showed a significant decrease of more than 20 points, demonstrating that phoneme bigrams can have a positive or negative effect on model performance. In addition, through error analysis we identified the types of errors that the models repeatedly made.
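The augmentation step amounts to counting phoneme bigrams in the training data and appending the n most frequent ones to the base vocabulary. A toy sketch follows; the corpus and n are made up, and the paper's vowel/consonant-specific selections are not reproduced:

```python
from collections import Counter

corpus = [["g", "u", "t", "ə", "n"], ["m", "ɔ", "r", "g", "ə", "n"]]

bigrams = Counter()
for seq in corpus:
    bigrams.update(zip(seq, seq[1:]))  # count adjacent phoneme pairs

n = 3
base_vocab = sorted({p for seq in corpus for p in seq})
augmented = base_vocab + ["".join(bg) for bg, _ in bigrams.most_common(n)]
print(augmented)
```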
[ { "version": "v1", "created": "Thu, 24 Nov 2022 19:32:49 GMT" } ]
2022-11-28T00:00:00
[ [ "Park", "Dojun", "" ], [ "Park", "Seohyun", "" ] ]
new_dataset
0.993497
2211.13812
Ali Sekhavati
Ali Sekhavati and Won-Sook Lee
Multi-Template Temporal Siamese Network for Long-Term Object Tracking
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Siamese networks are among the most popular visual object tracking methods thanks to their high speed and high accuracy, as long as the target is well identified. However, most Siamese-network-based trackers use the first frame as the ground truth of an object and fail when the target's appearance changes significantly in later frames. They also have difficulty distinguishing the target from similar objects in the frame. We propose two ideas to solve both problems. The first is a bag of dynamic templates, containing diverse, similar, and recent target features, which is continuously updated with diverse target appearances. The second is to let a network learn the path history and project a potential future target location in the next frame. This tracker achieves state-of-the-art performance on the long-term tracking dataset UAV20L, improving the success rate by a large margin of 15% (65.4 vs. 56.6) over the state-of-the-art method HiFT. The official Python code of this paper is publicly available.
[ { "version": "v1", "created": "Thu, 24 Nov 2022 22:07:33 GMT" } ]
2022-11-28T00:00:00
[ [ "Sekhavati", "Ali", "" ], [ "Lee", "Won-Sook", "" ] ]
new_dataset
0.958391
2211.13887
Yuxing Qiu
Yuxing Qiu, Feng Gao, Minchen Li, Govind Thattai, Yin Yang, Chenfanfu Jiang
TPA-Net: Generate A Dataset for Text to Physics-based Animation
null
null
null
null
cs.AI cs.CL cs.CV cs.GR eess.IV
http://creativecommons.org/licenses/by/4.0/
Recent breakthroughs in Vision-Language (V&L) joint research have achieved remarkable results in various text-driven tasks. High-quality Text-to-Video (T2V), a task long considered mission-impossible, was proven feasible with reasonably good results in recent works. However, the resulting videos often have undesired artifacts, largely because the system is purely data-driven and agnostic to physical laws. To tackle this issue and further push T2V towards high-level physical realism, we present an autonomous data generation technique and a dataset, which are intended to narrow the gap with a large amount of multi-modal, 3D Text-to-Video/Simulation (T2V/S) data. In the dataset, we provide high-resolution 3D physical simulations for both solids and fluids, along with textual descriptions of the physical phenomena. We take advantage of state-of-the-art physical simulation methods, (i) Incremental Potential Contact (IPC) and (ii) the Material Point Method (MPM), to simulate diverse scenarios, including elastic deformations, material fractures, collisions, turbulence, etc. Additionally, high-quality, multi-view rendering videos are supplied for the benefit of the T2V, Neural Radiance Fields (NeRF), and other communities. This work is the first step towards fully automated Text-to-Video/Simulation (T2V/S). Live examples and subsequent work are at https://sites.google.com/view/tpa-net.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 04:26:41 GMT" } ]
2022-11-28T00:00:00
[ [ "Qiu", "Yuxing", "" ], [ "Gao", "Feng", "" ], [ "Li", "Minchen", "" ], [ "Thattai", "Govind", "" ], [ "Yang", "Yin", "" ], [ "Jiang", "Chenfanfu", "" ] ]
new_dataset
0.999833
2211.13896
Xiangyu Xi
Xiangyu Xi, Jianwei Lv, Shuaipeng Liu, Wei Ye, Fan Yang and Guanglu Wan
MUSIED: A Benchmark for Event Detection from Multi-Source Heterogeneous Informal Texts
Accepted at EMNLP 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event detection (ED) identifies and classifies event triggers from unstructured texts, serving as a fundamental task for information extraction. Despite the remarkable progress achieved in the past several years, most research efforts focus on detecting events from formal texts (e.g., news articles, Wikipedia documents, financial announcements). Moreover, the texts in each dataset are either from a single source or multiple yet relatively homogeneous sources. With massive amounts of user-generated text accumulating on the Web and inside enterprises, identifying meaningful events in these informal texts, usually from multiple heterogeneous sources, has become a problem of significant practical value. As a pioneering exploration that expands event detection to the scenarios involving informal and heterogeneous texts, we propose a new large-scale Chinese event detection dataset based on user reviews, text conversations, and phone conversations in a leading e-commerce platform for food service. We carefully investigate the proposed dataset's textual informality and multi-source heterogeneity characteristics by inspecting data samples quantitatively and qualitatively. Extensive experiments with state-of-the-art event detection methods verify the unique challenges posed by these characteristics, indicating that multi-source informal event detection remains an open problem and requires further efforts. Our benchmark and code are released at \url{https://github.com/myeclipse/MUSIED}.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 05:05:29 GMT" } ]
2022-11-28T00:00:00
[ [ "Xi", "Xiangyu", "" ], [ "Lv", "Jianwei", "" ], [ "Liu", "Shuaipeng", "" ], [ "Ye", "Wei", "" ], [ "Yang", "Fan", "" ], [ "Wan", "Guanglu", "" ] ]
new_dataset
0.999836
2211.13925
Krishna Gopal Benerjee
Shibsankar Das, Krishna Gopal Benerjee and Adrish Banerjee
On DNA Codes Over the Non-Chain Ring $\mathbb{Z}_4+u\mathbb{Z}_4+u^2\mathbb{Z}_4$ with $u^3=1$
This paper has been presented in IEEE Information Theory Workshop (ITW) 2022, Mumbai, INDIA
null
null
null
cs.IT math.IT
http://creativecommons.org/licenses/by/4.0/
In this paper, we present a novel design strategy of DNA codes with length $3n$ over the non-chain ring $R=\mathbb{Z}_4+u\mathbb{Z}_4+u^2\mathbb{Z}_4$ with $64$ elements and $u^3=1$, where $n$ denotes the length of a code over $R$. We first study and analyze a distance conserving map defined over the ring $R$ into the length-$3$ DNA sequences. Then, we derive some conditions on the generator matrix of a linear code over $R$, which leads to a DNA code with reversible, reversible-complement, homopolymer $2$-run-length, and $\frac{w}{3n}$-GC-content constraints for integer $w$ ($0\leq w\leq 3n$). Finally, we propose a new construction of DNA codes using Reed-Muller type generator matrices. This allows us to obtain DNA codes with reversible, reversible-complement, homopolymer $2$-run-length, and $\frac{2}{3}$-GC-content constraints.
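The constraints named above are easy to state operationally. The toy check below uses a hand-picked four-word code of length 6 (i.e., n = 2) and the standard constraint definitions; it does not reproduce the paper's ring-theoretic construction or its distance-conserving map:

```python
COMP = {"A": "T", "T": "A", "G": "C", "C": "G"}

def reverse_complement(s):
    return "".join(COMP[c] for c in reversed(s))

def gc_content(s):
    return sum(c in "GC" for c in s) / len(s)

def max_run_length(s):
    best = run = 1
    for a, b in zip(s, s[1:]):
        run = run + 1 if a == b else 1
        best = max(best, run)
    return best

code = {"GACGCT", "TCGCAG", "AGCGTC", "CTGCGA"}  # toy code, length 3n = 6

for w in code:
    assert w[::-1] in code                    # reversible
    assert reverse_complement(w) in code      # reversible-complement
    assert abs(gc_content(w) - 2 / 3) < 1e-9  # 2/3-GC-content
    assert max_run_length(w) <= 2             # homopolymer 2-run-length
print("all four constraints hold")
```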
[ { "version": "v1", "created": "Fri, 25 Nov 2022 06:42:04 GMT" } ]
2022-11-28T00:00:00
[ [ "Das", "Shibsankar", "" ], [ "Benerjee", "Krishna Gopal", "" ], [ "Banerjee", "Adrish", "" ] ]
new_dataset
0.999675
2211.13930
Weinan He
Weinan He, Canming Huang, Zhanhao Xiao, Yongmei Liu
TRAC: A Textual Benchmark for Reasoning about Actions and Change
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Reasoning about actions and change (RAC) is essential to understand and interact with the ever-changing environment. Previous AI research has shown the importance of fundamental and indispensable knowledge of actions, i.e., preconditions and effects. However, traditional methods rely on logical formalization, which hinders practical applications. With recent transformer-based language models (LMs), reasoning over text is desirable and seemingly feasible, leading to the question of whether LMs can effectively and efficiently learn to solve RAC problems. We propose four essential RAC tasks as a comprehensive textual benchmark and generate problems in a way that minimizes the influence of other linguistic requirements (e.g., grounding) to focus on RAC. The resulting benchmark, TRAC, encompassing problems of various complexities, facilitates a more granular evaluation of LMs, precisely targeting the structural generalization ability much needed for RAC. Experiments with three high-performing transformers indicate that additional efforts are needed to tackle the challenges raised by TRAC.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 06:54:30 GMT" } ]
2022-11-28T00:00:00
[ [ "He", "Weinan", "" ], [ "Huang", "Canming", "" ], [ "Xiao", "Zhanhao", "" ], [ "Liu", "Yongmei", "" ] ]
new_dataset
0.991457
2211.13940
Jiayin Sun
Jiayin Sun, Hong Wang and Qiulei Dong
Spatial-Temporal Attention Network for Open-Set Fine-Grained Image Recognition
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Triggered by the success of transformers in various visual tasks, the spatial self-attention mechanism has recently attracted more and more attention in the computer vision community. However, we empirically found that a typical vision transformer with the spatial self-attention mechanism could not learn accurate attention maps for distinguishing different categories of fine-grained images. To address this problem, motivated by the temporal attention mechanism in brains, we propose a spatial-temporal attention network for learning fine-grained feature representations, called STAN, where the features learnt by implementing a sequence of spatial self-attention operations corresponding to multiple moments are aggregated progressively. The proposed STAN consists of four modules: a self-attention backbone module for learning a sequence of features with self-attention operations, a spatial feature self-organizing module for facilitating the model training, a spatial-temporal feature learning module for aggregating the re-organized features via a Long Short-Term Memory network, and a context-aware module that is implemented as the forget block of the spatial-temporal feature learning module for preserving/forgetting the long-term memory by utilizing contextual information. Then, we propose a STAN-based method for open-set fine-grained recognition by integrating the proposed STAN network with a linear classifier, called STAN-OSFGR. Extensive experimental results on 3 fine-grained datasets and 2 coarse-grained datasets demonstrate that the proposed STAN-OSFGR outperforms 9 state-of-the-art open-set recognition methods significantly in most cases.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 07:46:42 GMT" } ]
2022-11-28T00:00:00
[ [ "Sun", "Jiayin", "" ], [ "Wang", "Hong", "" ], [ "Dong", "Qiulei", "" ] ]
new_dataset
0.992625
2211.13990
Muhammad Azeem Akbar
Muhammad Azeem Akbar, Arif Ali Khan, Sajjad Mahmood, Saima Rafi
Quantum Software Engineering: A New Genre of Computing
null
null
null
null
cs.SE cs.PL
http://creativecommons.org/licenses/by/4.0/
Quantum computing (QC) is no longer only a scientific interest but is rapidly becoming an industrially available technology that can potentially tackle the limitations of classical computing. Over the last few years, major technology giants have invested in developing hardware and programming frameworks for quantum-specific applications. QC hardware technologies are gaining momentum; however, operationalizing them triggers the need for software-intensive methodologies, techniques, processes, tools, roles, and responsibilities for developing industry-centric quantum software applications. This paper presents a vision of the quantum software engineering (QSE) life cycle consisting of quantum requirements engineering, quantum software design, quantum software implementation, quantum software testing, and quantum software maintenance. The paper particularly calls for joint contributions from the software engineering research and industrial communities to present real-world solutions supporting the entire set of quantum software development activities. The proposed vision helps researchers and practitioners propose new processes, reference architectures, novel tools, and practices to leverage quantum computers and develop the emerging and next generations of quantum software.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 09:56:00 GMT" } ]
2022-11-28T00:00:00
[ [ "Akbar", "Muhammad Azeem", "" ], [ "Khan", "Arif Ali", "" ], [ "Mahmood", "Sajjad", "" ], [ "Rafi", "Saima", "" ] ]
new_dataset
0.998132
2211.14054
Nick Michiels
Steven Moonen and Bram Vanherle and Joris de Hoog and Taoufik Bourgana and Abdellatif Bey-Temsamani and Nick Michiels
CAD2Render: A Modular Toolkit for GPU-accelerated Photorealistic Synthetic Data Generation for the Manufacturing Industry
Accepted at the Workshop on Photorealistic Image and Environment Synthesis for Computer Vision (PIES-CV) at WACV23
null
null
null
cs.CV cs.GR cs.LG
http://creativecommons.org/licenses/by/4.0/
The use of computer vision for product and assembly quality control is becoming ubiquitous in the manufacturing industry. Lately, it is apparent that machine learning based solutions are outperforming classical computer vision algorithms in terms of performance and robustness. However, a main drawback is that they require sufficiently large, labeled training datasets, which are often unavailable or too tedious and time-consuming to acquire. This is especially true for low-volume and high-variance manufacturing. Fortunately, in this industry, CAD models of the manufactured or assembled products are available. This paper introduces CAD2Render, a GPU-accelerated synthetic data generator based on the Unity High Definition Render Pipeline (HDRP). CAD2Render is designed to add variations in a modular fashion, making highly customizable data generation possible, tailored to the needs of the industrial use case at hand. Although CAD2Render is specifically designed for manufacturing use cases, it can be used in other domains as well. We validate CAD2Render by demonstrating state-of-the-art performance in two industrially relevant setups. We show that the data generated by our approach can be used to train object detection and pose estimation models with high enough accuracy to direct a robot. The code for CAD2Render is available at https://github.com/EDM-Research/CAD2Render.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 12:17:35 GMT" } ]
2022-11-28T00:00:00
[ [ "Moonen", "Steven", "" ], [ "Vanherle", "Bram", "" ], [ "de Hoog", "Joris", "" ], [ "Bourgana", "Taoufik", "" ], [ "Bey-Temsamani", "Abdellatif", "" ], [ "Michiels", "Nick", "" ] ]
new_dataset
0.986923
2211.14073
Nathan Morsa
Nathan Morsa
EDGAR: Embedded Detection of Gunshots by AI in Real-time
19 pages, 4 figures, submitted to the 7th Workshop on Advanced Analytics and Learning on Temporal Data
null
null
null
cs.LG eess.SP
http://creativecommons.org/licenses/by/4.0/
Electronic shot counters allow armourers to perform preventive and predictive maintenance based on quantitative measurements, improving reliability, reducing the frequency of accidents, and reducing maintenance costs. To answer market pressure for both low lead time to market and increased customisation, we aim to solve the shot detection and shot counting problem in a generic way through machine learning. In this study, we describe a method for constructing a dataset with minimal labelling effort, requiring only the total number of shots fired in a time series. To our knowledge, this is the first study to propose a technique, based on learning from label proportions, that is able to exploit these weak labels to derive an instance-level classifier capable of solving the counting problem and the more general discrimination problem. We also show that this technique can be deployed on heavily constrained microcontrollers while still providing hard real-time (<100 ms) inference. We evaluate our technique against a state-of-the-art unsupervised algorithm and show a sizeable improvement, suggesting that the information in the weak labels is successfully leveraged. Finally, we evaluate our technique against human-generated state-of-the-art algorithms and show that it provides comparable performance and significantly outperforms them in some offline and real-world benchmarks.
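The core of learning from label proportions can be sketched in a few lines: only the total count per recording is observed, so training penalizes the gap between the summed per-window probabilities and that count. The data and model below are synthetic stand-ins assuming a bare logistic model; the paper's features, model, and loss are richer:

```python
import numpy as np

rng = np.random.default_rng(0)
T, D = 200, 4
X = rng.normal(size=(T, D))                     # one feature row per window
hidden = X @ np.array([2.0, -1.0, 0.5, 0.0]) > 1.5
count = hidden.sum()                            # the only supervision we get

w = np.zeros(D)
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w)))              # per-window "shot" probability
    err = p.sum() - count                       # bag-level residual
    w -= 1e-3 * err * ((p * (1 - p)) @ X)       # gradient of 0.5 * err**2
print("true count:", int(count), "predicted:", round(float(p.sum()), 1))
```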
[ { "version": "v1", "created": "Fri, 25 Nov 2022 12:51:19 GMT" } ]
2022-11-28T00:00:00
[ [ "Morsa", "Nathan", "" ] ]
new_dataset
0.9995
2211.14076
Wolfgang Steiner
L\'eo Poirier (ENS Lyon), Wolfgang Steiner (IRIF)
Factor-balanced $S$-adic languages
null
null
null
null
cs.FL math.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
A set of words, also called a language, is letter-balanced if the number of occurrences of each letter only depends on the length of the word, up to a constant. Similarly, a language is factor-balanced if the difference of the number of occurrences of any given factor in words of the same length is bounded. The most prominent example of a letter-balanced but not factor-balanced language is given by the Thue-Morse sequence. We establish connections between the two notions, in particular for languages given by substitutions and, more generally, by sequences of substitutions. We show that the two notions essentially coincide when the sequence of substitutions is proper. For the example of Thue-Morse-Sturmian languages, we give a full characterisation of factor-balancedness.
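In symbols, the two balancedness notions compared above read as follows, writing |u|_w for the number of occurrences of the factor w in the word u (the formalization is standard, stated here for convenience):

```latex
% letter-balanced: the bound below holds for every letter w = a;
% factor-balanced: it holds for every factor w.
\[
  \forall w \;\exists C_w \;\forall u, v \in L:\quad
  |u| = |v| \;\Longrightarrow\; \bigl|\, |u|_w - |v|_w \,\bigr| \le C_w .
\]
```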
[ { "version": "v1", "created": "Fri, 25 Nov 2022 12:53:06 GMT" } ]
2022-11-28T00:00:00
[ [ "Poirier", "Léo", "", "ENS Lyon" ], [ "Steiner", "Wolfgang", "", "IRIF" ] ]
new_dataset
0.977915
2211.14125
Thomas Jantos
Thomas Jantos, Mohamed Amin Hamdad, Wolfgang Granig, Stephan Weiss, Jan Steinbrener
PoET: Pose Estimation Transformer for Single-View, Multi-Object 6D Pose Estimation
Supplementary material available: https://www.aau.at/wp-content/uploads/2022/09/jantos_poet.pdf , Code available: https://github.com/aau-cns/poet
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurate 6D object pose estimation is an important task for a variety of robotic applications such as grasping or localization. It is a challenging task due to object symmetries, clutter and occlusion, but it becomes more challenging when additional information, such as depth and 3D models, is not provided. We present a transformer-based approach that takes an RGB image as input and predicts a 6D pose for each object in the image. Besides the image, our network does not require any additional information such as depth maps or 3D object models. First, the image is passed through an object detector to generate feature maps and to detect objects. Then, the feature maps are fed into a transformer with the detected bounding boxes as additional information. Afterwards, the output object queries are processed by a separate translation and rotation head. We achieve state-of-the-art results for RGB-only approaches on the challenging YCB-V dataset. We illustrate the suitability of the resulting model as pose sensor for a 6-DoF state estimation task. Code is available at https://github.com/aau-cns/poet.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 14:07:14 GMT" } ]
2022-11-28T00:00:00
[ [ "Jantos", "Thomas", "" ], [ "Hamdad", "Mohamed Amin", "" ], [ "Granig", "Wolfgang", "" ], [ "Weiss", "Stephan", "" ], [ "Steinbrener", "Jan", "" ] ]
new_dataset
0.974046
2211.14138
Ferenc Fejes
Ferenc Fejes, P\'eter Antal, M\'arton Kerekes
The TSN Building Blocks in Linux
Draft of the paper submitted to Netdev 0x16 conference. Link to the submission: https://netdevconf.info/0x16/session.html?The-TSN-building-blocks-in-Linux
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Various application areas, e.g., industrial automation, professional audio-video, automotive in-vehicle, aerospace on-board, and mobile fronthaul networks, require deterministic communication: loss-less forwarding with bounded maximum latency. There is a lot of ongoing standardization activity in different organizations to provide vendor-agnostic building blocks for Time-Sensitive Networking (TSN), which is intended as the universal solution for deterministic forwarding in OSI Layer-2 networks. Furthermore, these standards are also being implemented in Linux. Some of them require software changes only, but others need hardware support. In this paper, we give an overview of the implementation of the main TSN standards in the mainline Linux kernel. Furthermore, we provide measurement results on key functionality in support of TSN, e.g., scheduled transmission and Linux bridging characteristics.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 14:30:35 GMT" } ]
2022-11-28T00:00:00
[ [ "Fejes", "Ferenc", "" ], [ "Antal", "Péter", "" ], [ "Kerekes", "Márton", "" ] ]
new_dataset
0.997882
2211.14163
Xiong Lu
Xiong Lu, Yuxing Yan, Beibei Qi, Huang Qian, Junbin Sun, Aaron Quigley
Contactless Haptic Display Through Magnetic Field Control
null
in IEEE Transactions on Haptics, vol. 15, no. 2, pp. 328-338, 1 April-June 2022
10.1109/TOH.2022.3151673
null
cs.HC cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Haptic rendering enables people to touch, perceive, and manipulate virtual objects in a virtual environment. Using six cascaded identical hollow disk electromagnets and a small permanent magnet attached to an operator's finger, this paper proposes and develops an untethered haptic interface based on magnetic field control. The concentric hole inside the six cascaded electromagnets provides the workspace, where the 3D position of the permanent magnet is tracked with a Microsoft Kinect sensor. The driving currents of the six cascaded electromagnets are calculated in real-time to generate the desired magnetic force. Offline data from an FEA (finite element analysis) based simulation determine the relationship between the magnetic force, the driving currents, and the position of the permanent magnet. A set of experiments, including a virtual object recognition experiment, a virtual surface identification experiment, and a user perception evaluation experiment, was conducted to demonstrate the proposed system, with Microsoft HoloLens holographic glasses used for visual rendering. The proposed magnetic haptic display yields an untethered, non-contact interface for natural haptic rendering applications, overcoming the constraints of mechanical linkages in tool-based traditional haptic devices.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 15:10:22 GMT" } ]
2022-11-28T00:00:00
[ [ "Lu", "Xiong", "" ], [ "Yan", "Yuxing", "" ], [ "Qi", "Beibei", "" ], [ "Qian", "Huang", "" ], [ "Sun", "Junbin", "" ], [ "Quigley", "Aaron", "" ] ]
new_dataset
0.998376
2211.14196
Douglas Stebila
Jason Goertzen and Douglas Stebila
Post-Quantum Signatures in DNSSEC via Request-Based Fragmentation
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
The Domain Name System Security Extensions (DNSSEC) provide authentication of DNS responses using digital signatures. DNS operates primarily over UDP, which leads to several constraints: notably, packets should be at most 1232 bytes long to avoid problems during transmission. Larger DNS responses either need to be fragmented into several UDP responses or the request would need to be repeated over TCP, neither of which is sufficiently reliable in today's DNS ecosystem. While RSA or elliptic curve digital signatures are sufficiently small to avoid this problem, even for DNSSEC packets containing both a public key and a signature, this problem is unavoidable when considering the larger sizes of post-quantum schemes. We propose ARRF, a method of fragmenting DNS resource records at the application layer (rather than the transport layer) that is request-based, meaning the initial response contains a truncated fragment and then the requester sends follow-up requests for the remaining fragments. Using request-based fragmentation avoids problems identified for several previously proposed (and rejected) application-level DNS fragmentation techniques. We implement our approach and evaluate its performance in a simulated network when used for the three post-quantum digital signature schemes selected by NIST for standardization (Falcon, Dilithium, and SPHINCS+) at the 128-bit security level. Our experiments show that our request-based fragmentation approach provides substantially lower resolution times compared to standard DNS over UDP with TCP fallback, for all the tested post-quantum algorithms, and with less data transmitted in the case of both Falcon and Dilithium. Furthermore, our request-based fragmentation design can be implemented relatively easily: our implementation is in fact a small daemon that can sit in front of a DNS name server or resolver to fragment/reassemble transparently.
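The request-based scheme can be illustrated with a toy client/server exchange: the first answer carries fragment 0 plus the fragment count, and the requester fetches the rest explicitly. Message layout and field names here are illustrative only, not the ARRF wire format:

```python
MAX_UDP = 1232  # conservative DNS-over-UDP payload limit noted above

def fragment(record: bytes, size: int = MAX_UDP):
    return [record[i:i + size] for i in range(0, len(record), size)]

class Server:
    def __init__(self, record: bytes):
        self.frags = fragment(record)

    def query(self, idx: int = 0):
        # every answer fits in a single UDP datagram
        return {"idx": idx, "total": len(self.frags), "data": self.frags[idx]}

def resolve(server: "Server") -> bytes:
    first = server.query(0)
    parts = [first["data"]]
    for i in range(1, first["total"]):  # follow-up requests, one per fragment
        parts.append(server.query(i)["data"])
    return b"".join(parts)

big_rrset = bytes(5000)  # e.g. a post-quantum signature larger than one datagram
assert resolve(Server(big_rrset)) == big_rrset
```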
[ { "version": "v1", "created": "Fri, 25 Nov 2022 15:54:50 GMT" } ]
2022-11-28T00:00:00
[ [ "Goertzen", "Jason", "" ], [ "Stebila", "Douglas", "" ] ]
new_dataset
0.995038
2211.14233
\'Etienne Andr\'e
\'Etienne Andr\'e, Shapagat Bolat, Engel Lefaucheux, Dylan Marinho
strategFTO: Untimed control for timed opacity
This work is partially supported by the ANR-NRF French-Singaporean research program ProMiS (ANR-19-CE25-0015 / 2019 ANR NRF 0092) and the ANR research program BisoUS. Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several universities as well as other organizations
Proceedings of the 8th International Workshop on Formal Techniques for Safety-Critical Systems (FTSCS 2022)
10.1145/3563822.3568013
null
cs.CR cs.FL cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce a prototype tool strategFTO addressing the verification of a security property in critical software. We consider a recent definition of timed opacity where an attacker aims to deduce some secret while having access only to the total execution time. The system, here modeled by timed automata, is deemed opaque if for any execution time, there are either no corresponding runs, or both public and private corresponding runs. We focus on the untimed control problem: exhibiting a controller, i.e., a set of allowed actions, such that the system restricted to those actions is fully timed-opaque. We first show that this problem is not more complex than the full timed opacity problem, and then we propose an algorithm, implemented and evaluated in practice.
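The opacity condition stated above can be written compactly. Writing Dur_priv(A) and Dur_pub(A) for the sets of total execution times of runs that do and do not visit the private locations (notation ours), full timed opacity requires:

```latex
\[
  \mathit{Dur}_{\mathrm{priv}}(\mathcal{A}) \;=\; \mathit{Dur}_{\mathrm{pub}}(\mathcal{A}),
\]
% i.e., every observable duration is produced by both a private and a
% public run, or by no run at all. The untimed control problem then asks
% for a subset of actions under which the restricted system satisfies this.
```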
[ { "version": "v1", "created": "Fri, 25 Nov 2022 16:47:45 GMT" } ]
2022-11-28T00:00:00
[ [ "André", "Étienne", "" ], [ "Bolat", "Shapagat", "" ], [ "Lefaucheux", "Engel", "" ], [ "Marinho", "Dylan", "" ] ]
new_dataset
0.999052
2211.14234
Christopher Banks
Christopher J. Banks, Jessica Enright, Sibylle Mohr, Rowland R. Kao
Bovine Tuberculosis in Britain: identifying signatures of polarisation and controversy on Twitter
null
null
null
null
cs.SI
http://creativecommons.org/licenses/by/4.0/
Approaches to disease control are influenced by and reflected in public opinion; the two are intrinsically entwined. Bovine tuberculosis (bTB) in British cattle and badgers is one example where there is a high degree of polarisation in opinion. Bovine viral diarrhoea (BVD), on the other hand, does not attract the same controversy. In this paper we examine how language subjectivity on Twitter differs between the discourses surrounding bTB and BVD, using a combination of network analysis and language and sentiment analysis. The data used for this study were collected from the Twitter public API over a two-year period. We investigated the network structure, language content, and user profiles of tweets featuring both diseases. While analysing the network structure showed little difference between the two disease topics, elements of the structure allowed us to better investigate the language structure and profiles of users. We found distinct differences between the language and sentiment used in tweets about each disease, and in the profiles of the users doing the tweeting. We hope that this will guide further investigation and potential avenues for surveillance or the control of misinformation.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 16:53:33 GMT" } ]
2022-11-28T00:00:00
[ [ "Banks", "Christopher J.", "" ], [ "Enright", "Jessica", "" ], [ "Mohr", "Sibylle", "" ], [ "Kao", "Rowland R.", "" ] ]
new_dataset
0.998318
2211.14259
\'Etienne Bamas
\'Etienne Bamas, Lars Rohwedder
Better Trees for Santa Claus
Abstract abridged to meet arXiv requirements
null
null
null
cs.DS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We revisit the problem max-min degree arborescence, which was introduced by Bateni et al. [STOC'09] as a central special case of the general Santa Claus problem, which constitutes a notorious open question in approximation algorithms. In the former problem we are given a directed graph with sources and sinks and our goal is to find vertex disjoint arborescences rooted in the sources such that at each non-sink vertex of an arborescence the out-degree is at least $k$, where $k$ is to be maximized. This problem is of particular interest, since it appears to capture much of the difficulty of the Santa Claus problem: (1) like in the Santa Claus problem the configuration LP has a large integrality gap in this case and (2) previous progress by Bateni et al. was quickly generalized to the Santa Claus problem (Chakrabarty et al. [FOCS'09]). These results remain the state-of-the-art both for the Santa Claus problem and for max-min degree arborescence and they yield a polylogarithmic approximation in quasi-polynomial time. We present an exponential improvement to this, a $\mathrm{poly}(\log\log n)$-approximation in quasi-polynomial time for the max-min degree arborescence problem. To the best of our knowledge, this is the first example of breaking the logarithmic barrier for a special case of the Santa Claus problem, where the configuration LP cannot be utilized.
[ { "version": "v1", "created": "Fri, 25 Nov 2022 17:38:15 GMT" } ]
2022-11-28T00:00:00
[ [ "Bamas", "Étienne", "" ], [ "Rohwedder", "Lars", "" ] ]
new_dataset
0.958924
2211.14261
Nishanth Rao
Nishanth Rao, Suresh Sundaram, Pushpak Jagtap
Temporal Waypoint Navigation of Multi-UAV Payload System using Barrier Functions
Submitted to ECC 2023
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
Aerial package transportation often requires complex spatial and temporal specifications to be satisfied in order to ensure safe and timely delivery from one point to another. It is usually efficient to transport versatile payloads using multiple UAVs that can work collaboratively to achieve the desired task. The complex temporal specifications can be handled coherently by applying Signal Temporal Logic (STL) to dynamical systems. This paper addresses the problem of waypoint navigation of a multi-UAV payload system under temporal specifications using higher-order time-varying control barrier functions (HOCBFs). The complex nonlinear system of relative degree two is transformed into a simple linear system using input-output feedback linearization. An optimization-based control law is then derived to achieve the temporal waypoint navigation of the payload. The controller's efficacy and real-time implementability are demonstrated by simulating a package delivery scenario inside a high-fidelity Gazebo simulation environment.
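For a relative-degree-two output, the higher-order CBF machinery referenced above stacks two class-K conditions. The standard HOCBF construction (notation ours, not the paper's specific controller) is:

```latex
\begin{align*}
  \psi_0(x) &= h(x), \\
  \psi_1(x) &= \dot{\psi}_0(x) + \alpha_1\bigl(\psi_0(x)\bigr), \\
  \text{enforce: } & \dot{\psi}_1(x) + \alpha_2\bigl(\psi_1(x)\bigr) \ge 0,
\end{align*}
% with class-K functions alpha_1, alpha_2; an optimization-based control
% law then selects, at each instant, an input satisfying this inequality.
```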
[ { "version": "v1", "created": "Fri, 25 Nov 2022 17:44:53 GMT" } ]
2022-11-28T00:00:00
[ [ "Rao", "Nishanth", "" ], [ "Sundaram", "Suresh", "" ], [ "Jagtap", "Pushpak", "" ] ]
new_dataset
0.983468
2102.02465
Lizhi Sun
Lizhi Sun, Shuocheng Wang, Hao Wu, Yuhang Gong, Fengyuan Xu, Yunxin Liu, Hao Han, Sheng Zhong
LEAP: TrustZone Based Developer-Friendly TEE for Intelligent Mobile Apps
Accepted by IEEE Transactions on Mobile Computing
IEEE Trans. Mobile Comput.(2022)1-18
10.1109/TMC.2022.3207745
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
ARM TrustZone is widely deployed on commercial off-the-shelf mobile devices for secure execution. However, many Apps cannot enjoy this feature because it imposes many constraints on App developers. Previous works have proposed building a secure execution environment for developers on top of TrustZone. Unfortunately, these works are still not fully-fledged solutions for mobile Apps, especially for emerging intelligent Apps. To this end, we propose LEAP, a lightweight, developer-friendly TEE solution for mobile Apps. LEAP enables isolated code to execute in parallel and access peripherals (e.g., mobile GPUs) with ease, flexibly manages system resources under different workloads, and offers an auto DevOps tool to help developers prepare the code running on it. We implement the LEAP prototype on an off-the-shelf ARM platform and conduct extensive experiments on it. The experimental results show that Apps can be adapted to run with LEAP easily and efficiently. Compared to the state-of-the-art work along this research line, LEAP can achieve an average 3.57x speedup in supporting intelligent Apps using mobile GPU acceleration.
[ { "version": "v1", "created": "Thu, 4 Feb 2021 07:49:36 GMT" }, { "version": "v2", "created": "Tue, 20 Sep 2022 13:43:16 GMT" }, { "version": "v3", "created": "Wed, 23 Nov 2022 05:31:46 GMT" } ]
2022-11-24T00:00:00
[ [ "Sun", "Lizhi", "" ], [ "Wang", "Shuocheng", "" ], [ "Wu", "Hao", "" ], [ "Gong", "Yuhang", "" ], [ "Xu", "Fengyuan", "" ], [ "Liu", "Yunxin", "" ], [ "Han", "Hao", "" ], [ "Zhong", "Sheng", "" ] ]
new_dataset
0.999694
2201.11433
Arda Goknil
Ferhat Erata, Arda Goknil, Eren Y{\i}ld{\i}z, Kas{\i}m Sinan Y{\i}ld{\i}r{\i}m, Ruzica Piskac, Jakub Szefer, and G\"ok\c{c}in Sezgin
ETAP: Energy-aware Timing Analysis of Intermittent Programs
Corrected typos in the previous submission
null
10.1145/3563216
null
cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Energy harvesting battery-free embedded devices rely only on ambient energy harvesting that enables stand-alone and sustainable IoT applications. These devices execute programs when the harvested ambient energy in their energy reservoir is sufficient to operate and stop execution abruptly (and start charging) otherwise. These intermittent programs have varying timing behavior under different energy conditions, hardware configurations, and program structures. This paper presents Energy-aware Timing Analysis of intermittent Programs (ETAP), a probabilistic symbolic execution approach that analyzes the timing and energy behavior of intermittent programs at compile time. ETAP symbolically executes the given program while taking time and energy cost models for ambient energy and dynamic energy consumption into account. We evaluated ETAP on several intermittent programs and compared the compile-time analysis results with executions on real hardware. The results show that ETAP's normalized prediction accuracy is 99.5%, and it speeds up the timing analysis by at least two orders of magnitude compared to manual testing.
[ { "version": "v1", "created": "Thu, 27 Jan 2022 10:40:18 GMT" }, { "version": "v2", "created": "Thu, 3 Feb 2022 20:15:28 GMT" } ]
2022-11-24T00:00:00
[ [ "Erata", "Ferhat", "" ], [ "Goknil", "Arda", "" ], [ "Yıldız", "Eren", "" ], [ "Yıldırım", "Kasım Sinan", "" ], [ "Piskac", "Ruzica", "" ], [ "Szefer", "Jakub", "" ], [ "Sezgin", "Gökçin", "" ] ]
new_dataset
0.998524
2203.00314
Ziwei Ji
Ziwei Ji, Yan Xu, I-Tsun Cheng, Samuel Cahyawijaya, Rita Frieske, Etsuko Ishii, Min Zeng, Andrea Madotto, Pascale Fung
VScript: Controllable Script Generation with Visual Presentation
null
AACL Demo (2022)
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
To offer a customized script tool and inspire professional scriptwriters, we present VScript, a controllable pipeline that generates complete scripts, including dialogues and scene descriptions, and presents them visually using video retrieval. Through an interactive interface, our system allows users to select genres and input starting words that control the theme and development of the generated script. We adopt a hierarchical structure, which first generates the plot, then the script and its visual presentation. We also introduce a novel approach to plot-guided dialogue generation by treating it as inverse dialogue summarization. Experimental results show that our approach outperforms the baselines on both automatic and human evaluations, especially in genre control.
[ { "version": "v1", "created": "Tue, 1 Mar 2022 09:43:02 GMT" }, { "version": "v2", "created": "Thu, 13 Oct 2022 05:34:49 GMT" } ]
2022-11-24T00:00:00
[ [ "Ji", "Ziwei", "" ], [ "Xu", "Yan", "" ], [ "Cheng", "I-Tsun", "" ], [ "Cahyawijaya", "Samuel", "" ], [ "Frieske", "Rita", "" ], [ "Ishii", "Etsuko", "" ], [ "Zeng", "Min", "" ], [ "Madotto", "Andrea", "" ], [ "Fung", "Pascale", "" ] ]
new_dataset
0.970679
2205.13284
Miguel \'A. Gonz\'alez-Santamarta
Miguel \'Angel Gonz\'alez-Santamarta, Francisco Javier Rodr\'iguez-Lera, Camino Fern\'andez Llamas, Francisco Mart\'in Rico, and Vicente Matell\'an Olivera
YASMIN: Yet Another State MachINe library for ROS 2
4 pages, 2 figures, ROSCon FR 2022
null
10.1007/978-3-031-21062-4_43
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
State machines are a common mechanism for defining robot behaviors in terms of identifiable stages. Several libraries ease the implementation of state machines in ROS 1, such as SMACH or SMACC, but there are fewer alternatives for ROS 2. YASMIN is yet another library, specifically designed for ROS 2, that eases the design of robotic behaviors using state machines. It is available in C++ and Python, provides default states to speed up development, and includes a web viewer for monitoring the execution of the system and helping with debugging.
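To show the pattern such libraries implement without guessing at YASMIN's exact API, here is a plain-Python state machine in the same spirit (all names are ours): states return outcome strings, and a transition table maps each (state, outcome) pair to the next state:

```python
class State:
    def execute(self, blackboard: dict) -> str:
        raise NotImplementedError

class Approach(State):
    def execute(self, bb):
        bb["distance"] = bb.get("distance", 3) - 1
        return "arrived" if bb["distance"] <= 0 else "moving"

class Grasp(State):
    def execute(self, bb):
        return "succeeded"

def run(states, transitions, start, bb):
    name = start
    while name not in ("succeeded", "aborted"):  # terminal outcomes
        outcome = states[name].execute(bb)
        name = transitions[(name, outcome)]
    return name

states = {"APPROACH": Approach(), "GRASP": Grasp()}
transitions = {("APPROACH", "moving"):    "APPROACH",
               ("APPROACH", "arrived"):   "GRASP",
               ("GRASP",    "succeeded"): "succeeded"}
print(run(states, transitions, "APPROACH", {}))  # -> succeeded
```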
[ { "version": "v1", "created": "Thu, 26 May 2022 11:43:02 GMT" } ]
2022-11-24T00:00:00
[ [ "González-Santamarta", "Miguel Ángel", "" ], [ "Rodríguez-Lera", "Francisco Javier", "" ], [ "Llamas", "Camino Fernández", "" ], [ "Rico", "Francisco Martín", "" ], [ "Olivera", "Vicente Matellán", "" ] ]
new_dataset
0.999729
2205.15712
Anna Wr\'oblewska
Micha{\l} Mo\.zd\.zonek, Anna Wr\'oblewska, Sergiy Tkachuk, Szymon {\L}ukasik
Multilingual Transformers for Product Matching -- Experiments and a New Benchmark in Polish
11 pages, 5 figures
revised version: 2022 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) 2022
10.1109/fuzz-ieee55066.2022.9882843
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Product matching is the task of matching identical products across different data sources. It typically employs available product features which, apart from being multimodal, i.e., comprising various data types, may be non-homogeneous and incomplete. This paper shows that pre-trained, multilingual Transformer models, after fine-tuning, are suitable for solving the product matching problem using textual features in both English and Polish. We tested multilingual mBERT and XLM-RoBERTa models in English on Web Data Commons, a training dataset and gold standard for large-scale product matching. The obtained results show that these models perform similarly to the latest solutions tested on this set, and in some cases the results were even better. Additionally, for research purposes we prepared a new dataset entirely in Polish, based on offers in selected categories obtained from several online stores. It is the first open dataset for product matching tasks in Polish and allows comparing the effectiveness of pre-trained models. We also report the baseline results obtained by the fine-tuned mBERT and XLM-RoBERTa models on the Polish datasets.
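Pair classification with a multilingual Transformer, as described, can be set up with the Hugging Face transformers library roughly as follows. The offer texts are made up, and the classification head is randomly initialized here, so the score stays near chance until fine-tuning:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "xlm-roberta-base"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name, num_labels=2)

offer_a = "Smartfon XYZ 128GB czarny"      # Polish offer title
offer_b = "XYZ smartphone, 128 GB, black"  # English offer title

# encode the two offers as a single text pair
batch = tok(offer_a, offer_b, truncation=True, return_tensors="pt")
with torch.no_grad():
    logits = model(**batch).logits
print(f"P(match) = {torch.softmax(logits, dim=-1)[0, 1].item():.2f}")
```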
[ { "version": "v1", "created": "Tue, 31 May 2022 12:00:05 GMT" }, { "version": "v2", "created": "Wed, 1 Jun 2022 07:59:45 GMT" } ]
2022-11-24T00:00:00
[ [ "Możdżonek", "Michał", "" ], [ "Wróblewska", "Anna", "" ], [ "Tkachuk", "Sergiy", "" ], [ "Łukasik", "Szymon", "" ] ]
new_dataset
0.996982
2207.10397
Bei Chen
Bei Chen, Fengji Zhang, Anh Nguyen, Daoguang Zan, Zeqi Lin, Jian-Guang Lou, Weizhu Chen
CodeT: Code Generation with Generated Tests
null
null
null
null
cs.CL cs.AI cs.PL cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The task of generating code solutions for a given programming problem can benefit from the use of pre-trained language models such as Codex, which can produce multiple diverse samples. However, a major challenge for this task is to select the most appropriate solution from the multiple samples generated by the pre-trained language models. A natural way to evaluate the quality and correctness of a code solution is to run it against a set of test cases, but the manual creation of such test cases is often costly and time-consuming. In this paper, we propose a novel method, CodeT, that leverages the same pre-trained language models to automatically generate test cases for the code samples, thus reducing the human effort and increasing the coverage of the test scenarios. CodeT then executes the code samples using the generated test cases, and performs a dual execution agreement, which considers both the consistency of the outputs against the generated test cases and the agreement of the outputs with other code samples. We conduct comprehensive experiments on four benchmarks, HumanEval, MBPP, APPS and CodeContests, using five different pre-trained language models with varying sizes and capabilities. Our results show that CodeT can significantly improve the performance of code solution selection over previous methods, achieving remarkable and consistent gains across different models and benchmarks. For instance, CodeT improves the pass@1 metric on HumanEval to 65.8%, which represents an absolute improvement of 18.8% over the code-davinci-002 model, and an absolute improvement of more than 20% over the previous state-of-the-art results.
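The dual execution agreement can be mimicked in a few lines: run every sample against every generated test, group samples by the exact set of tests they pass, and score each group by (#samples × #tests). This mirrors the description above, not the authors' implementation:

```python
from collections import defaultdict

def passes(code: str, test: str) -> bool:
    env = {}
    try:
        exec(code, env)   # define the candidate function
        exec(test, env)   # the test is an assert that uses it
        return True
    except Exception:
        return False

def best_samples(samples, tests):
    groups = defaultdict(list)  # frozenset(passed tests) -> sample indices
    for i, code in enumerate(samples):
        groups[frozenset(t for t in tests if passes(code, t))].append(i)
    # dual agreement: many samples agreeing on many tests score highest
    return max(((len(idx) * len(ts), idx) for ts, idx in groups.items()))[1]

samples = [
    "def add(a, b):\n    return a + b",
    "def add(a, b):\n    return a - b",  # buggy sample
    "def add(a, b):\n    return b + a",
]
tests = ["assert add(1, 2) == 3", "assert add(0, 5) == 5"]
print(best_samples(samples, tests))  # -> [0, 2], the consensus group
```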
[ { "version": "v1", "created": "Thu, 21 Jul 2022 10:18:37 GMT" }, { "version": "v2", "created": "Wed, 23 Nov 2022 07:42:10 GMT" } ]
2022-11-24T00:00:00
[ [ "Chen", "Bei", "" ], [ "Zhang", "Fengji", "" ], [ "Nguyen", "Anh", "" ], [ "Zan", "Daoguang", "" ], [ "Lin", "Zeqi", "" ], [ "Lou", "Jian-Guang", "" ], [ "Chen", "Weizhu", "" ] ]
new_dataset
0.994424
2208.08846
Sebastian Neef
Sebastian Neef and Nils Wisiol
Oh SSH-it, what's my fingerprint? A Large-Scale Analysis of SSH Host Key Fingerprint Verification Records in the DNS
Preprint; submitted to CANS 2022; accepted at CANS 2022 and published in Springer LNCS vol 13641
In: Beresford, A.R., Patra, A., Bellini, E. (eds) Cryptology and Network Security. CANS 2022. Lecture Notes in Computer Science, vol 13641. Springer, Cham
10.1007/978-3-031-20974-1_4
null
cs.CR cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The SSH protocol is commonly used to access remote systems on the Internet, as it provides an encrypted and authenticated channel for communication. If upon establishing a new connection, the presented server key is unknown to the client, the user is asked to verify the key fingerprint manually, which is prone to errors and often blindly trusted. The SSH standard describes an alternative to such manual key verification: using the Domain Name System (DNS) to publish the server key information in SSHFP records. In this paper, we conduct a large-scale Internet study to measure the prevalence of SSHFP records among DNS domain names. We scan the Tranco 1M list and over 500 million names from the certificate transparency log over the course of 26 days. The results show that in two studied populations, about 1 in 10,000 domains has SSHFP records, with more than half of them deployed without using DNSSEC, drastically reducing security benefits.
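For context, an SSHFP record (RFC 4255) stores an algorithm number, a hash type, and the digest of the host key blob, so verification reduces to one hash comparison. The key below is a truncated placeholder, making this a shape-of-the-computation sketch rather than a verifiable record:

```python
import base64
import hashlib

def sshfp_digest(openssh_pubkey_line: str) -> str:
    # "ssh-ed25519 AAAA... comment" -> SHA-256 over the decoded key blob
    blob = base64.b64decode(openssh_pubkey_line.split()[1])
    return hashlib.sha256(blob).hexdigest()

host_key = "ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB8z placeholder"  # truncated
print("SSHFP 4 2", sshfp_digest(host_key))  # 4 = Ed25519, 2 = SHA-256
```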
[ { "version": "v1", "created": "Thu, 18 Aug 2022 14:29:47 GMT" }, { "version": "v2", "created": "Wed, 23 Nov 2022 08:40:41 GMT" } ]
2022-11-24T00:00:00
[ [ "Neef", "Sebastian", "" ], [ "Wisiol", "Nils", "" ] ]
new_dataset
0.997348
2210.15787
Ivan Stanojevi\'c
Ivan Stanojevi\'c, Vojin \v{S}enk
Convolutional Codes with Optimum Bidirectional Distance Profile
null
null
null
null
cs.IT math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper we present tables of convolutional codes with an optimum bidirectional distance profile (OBDP), defined as the minimum of the distance profiles of the code and its corresponding "reverse" code. Such codes minimize the average complexity of bidirectional sequential decoding algorithms. The computer search is accelerated by the facts that optimum distance profile (ODP) codes of larger memory must have ODP codes of smaller memory as their "prefixes", and that OBDP codes can be obtained by "concatenating" ODP and reverse ODP codes of smaller memory.
[ { "version": "v1", "created": "Thu, 27 Oct 2022 22:11:45 GMT" }, { "version": "v2", "created": "Mon, 31 Oct 2022 21:35:48 GMT" }, { "version": "v3", "created": "Tue, 22 Nov 2022 23:39:23 GMT" } ]
2022-11-24T00:00:00
[ [ "Stanojević", "Ivan", "" ], [ "Šenk", "Vojin", "" ] ]
new_dataset
0.950269
2211.11294
Yordan Raykov
Kasper Claes, Valentina Ticcinelli, Reham Badawy, Yordan P. Raykov, Luc J.W. Evers, Max A. Little
TSDF: A simple yet comprehensive, unified data storage and exchange format standard for digital biosensor data in health applications
null
null
null
null
cs.DB
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Digital sensors are increasingly being used to monitor the change over time of physiological processes in biological health and disease, often using wearable devices. This generates very large amounts of digital sensor data, for which a consensus on a common storage, exchange, and archival data format standard has yet to be reached. To address this gap, we propose Time Series Data Format (TSDF): a unified, standardized format for storing all types of physiological sensor data, across diverse disease areas. We pose a series of format design criteria and review in detail current storage and exchange formats. When judged against these criteria, we find these current formats lacking, and propose a very simple, intuitive standard for both numerical sensor data and metadata, based on raw binary data and JSON-format text files, for sensor measurements/timestamps and metadata, respectively. By focusing on the common characteristics of diverse biosensor data, we define a set of necessary and sufficient metadata fields for storing, processing, exchanging, archiving, and reliably interpreting multi-channel biological time series data. Our aim is for this standardized format to increase the interpretability and exchangeability of data, thereby contributing to scientific reproducibility in studies where digital biosensor data forms a key evidence base.
[ { "version": "v1", "created": "Mon, 21 Nov 2022 09:36:22 GMT" }, { "version": "v2", "created": "Tue, 22 Nov 2022 23:18:22 GMT" } ]
2022-11-24T00:00:00
[ [ "Claes", "Kasper", "" ], [ "Ticcinelli", "Valentina", "" ], [ "Badawy", "Reham", "" ], [ "Raykov", "Yordan P.", "" ], [ "Evers", "Luc J. W.", "" ], [ "Little", "Max A.", "" ] ]
new_dataset
0.9998
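The storage pattern TSDF argues for, raw binary samples next to a small human-readable JSON metadata file, is simple to illustrate. In the sketch below the metadata keys are illustrative guesses, not the normative TSDF field set:

```python
# Write a binary sample file plus the JSON metadata needed to interpret it.
import json
import numpy as np

samples = np.random.randn(1000, 3).astype(np.float32)   # e.g. 3-axis accelerometer
samples.tofile("acc_samples.bin")

metadata = {
    "subject_id": "S001",
    "device": "wrist-worn IMU",
    "channels": ["acc_x", "acc_y", "acc_z"],
    "data_type": "float32",
    "endianness": "little",
    "rows": int(samples.shape[0]),
    "sampling_rate_hz": 100,
    "file_name": "acc_samples.bin",
}
with open("acc_samples_meta.json", "w") as f:
    json.dump(metadata, f, indent=2)

# Reading back needs nothing but the metadata:
meta = json.load(open("acc_samples_meta.json"))
loaded = np.fromfile(meta["file_name"], dtype=meta["data_type"]).reshape(meta["rows"], -1)
```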
2211.12224
Igor Donevski
Igor Donevski, Marco Virgili, Nithin Babu, Jimmy Jessen Nielsen, Andrew J. Forsyth, Constantinos B. Papadias, Petar Popovski
Sustainable Wireless Services with UAV Swarms Tailored to Renewable Energy Sources
To be published in Transactions on Smart Grid
null
null
null
cs.NI eess.SP
http://creativecommons.org/licenses/by/4.0/
Unmanned Aerial Vehicle (UAV) swarms are often required in off-grid scenarios, such as disaster-struck, war-torn or rural areas, where the UAVs have no access to the power grid and instead rely on renewable energy. Considering a main battery fed from two renewable sources, wind and solar, we scale such a system based on the financial budget, environmental characteristics, and seasonal variations. Interestingly, the source of energy is correlated with the energy expenditure of the UAVs, since strong winds cause UAV hovering to become increasingly energy-hungry. The aim is to maximize the cost efficiency of coverage at a particular location, which is a combinatorial optimization problem for dimensioning the multivariate energy generation system under non-convex criteria. We have devised a customized algorithm that lowers the processing complexity by reducing the solution space through sampling. Evaluation is done with condensed real-world data on wind, solar energy, and traffic load per unit area, driven by vendor-provided prices. The implementation was tested in four locations with varying wind and solar intensity. The best results were achieved in locations with mild wind presence and strong solar irradiation, while locations with strong winds and low solar intensity require higher Capital Expenditure (CAPEX) allocation.
[ { "version": "v1", "created": "Tue, 22 Nov 2022 12:30:39 GMT" }, { "version": "v2", "created": "Wed, 23 Nov 2022 17:16:56 GMT" } ]
2022-11-24T00:00:00
[ [ "Donevski", "Igor", "" ], [ "Virgili", "Marco", "" ], [ "Babu", "Nithin", "" ], [ "Nielsen", "Jimmy Jessen", "" ], [ "Forsyth", "Andrew J.", "" ], [ "Papadias", "Constantinos B.", "" ], [ "Popovski", "Petar", "" ] ]
new_dataset
0.991444
2211.12352
Chao Wang
Chao Wang, Ana Serrano, Xingang Pan, Bin Chen, Hans-Peter Seidel, Christian Theobalt, Karol Myszkowski, Thomas Leimkuehler
GlowGAN: Unsupervised Learning of HDR Images from LDR Images in the Wild
null
null
null
null
cs.CV eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most in-the-wild images are stored in Low Dynamic Range (LDR) form, serving as a partial observation of the High Dynamic Range (HDR) visual world. Despite limited dynamic range, these LDR images are often captured with different exposures, implicitly containing information about the underlying HDR image distribution. Inspired by this intuition, in this work we present, to the best of our knowledge, the first method for learning a generative model of HDR images from in-the-wild LDR image collections in a fully unsupervised manner. The key idea is to train a generative adversarial network (GAN) to generate HDR images which, when projected to LDR under various exposures, are indistinguishable from real LDR images. The projection from HDR to LDR is achieved via a camera model that captures the stochasticity in exposure and camera response function. Experiments show that our method GlowGAN can synthesize photorealistic HDR images in many challenging cases such as landscapes, lightning, or windows, where previous supervised generative models produce overexposed images. We further demonstrate the new application of unsupervised inverse tone mapping (ITM) enabled by GlowGAN. Our ITM method does not need HDR images or paired multi-exposure images for training, yet it reconstructs more plausible information for overexposed regions than state-of-the-art supervised learning models trained on such data.
[ { "version": "v1", "created": "Tue, 22 Nov 2022 15:42:08 GMT" }, { "version": "v2", "created": "Wed, 23 Nov 2022 10:12:43 GMT" } ]
2022-11-24T00:00:00
[ [ "Wang", "Chao", "" ], [ "Serrano", "Ana", "" ], [ "Pan", "Xingang", "" ], [ "Chen", "Bin", "" ], [ "Seidel", "Hans-Peter", "" ], [ "Theobalt", "Christian", "" ], [ "Myszkowski", "Karol", "" ], [ "Leimkuehler", "Thomas", "" ] ]
new_dataset
0.960065
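The camera model at the heart of the GlowGAN setup above, projecting a generated HDR image to LDR under a stochastic exposure and a camera response function, can be sketched in a few lines. The gamma-style response and the exposure range are illustrative assumptions, not the paper's exact parameterization:

```python
# Project linear-light HDR to an 8-bit-like LDR observation.
import numpy as np

def hdr_to_ldr(hdr, rng):
    exposure = 2.0 ** rng.uniform(-4.0, 2.0)                     # stochastic exposure (stops)
    response = np.clip(hdr * exposure, 0.0, 1.0) ** (1.0 / 2.2)  # toy camera response
    return np.round(response * 255.0) / 255.0                    # 8-bit quantization

rng = np.random.default_rng(0)
hdr = rng.gamma(shape=1.0, scale=1.0, size=(64, 64, 3))          # toy HDR radiance
ldr = hdr_to_ldr(hdr, rng)                                       # what a discriminator sees
```

Training then only requires that such projections of generated HDR images be indistinguishable from real LDR photographs.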
2211.12544
Casey Peat
Casey Peat, Oliver Batchelor, Richard Green, James Atlas
Zero NeRF: Registration with Zero Overlap
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present Zero-NeRF, a projective surface registration method that, to the best of our knowledge, offers the first general solution capable of alignment between scene representations with minimal or zero visual correspondence. To do this, we enforce consistency between visible surfaces of partial and complete reconstructions, which allows us to constrain occluded geometry. We use a NeRF as our surface representation and the NeRF rendering pipeline to perform this alignment. To demonstrate the efficacy of our method, we register real-world scenes from opposite sides with infinitesimal overlaps that cannot be accurately registered using prior methods, and we compare these results against widely used registration methods.
[ { "version": "v1", "created": "Tue, 22 Nov 2022 19:29:48 GMT" } ]
2022-11-24T00:00:00
[ [ "Peat", "Casey", "" ], [ "Batchelor", "Oliver", "" ], [ "Green", "Richard", "" ], [ "Atlas", "James", "" ] ]
new_dataset
0.997344
2211.12570
Tharindu Ranasinghe Dr
Marcos Zampieri, Tharindu Ranasinghe, Mrinal Chaudhari, Saurabh Gaikwad, Prajwal Krishna, Mayuresh Nene, Shrunali Paygude
Predicting the Type and Target of Offensive Social Media Posts in Marathi
This is a preprint of an article published in the Journal of Intelligent Information Systems, Springer. The final authenticated version is available online at https://link.springer.com/article/10.1007/s13278-022-00906-8
null
null
null
cs.CL cs.AI cs.CY cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
The presence of offensive language on social media is very common, motivating platforms to invest in strategies to make communities safer. This includes developing robust machine learning systems capable of recognizing offensive content online. Apart from a few notable exceptions, most research on automatic offensive language identification has dealt with English and a few other high-resource languages such as French, German, and Spanish. In this paper we address this gap by tackling offensive language identification in Marathi, a low-resource Indo-Aryan language spoken in India. We introduce the Marathi Offensive Language Dataset v.2.0 or MOLD 2.0 and present multiple experiments on this dataset. MOLD 2.0 is a much larger version of MOLD with expanded annotation to the levels B (type) and C (target) of the popular OLID taxonomy. MOLD 2.0 is the first hierarchical offensive language dataset compiled for Marathi, thus opening new avenues for research in low-resource Indo-Aryan languages. Finally, we also introduce SeMOLD, a larger dataset annotated following the semi-supervised methods presented in SOLID.
[ { "version": "v1", "created": "Tue, 22 Nov 2022 20:36:44 GMT" } ]
2022-11-24T00:00:00
[ [ "Zampieri", "Marcos", "" ], [ "Ranasinghe", "Tharindu", "" ], [ "Chaudhari", "Mrinal", "" ], [ "Gaikwad", "Saurabh", "" ], [ "Krishna", "Prajwal", "" ], [ "Nene", "Mayuresh", "" ], [ "Paygude", "Shrunali", "" ] ]
new_dataset
0.991227
2211.12604
Nataliya Shapovalova
Tejas Khot, Nataliya Shapovalova, Silviu Andrei, Walterio Mayol-Cuevas
SuperTran: Reference Based Video Transformer for Enhancing Low Bitrate Streams in Real Time
4 pages
null
null
null
cs.CV cs.LG eess.IV
http://creativecommons.org/licenses/by/4.0/
This work focuses on low bitrate video streaming scenarios (e.g., 50-200 Kbps) where the video quality is severely compromised. We present a family of novel deep generative models for enhancing the perceptual video quality of such streams by performing super-resolution while also removing compression artifacts. Our model, which we call SuperTran, consumes as input a single high-quality, high-resolution reference image in addition to the low-quality, low-resolution video stream. The model thus learns how to borrow or copy visual elements like textures from the reference image and fill in the remaining details from the low-resolution stream in order to produce perceptually enhanced output video. The reference frame can be sent once at the start of the video session or be retrieved from a gallery. Importantly, the resulting output has substantially better detail than what has been otherwise possible with methods that only use a low-resolution input such as the SuperVEGAN method. SuperTran works in real time (up to 30 frames/sec) on the cloud alongside standard pipelines.
[ { "version": "v1", "created": "Tue, 22 Nov 2022 22:03:11 GMT" } ]
2022-11-24T00:00:00
[ [ "Khot", "Tejas", "" ], [ "Shapovalova", "Nataliya", "" ], [ "Andrei", "Silviu", "" ], [ "Mayol-Cuevas", "Walterio", "" ] ]
new_dataset
0.99876
2211.12656
Huangying Zhan
Huangying Zhan, Jiyang Zheng, Yi Xu, Ian Reid, Hamid Rezatofighi
ActiveRMAP: Radiance Field for Active Mapping And Planning
Under review
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by-sa/4.0/
A high-quality 3D reconstruction of a scene from a collection of 2D images can be achieved through offline/online mapping methods. In this paper, we explore active mapping from the perspective of implicit representations, which have recently produced compelling results in a variety of applications. One of the most popular implicit representations, the Neural Radiance Field (NeRF), first demonstrated photorealistic rendering results using multi-layer perceptrons, with promising offline 3D reconstruction as a by-product of the radiance field. More recently, researchers have also applied this implicit representation for online reconstruction and localization (i.e., implicit SLAM systems). However, the study of using implicit representations for active vision tasks is still very limited. In this paper, we are particularly interested in applying the neural radiance field to active mapping and planning problems, which are closely coupled tasks in an active system. We, for the first time, present an RGB-only active vision framework using radiance field representations for active 3D reconstruction and planning in an online manner. Specifically, we formulate this joint task as an iterative dual-stage optimization problem, where we alternately optimize the radiance field representation and path planning. Experimental results suggest that the proposed method achieves competitive results compared to other offline methods and outperforms active reconstruction methods using NeRFs.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 01:19:30 GMT" } ]
2022-11-24T00:00:00
[ [ "Zhan", "Huangying", "" ], [ "Zheng", "Jiyang", "" ], [ "Xu", "Yi", "" ], [ "Reid", "Ian", "" ], [ "Rezatofighi", "Hamid", "" ] ]
new_dataset
0.996292
2211.12668
Gong Cheng
Xiao Li, Yin Zhu, Sichen Liu, Jiangzhou Ju, Yuzhong Qu, Gong Cheng
DyRRen: A Dynamic Retriever-Reranker-Generator Model for Numerical Reasoning over Tabular and Textual Data
9 pages, accepted by AAAI 2023
null
null
null
cs.CL cs.AI cs.IR
http://creativecommons.org/licenses/by/4.0/
Numerical reasoning over hybrid data containing tables and long texts has recently received research attention from the AI community. To generate an executable reasoning program consisting of math and table operations to answer a question, state-of-the-art methods use a retriever-generator pipeline. However, their retrieval results are static, while different generation steps may rely on different sentences. To attend to the retrieved information that is relevant to each generation step, in this paper, we propose DyRRen, an extended retriever-reranker-generator framework where each generation step is enhanced by a dynamic reranking of retrieved sentences. It outperforms existing baselines on the FinQA dataset.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 02:41:50 GMT" } ]
2022-11-24T00:00:00
[ [ "Li", "Xiao", "" ], [ "Zhu", "Yin", "" ], [ "Liu", "Sichen", "" ], [ "Ju", "Jiangzhou", "" ], [ "Qu", "Yuzhong", "" ], [ "Cheng", "Gong", "" ] ]
new_dataset
0.994311
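The control flow that distinguishes DyRRen from a static retriever-generator pipeline is that reranking happens inside the decoding loop. The sketch below uses trivial stand-ins (keyword overlap as the reranker, a dummy generation step) purely to show that per-step structure; the paper's components are neural and are not reproduced here:

```python
# Dynamic retriever-reranker-generator loop with toy scoring.
def rerank(sentences, partial_program):
    context = set(" ".join(partial_program).lower().split())
    return sorted(sentences, key=lambda s: len(context & set(s.lower().split())),
                  reverse=True)

def generate_program(question, sentences, steps=3):
    program = [question]
    for _ in range(steps):
        ranked = rerank(sentences, program)          # re-scored before EVERY step
        evidence = ranked[:2]                        # generator attends to top sentences
        program.append(f"op({evidence[0][:30]!r})")  # stand-in generation step
    return program[1:]

sents = ["revenue in 2020 was 12.4 million", "the table lists operating costs",
         "net income grew by 3 percent"]
print(generate_program("what was the revenue growth", sents))
```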
2211.12737
Pierre Chambon
Pierre Chambon, Christian Bluethgen, Jean-Benoit Delbrouck, Rogier Van der Sluijs, Ma{\l}gorzata Po{\l}acin, Juan Manuel Zambrano Chaves, Tanishq Mathew Abraham, Shivanshu Purohit, Curtis P. Langlotz, Akshay Chaudhari
RoentGen: Vision-Language Foundation Model for Chest X-ray Generation
19 pages
null
null
null
cs.CV cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multimodal models trained on large natural image-text pair datasets have exhibited astounding abilities in generating high-quality images. Medical imaging data is fundamentally different from natural images, and the language used to succinctly capture relevant details in medical data uses a different, narrow but semantically rich, domain-specific vocabulary. Not surprisingly, multimodal models trained on natural image-text pairs do not tend to generalize well to the medical domain. Developing generative imaging models that faithfully represent medical concepts while providing compositional diversity could mitigate the existing paucity of high-quality, annotated medical imaging datasets. In this work, we develop a strategy to overcome the large natural-medical distributional shift by adapting a pre-trained latent diffusion model on a corpus of publicly available chest x-rays (CXR) and their corresponding radiology (text) reports. We investigate the model's ability to generate high-fidelity, diverse synthetic CXR conditioned on text prompts. We assess the model outputs quantitatively using image quality metrics, and evaluate image quality and text-image alignment by human domain experts. We present evidence that the resulting model (RoentGen) is able to create visually convincing, diverse synthetic CXR images, and that the output can be controlled to a new extent by using free-form text prompts including radiology-specific language. Fine-tuning this model on a fixed training set and using it as a data augmentation method, we measure a 5% improvement of a classifier trained jointly on synthetic and real images, and a 3% improvement when trained on a larger but purely synthetic training set. Finally, we observe that this fine-tuning distills in-domain knowledge into the text encoder and can improve its representation of certain diseases, such as pneumothorax, by 25%.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 06:58:09 GMT" } ]
2022-11-24T00:00:00
[ [ "Chambon", "Pierre", "" ], [ "Bluethgen", "Christian", "" ], [ "Delbrouck", "Jean-Benoit", "" ], [ "Van der Sluijs", "Rogier", "" ], [ "Połacin", "Małgorzata", "" ], [ "Chaves", "Juan Manuel Zambrano", "" ], [ "Abraham", "Tanishq Mathew", "" ], [ "Purohit", "Shivanshu", "" ], [ "Langlotz", "Curtis P.", "" ], [ "Chaudhari", "Akshay", "" ] ]
new_dataset
0.956254
2211.12752
Abhilasha Sancheti
Abhilasha Sancheti, Aparna Garimella, Balaji Vasan Srinivasan, Rachel Rudinger
Agent-Specific Deontic Modality Detection in Legal Language
Accepted at EMNLP 2022
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Legal documents are typically long and written in legalese, which makes it particularly difficult for laypeople to understand their rights and duties. While natural language understanding technologies can be valuable in supporting such understanding in the legal domain, the limited availability of datasets annotated for deontic modalities in the legal domain, due to the cost of hiring experts and privacy issues, is a bottleneck. To this end, we introduce LEXDEMOD, a corpus of English contracts annotated with deontic modality expressed with respect to a contracting party or agent, along with the modal triggers. We benchmark this dataset on two tasks: (i) agent-specific multi-label deontic modality classification, and (ii) agent-specific deontic modality and trigger span detection using Transformer-based (Vaswani et al., 2017) language models. Transfer learning experiments show that the linguistic diversity of modal expressions in LEXDEMOD generalizes reasonably from lease to employment and rental agreements. A small case study indicates that a model trained on LEXDEMOD can detect red flags with high recall. We believe our work offers a new research direction for deontic modality detection in the legal domain.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 07:32:23 GMT" } ]
2022-11-24T00:00:00
[ [ "Sancheti", "Abhilasha", "" ], [ "Garimella", "Aparna", "" ], [ "Srinivasan", "Balaji Vasan", "" ], [ "Rudinger", "Rachel", "" ] ]
new_dataset
0.999626
2211.12796
Deepa Gopinath PhD
Deepa P Gopinath, Thennal D K, Vrinda V Nair, Swaraj K S, Sachin G
IMaSC -- ICFOSS Malayalam Speech Corpus
18 pages, 8 figures
null
null
null
cs.SD cs.CL eess.AS
http://creativecommons.org/licenses/by-sa/4.0/
Modern text-to-speech (TTS) systems use deep learning to synthesize speech increasingly approaching human quality, but they require a database of high-quality audio-text sentence pairs for training. Malayalam, the official language of the Indian state of Kerala and spoken by 35+ million people, is a low-resource language in terms of available corpora for TTS systems. In this paper, we present IMaSC, a Malayalam text and speech corpus containing approximately 50 hours of recorded speech. With 8 speakers and a total of 34,473 text-audio pairs, IMaSC is larger than every other publicly available alternative. We evaluated the database by using it to train TTS models for each speaker based on a modern deep learning architecture. Via subjective evaluation, we show that our models perform significantly better in terms of naturalness compared to previous studies and publicly available models, with an average mean opinion score of 4.50, indicating that the synthesized speech is close to human quality.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 09:21:01 GMT" } ]
2022-11-24T00:00:00
[ [ "Gopinath", "Deepa P", "" ], [ "K", "Thennal D", "" ], [ "Nair", "Vrinda V", "" ], [ "S", "Swaraj K", "" ], [ "G", "Sachin", "" ] ]
new_dataset
0.999749
2211.12799
Mital Kinderkhedia
Juan Cruz Viotti, Mital Kinderkhedia
Benchmarking JSON BinPack
41 Pages. arXiv admin note: substantial text overlap with arXiv:2201.03051
null
null
null
cs.SE
http://creativecommons.org/licenses/by/4.0/
In this paper, we present benchmark results for a pre-production implementation of a novel serialization specification: JSON BinPack. JSON BinPack is a schema-driven and schema-less sequential binary serialization specification based on JSON Schema. It is rich in diverse encodings, and is developed to improve network performance and reduce the operational costs of Internet-based software systems. We present benchmark results for 27 JSON documents and for each plot, we show the schema-driven and schema-less serialization specifications that produce the smallest bit-strings. Through extensive plots and statistical comparisons, we show that JSON BinPack in schema-driven mode is as space-efficient or more space-efficient than every other serialization specification for the 27 documents under consideration. In comparison to JSON, JSON BinPack in schema-driven mode provides median and average size reductions of 86.7% and 78.7%, respectively. We also show that the schema-less mode of the JSON BinPack binary serialization specification is as space-efficient or more space-efficient than every other schema-less serialization specification for the 27 documents under consideration. In comparison to JSON, JSON BinPack in schema-less mode provides median and average size reductions of 30.6% and 30.5%, respectively. Unlike other considered schema-driven binary serialization specifications, JSON BinPack in schema-driven mode is space-efficient in comparison to best-case compressed JSON in terms of the median and average, with size reductions of 76.1% and 66.8%, respectively. We have made our benchmark results available at jviotti/binary-json-size-benchmark on GitHub.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 09:33:05 GMT" } ]
2022-11-24T00:00:00
[ [ "Viotti", "Juan Cruz", "" ], [ "Kinderkhedia", "Mital", "" ] ]
new_dataset
0.999788
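The headline numbers above are plain summary statistics over per-document byte counts, which makes them easy to recompute from published benchmark data. A sketch with made-up sizes (the real figures live in the jviotti/binary-json-size-benchmark repository):

```python
# Median/average size reduction of an encoding relative to JSON.
from statistics import mean, median

json_bytes    = [1200, 850, 4300, 220]   # hypothetical JSON sizes in bytes
binpack_bytes = [160, 110, 600, 30]      # hypothetical JSON BinPack sizes

reductions = [100.0 * (j - b) / j for j, b in zip(json_bytes, binpack_bytes)]
print(f"median reduction:  {median(reductions):.1f}%")
print(f"average reduction: {mean(reductions):.1f}%")
```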
2211.12827
Tianyu Wang
Zhenghao Xing, Tianyu Wang, Xiaowei Hu, Haoran Wu, Chi-Wing Fu, Pheng-Ann Heng
Video Instance Shadow Detection
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Video instance shadow detection aims to simultaneously detect, segment, associate, and track paired shadow-object associations in videos. This work makes three key contributions to the task. First, we design SSIS-Track, a new framework to extract shadow-object associations in videos with paired tracking and without category specification; in particular, we strive to maintain paired tracking even when the objects/shadows are temporarily occluded for several frames. Second, we leverage both labeled images and unlabeled videos, and explore temporal coherence by augmenting the tracking ability via an association cycle consistency loss to optimize SSIS-Track's performance. Last, we build $\textit{SOBA-VID}$, a new dataset with 232 unlabeled videos of ${5,863}$ frames for training and 60 labeled videos of ${1,182}$ frames for testing. Experimental results show that SSIS-Track surpasses baselines built from SOTA video tracking and instance-shadow-detection methods by a large margin. In the end, we showcase several video-level applications.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 10:20:19 GMT" } ]
2022-11-24T00:00:00
[ [ "Xing", "Zhenghao", "" ], [ "Wang", "Tianyu", "" ], [ "Hu", "Xiaowei", "" ], [ "Wu", "Haoran", "" ], [ "Fu", "Chi-Wing", "" ], [ "Heng", "Pheng-Ann", "" ] ]
new_dataset
0.976458
2211.12852
Nicholas Walker
Nicholas Thomas Walker, Stefan Ultes, Pierre Lison
GraphWOZ: Dialogue Management with Conversational Knowledge Graphs
null
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We present a new approach to dialogue management using conversational knowledge graphs as core representation of the dialogue state. To this end, we introduce a new dataset, GraphWOZ, which comprises Wizard-of-Oz dialogues in which human participants interact with a robot acting as a receptionist. In contrast to most existing work on dialogue management, GraphWOZ relies on a dialogue state explicitly represented as a dynamic knowledge graph instead of a fixed set of slots. This graph is composed of a varying number of entities (such as individuals, places, events, utterances and mentions) and relations between them (such as persons being part of a group or attending an event). The graph is then regularly updated on the basis of new observations and system actions. GraphWOZ is released along with detailed manual annotations related to the user intents, system responses, and reference relations occurring in both user and system turns. Based on GraphWOZ, we present experimental results for two dialogue management tasks, namely conversational entity linking and response ranking. For conversational entity linking, we show how to connect utterance mentions to their corresponding entity in the knowledge graph with a neural model relying on a combination of both string and graph-based features. Response ranking is then performed by summarizing the relevant content of the graph into a text, which is concatenated with the dialogue history and employed as input to score possible responses to a given dialogue state.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 10:53:21 GMT" } ]
2022-11-24T00:00:00
[ [ "Walker", "Nicholas Thomas", "" ], [ "Ultes", "Stefan", "" ], [ "Lison", "Pierre", "" ] ]
new_dataset
0.999552
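A dialogue state held as a dynamic knowledge graph, as in GraphWOZ, is essentially typed entity nodes plus relation triples that grow with each turn. A minimal sketch (entity and relation names are illustrative, not the dataset's schema):

```python
# Dynamic dialogue-state graph: entities, triples, and simple neighbourhood queries.
class DialogueGraph:
    def __init__(self):
        self.entities = {}            # id -> {"type": ..., plus attributes}
        self.triples = set()          # (subject_id, relation, object_id)

    def add_entity(self, eid, etype, **attrs):
        self.entities[eid] = {"type": etype, **attrs}

    def relate(self, subj, rel, obj):
        self.triples.add((subj, rel, obj))

    def neighbours(self, eid):
        return [(r, o) for s, r, o in self.triples if s == eid]

state = DialogueGraph()
state.add_entity("p1", "person", name="Ana")
state.add_entity("e1", "event", title="project meeting", room="R2")
state.relate("p1", "attends", "e1")
state.add_entity("u7", "utterance", text="When is Ana's meeting?")
state.relate("u7", "mentions", "p1")   # the output of conversational entity linking
print(state.neighbours("u7"))
```

Response ranking in the paper then serializes the relevant subgraph to text and scores candidate responses against it together with the dialogue history.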
2211.12988
Yuntao Wang
Yuntao Wang, Zhou Su, Qichao Xu, Ruidong Li, Tom H. Luan, and Pinghui Wang
A Secure and Intelligent Data Sharing Scheme for UAV-Assisted Disaster Rescue
Accepted by IEEE/ACM Transactions on Networking (ToN)
null
null
null
cs.MA cs.GT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned aerial vehicles (UAVs) have the potential to establish flexible and reliable emergency networks in disaster sites when terrestrial communication infrastructures go down. Nevertheless, potential security threats may occur on UAVs during data transmissions due to the untrusted environment and open-access UAV networks. Moreover, UAVs typically have limited battery and computation capacity, making them unaffordable for heavy security provisioning operations when performing complicated rescue tasks. In this paper, we develop RescueChain, a secure and efficient information sharing scheme for UAV-assisted disaster rescue. Specifically, we first implement a lightweight blockchain-based framework to safeguard data sharing under disasters and immutably trace misbehaving entities. A reputation-based consensus protocol is devised to adapt to the weakly connected environment with improved consensus efficiency and to promote UAVs' honest behaviors. Furthermore, we introduce a novel vehicular fog computing (VFC)-based off-chain mechanism by leveraging ground vehicles as moving fog nodes to offload UAVs' heavy data processing and storage tasks. To offload computational tasks from the UAVs to ground vehicles having idle computing resources, an optimal allocation strategy is developed by choosing payoffs that achieve equilibrium in a Stackelberg game formulation of the allocation problem. Since sufficient knowledge of network model parameters and users' private cost parameters is often unavailable in practical environments, we also design a two-tier deep reinforcement learning-based algorithm to seek the optimal payment and resource strategies of UAVs and vehicles with improved learning efficiency. Simulation results show that RescueChain can effectively accelerate the consensus process, improve offloading efficiency, reduce energy consumption, and enhance user payoffs.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 14:49:08 GMT" } ]
2022-11-24T00:00:00
[ [ "Wang", "Yuntao", "" ], [ "Su", "Zhou", "" ], [ "Xu", "Qichao", "" ], [ "Li", "Ruidong", "" ], [ "Luan", "Tom H.", "" ], [ "Wang", "Pinghui", "" ] ]
new_dataset
0.985098
2211.13067
Tianyu Wang
Tianyu Wang, Xiaowei Hu, Zhengzhe Liu, Chi-Wing Fu
Sparse2Dense: Learning to Densify 3D Features for 3D Object Detection
Accepted to 36th Conference on Neural Information Processing Systems (NeurIPS 2022). Code will be released on https://github.com/stevewongv/Sparse2Dense
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
LiDAR-produced point clouds are the major source for most state-of-the-art 3D object detectors. Yet, small, distant, and incomplete objects with sparse or few points are often hard to detect. We present Sparse2Dense, a new framework to efficiently boost 3D detection performance by learning to densify point clouds in latent space. Specifically, we first train a dense point 3D detector (DDet) with a dense point cloud as input and design a sparse point 3D detector (SDet) with a regular point cloud as input. Importantly, we formulate the lightweight plug-in S2D module and the point cloud reconstruction module in SDet to densify 3D features and train SDet to produce 3D features, following the dense 3D features in DDet. So, in inference, SDet can simulate dense 3D features from regular (sparse) point cloud inputs without requiring dense inputs. We evaluate our method on the large-scale Waymo Open Dataset and the Waymo Domain Adaptation Dataset, showing its high performance and efficiency over the state of the arts.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 16:01:06 GMT" } ]
2022-11-24T00:00:00
[ [ "Wang", "Tianyu", "" ], [ "Hu", "Xiaowei", "" ], [ "Liu", "Zhengzhe", "" ], [ "Fu", "Chi-Wing", "" ] ]
new_dataset
0.998959
2211.13091
Zhen Hao Gan
Zhen Hao Gan, Yangwei You, Meng Yee (Michael) Chuah
Navigation with Tactile Sensor for Natural Human-Robot Interaction
null
null
null
null
cs.RO
http://creativecommons.org/licenses/by-nc-nd/4.0/
Tactile sensors have been introduced to a wide range of robotic tasks, such as robot manipulation, to mimic the sense of human touch. However, there have only been a few works that integrate tactile sensing into robot navigation. This paper describes a navigation system which allows robots to operate in crowded human-dense environments and behave with socially acceptable reactions by utilizing semantic and force information collected by embedded tactile sensors, an RGB-D camera, and LiDAR. Compliance control is implemented based on artificial potential fields considering not only laser scans but also force readings from the tactile sensors, which promises a fast and reliable response to any possible collision. In contrast to cameras, LiDAR, and other non-contact sensors, tactile sensors can directly interact with humans and can be used to accept social cues akin to natural human behavior in the same situation. Furthermore, leveraging semantic segmentation from the vision module, the robot is able to identify and, therefore, assign varying social costs to different groups of humans, enabling socially conscious path planning. At the end of this paper, the proposed control strategy was validated successfully by testing several scenarios on an omni-directional robot in the real world.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 16:19:56 GMT" } ]
2022-11-24T00:00:00
[ [ "Gan", "Zhen Hao", "", "Michael" ], [ "You", "Yangwei", "", "Michael" ], [ "Yee", "Meng", "", "Michael" ], [ "Chuah", "", "" ] ]
new_dataset
0.995318
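The compliance controller described above, artificial potential fields fed by both laser and tactile readings, has a compact core: the goal attracts, and every close laser return or activated tactile cell repels. A sketch with illustrative gains (not the paper's tuned values):

```python
# Potential-field velocity command fusing laser points and tactile contacts.
import numpy as np

def pf_velocity(goal, laser_pts, contact_forces, contact_normals,
                k_att=1.0, k_rep=0.5, k_force=2.0, d0=1.5):
    v = k_att * goal / (np.linalg.norm(goal) + 1e-9)          # attraction to goal
    for p in laser_pts:                                       # repulsion, laser
        d = np.linalg.norm(p)
        if d < d0:
            v -= k_rep * (1.0 / d - 1.0 / d0) * p / d**3      # classic repulsive gradient
    for f, n in zip(contact_forces, contact_normals):         # repulsion, touch
        v -= k_force * f * n       # comply: move away along the contact normal
    return v

goal = np.array([2.0, 0.0])
laser = [np.array([0.8, 0.3]), np.array([1.2, -0.9])]
print(pf_velocity(goal, laser, [3.0], [np.array([0.0, 1.0])]))  # one lateral contact
```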
2211.13114
Ali Abedi
Shehroz S. Khan and Ali Abedi
Step Counting with Attention-based LSTM
null
null
null
EFI-94-11
cs.CV eess.SP
http://creativecommons.org/licenses/by/4.0/
Physical activity is recognized as an essential component of overall health. One measure of physical activity, the step count, is well known as a predictor of long-term morbidity and mortality. Step Counting (SC) is the automated counting of the number of steps an individual takes over a specified period of time and space. Due to the ubiquity of smartphones and smartwatches, most current SC approaches rely on the built-in accelerometer sensors on these devices. The sensor signals are analyzed as multivariate time series, and the number of steps is calculated through a variety of approaches, such as time-domain, frequency-domain, machine-learning, and deep-learning approaches. Most of the existing approaches rely on dividing the input signal into windows, detecting steps in each window, and summing the detected steps. However, these approaches require the determination of multiple parameters, including the window size. Furthermore, most of the existing deep-learning SC approaches require ground-truth labels for every single step, which can be arduous and time-consuming to annotate. To circumvent these requirements, we present a novel SC approach utilizing many-to-one attention-based LSTM. With the proposed LSTM network, SC is solved as a regression problem, taking the entire sensor signal as input and the step count as the output. The analysis shows that the attention-based LSTM automatically learned the pattern of steps even in the absence of ground-truth labels. The experimental results on three publicly available SC datasets demonstrate that the proposed method successfully counts the number of steps with low values of mean absolute error and high values of SC accuracy.
[ { "version": "v1", "created": "Fri, 18 Nov 2022 03:33:57 GMT" } ]
2022-11-24T00:00:00
[ [ "Khan", "Shehroz S.", "" ], [ "Abedi", "Ali", "" ] ]
new_dataset
0.986982
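The many-to-one attention LSTM described above is straightforward to write down: the LSTM encodes the whole window, per-time-step attention scores collapse the hidden states into one context vector, and a linear head regresses the count. A sketch in PyTorch with illustrative layer sizes:

```python
# Many-to-one attention LSTM for step-count regression.
import torch
import torch.nn as nn

class AttnLSTMCounter(nn.Module):
    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(in_dim, hidden, batch_first=True)
        self.attn = nn.Linear(hidden, 1)          # one score per time step
        self.head = nn.Linear(hidden, 1)          # regress the step count

    def forward(self, x):                         # x: (batch, time, channels)
        h, _ = self.lstm(x)                       # (batch, time, hidden)
        w = torch.softmax(self.attn(h), dim=1)    # attention over time
        context = (w * h).sum(dim=1)              # weighted summary vector
        return self.head(context).squeeze(-1)     # (batch,) predicted counts

model = AttnLSTMCounter()
signal = torch.randn(8, 500, 3)                   # 8 windows, 500 samples, 3 axes
loss = nn.functional.l1_loss(model(signal), torch.full((8,), 42.0))
```

Because the target is a single number per window, only the per-window step count needs annotating, not each individual step.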
2211.13121
Onur Varol
Ali Najafi, Nihat Mugurtay, Ege Demirci, Serhat Demirkiran, Huseyin Alper Karadeniz, Onur Varol
#Secim2023: First Public Dataset for Studying Turkish General Election
22 pages, 9 figures
null
null
null
cs.SI cs.CY cs.IR
http://creativecommons.org/licenses/by-nc-nd/4.0/
In the context of Turkey's upcoming parliamentary and presidential elections ("se\c{c}im" in Turkish), social media is playing an important role in shaping public debate. The increasing engagement of citizens on social media platforms has led to the growing use of social media by political actors. It is of utmost importance to capture the upcoming Turkish elections, as social media is becoming an essential component of election propaganda, political debates, smear campaigns, and election manipulation by domestic and international actors. We provide a comprehensive dataset for social media researchers to study the upcoming election, develop tools to prevent online manipulation, and gather novel information to inform the public. We are committed to continually improving the data collection and updating it regularly leading up to the election. Using the Secim2023 dataset, researchers can examine the social and communication networks between political actors, track current trends, and investigate emerging threats to election integrity. Our dataset is available at: https://github.com/ViralLab/Secim2023_Dataset
[ { "version": "v1", "created": "Tue, 22 Nov 2022 11:42:32 GMT" } ]
2022-11-24T00:00:00
[ [ "Najafi", "Ali", "" ], [ "Mugurtay", "Nihat", "" ], [ "Demirci", "Ege", "" ], [ "Demirkiran", "Serhat", "" ], [ "Karadeniz", "Huseyin Alper", "" ], [ "Varol", "Onur", "" ] ]
new_dataset
0.999894
2211.13184
Shuming Ma
Shuming Ma, Hongyu Wang, Shaohan Huang, Wenhui Wang, Zewen Chi, Li Dong, Alon Benhaim, Barun Patra, Vishrav Chaudhary, Xia Song, Furu Wei
TorchScale: Transformers at Scale
Work in progress
null
null
null
cs.LG cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large Transformers have achieved state-of-the-art performance across many tasks. Most open-source libraries on scaling Transformers focus on improving training or inference with better parallelization. In this work, we present TorchScale, an open-source toolkit that allows researchers and developers to scale up Transformers efficiently and effectively. TorchScale has the implementation of several modeling techniques, which can improve modeling generality and capability, as well as training stability and efficiency. Experimental results on language modeling and neural machine translation demonstrate that TorchScale can successfully scale Transformers to different sizes without tears. The library is available at https://aka.ms/torchscale.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 17:58:51 GMT" } ]
2022-11-24T00:00:00
[ [ "Ma", "Shuming", "" ], [ "Wang", "Hongyu", "" ], [ "Huang", "Shaohan", "" ], [ "Wang", "Wenhui", "" ], [ "Chi", "Zewen", "" ], [ "Dong", "Li", "" ], [ "Benhaim", "Alon", "" ], [ "Patra", "Barun", "" ], [ "Chaudhary", "Vishrav", "" ], [ "Song", "Xia", "" ], [ "Wei", "Furu", "" ] ]
new_dataset
0.999477
2211.13189
Sara Atito
Sara Atito, Muhammad Awais, Wenwu Wang, Mark D Plumbley, Josef Kittler
ASiT: Audio Spectrogram vIsion Transformer for General Audio Representation
null
null
null
null
cs.SD cs.CV eess.AS
http://creativecommons.org/licenses/by/4.0/
Vision transformers, which were originally developed for natural language processing, have recently generated significant interest in the computer vision and audio communities due to their flexibility in learning long-range relationships. Constrained by the data-hungry nature of transformers and limited labelled data, most transformer-based models for audio tasks are finetuned from ImageNet pretrained models, despite the huge gap between the natural image domain and the audio domain. This has motivated research in self-supervised pretraining of audio transformers, which reduces the dependency on large amounts of labeled data and focuses on extracting concise representations of audio spectrograms. In this paper, we propose ASiT, a novel self-supervised transformer for general audio representations that captures local and global contextual information employing group masked model learning and self-distillation. We evaluate our pretrained models on both audio and speech classification tasks, including audio event classification, keyword spotting, and speaker identification. We further conduct comprehensive ablation studies, including evaluations of different pretraining strategies. The proposed ASiT framework significantly boosts the performance on all tasks and sets a new state-of-the-art performance on five audio and speech classification tasks, outperforming recent methods, including the approaches that use additional datasets for pretraining. The code and pretrained weights will be made publicly available for the scientific community.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 18:21:09 GMT" } ]
2022-11-24T00:00:00
[ [ "Atito", "Sara", "" ], [ "Awais", "Muhammad", "" ], [ "Wang", "Wenwu", "" ], [ "Plumbley", "Mark D", "" ], [ "Kittler", "Josef", "" ] ]
new_dataset
0.992937
2211.13195
Maliheh Shirvanian
Mihai Christodorescu, Maliheh Shirvanian, and Shams Zawoad
Privacy-Preserving Application-to-Application Authentication Using Dynamic Runtime Behaviors
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
Application authentication is typically performed using some form of secret credentials such as cryptographic keys, passwords, or API keys. Since clients are responsible for securely storing and managing the keys, this approach is vulnerable to attacks on clients. Similarly, a centrally managed key store is also susceptible to various attacks and, if compromised, can leak credentials. To resolve such issues, we propose an application authentication scheme in which we rely on the application's unique and distinguishable behavior to lock the key during a setup phase and unlock it for authentication. Our system adds a fuzzy-extractor layer on top of current credential authentication systems. During a key enrollment process, the application's behavioral data collected from various sensors in the network are used to hide the credential key. The fuzzy extractor releases the key to the server if the application's behavior during authentication matches the behavior collected during enrollment, with some noise tolerance. We designed the system, analyzed its security, and implemented and evaluated it using 10 real-life applications deployed in our network. Our security analysis shows that the system is secure against client compromise, vault compromise, and feature observation. The evaluation shows the scheme can achieve a 0 percent False Accept Rate with an average False Rejection Rate of 14 percent, and takes about 51 ms to successfully authenticate a client. In light of these promising results, we expect our system to be of practical use, since its deployment requires zero to minimal changes on the server.
[ { "version": "v1", "created": "Wed, 23 Nov 2022 18:28:39 GMT" } ]
2022-11-24T00:00:00
[ [ "Christodorescu", "Mihai", "" ], [ "Shirvanian", "Maliheh", "" ], [ "Zawoad", "Shams", "" ] ]
new_dataset
0.979391
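The lock/unlock idea above is essentially a fuzzy commitment: an error-tolerant binding of a key to a noisy bitstring. A toy sketch using a repetition code (real systems use proper fuzzy extractors and stronger codes; everything below is illustration only):

```python
# Toy fuzzy commitment: helper = repetition-coded key XOR behavioral bits.
import secrets

R = 15  # repetition factor: tolerates up to 7 bit flips per key bit

def lock(key_bits, behavior_bits):
    codeword = [b for b in key_bits for _ in range(R)]
    return [c ^ w for c, w in zip(codeword, behavior_bits)]       # public helper

def unlock(helper, behavior_bits):
    noisy = [h ^ w for h, w in zip(helper, behavior_bits)]        # codeword + noise
    return [int(sum(noisy[i * R:(i + 1) * R]) > R // 2)           # majority decode
            for i in range(len(noisy) // R)]

key = [secrets.randbelow(2) for _ in range(8)]
enrolled = [secrets.randbelow(2) for _ in range(8 * R)]           # quantized behavior
helper = lock(key, enrolled)
fresh = list(enrolled); fresh[0] ^= 1; fresh[50] ^= 1             # slightly noisy repeat
assert unlock(helper, fresh) == key                               # key is released
```

If the fresh behavior drifts beyond the code's error tolerance, the decoded key is wrong and authentication fails, which is exactly the intended locking behavior.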
2101.07518
Fu-Jen Tsai
Fu-Jen Tsai, Yan-Tsung Peng, Yen-Yu Lin, Chung-Chi Tsai, and Chia-Wen Lin
BANet: Blur-aware Attention Networks for Dynamic Scene Deblurring
TIP 2022, Code: https://github.com/pp00704831/BANet
null
10.1109/TIP.2022.3216216
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Image motion blur results from a combination of object motions and camera shakes, and such blurring effects are generally directional and non-uniform. Previous research attempted to solve non-uniform blurs using self-recurrent multiscale, multi-patch, or multi-temporal architectures with self-attention to obtain decent results. However, using self-recurrent frameworks typically leads to a longer inference time, while inter-pixel or inter-channel self-attention may cause excessive memory usage. This paper proposes a Blur-aware Attention Network (BANet) that accomplishes accurate and efficient deblurring via a single forward pass. Our BANet utilizes region-based self-attention with multi-kernel strip pooling to disentangle blur patterns of different magnitudes and orientations, and cascaded parallel dilated convolution to aggregate multi-scale content features. Extensive experimental results on the GoPro and RealBlur benchmarks demonstrate that the proposed BANet performs favorably against the state of the art in blurred image restoration and can provide deblurred results in real time.
[ { "version": "v1", "created": "Tue, 19 Jan 2021 09:03:40 GMT" }, { "version": "v2", "created": "Thu, 8 Jul 2021 03:40:12 GMT" }, { "version": "v3", "created": "Thu, 14 Jul 2022 08:57:44 GMT" }, { "version": "v4", "created": "Tue, 25 Oct 2022 12:00:50 GMT" } ]
2022-11-23T00:00:00
[ [ "Tsai", "Fu-Jen", "" ], [ "Peng", "Yan-Tsung", "" ], [ "Lin", "Yen-Yu", "" ], [ "Tsai", "Chung-Chi", "" ], [ "Lin", "Chia-Wen", "" ] ]
new_dataset
0.993062
2103.13933
Hubert P. H. Shum
Brian K. S. Isaac-Medina, Matt Poyser, Daniel Organisciak, Chris G. Willcocks, Toby P. Breckon, Hubert P. H. Shum
Unmanned Aerial Vehicle Visual Detection and Tracking using Deep Neural Networks: A Performance Benchmark
null
null
10.1109/ICCVW54120.2021.00142
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Unmanned Aerial Vehicles (UAV) can pose a major risk for aviation safety, due to both negligent and malicious use. For this reason, the automated detection and tracking of UAV is a fundamental task in aerial security systems. Common technologies for UAV detection include visible-band and thermal infrared imaging, radio frequency and radar. Recent advances in deep neural networks (DNNs) for image-based object detection open the possibility to use visual information for this detection and tracking task. Furthermore, these detection architectures can be implemented as backbones for visual tracking systems, thereby enabling persistent tracking of UAV incursions. To date, no comprehensive performance benchmark exists that applies DNNs to visible-band imagery for UAV detection and tracking. To this end, three datasets with varied environmental conditions for UAV detection and tracking, comprising a total of 241 videos (331,486 images), are assessed using four detection architectures and three tracking frameworks. The best performing detector architecture obtains an mAP of 98.6% and the best performing tracking framework obtains a MOTA of 96.3%. Cross-modality evaluation is carried out between visible and infrared spectrums, achieving a maximal 82.8% mAP on visible images when training in the infrared modality. These results provide the first public multi-approach benchmark for state-of-the-art deep learning-based methods and give insight into which detection and tracking architectures are effective in the UAV domain.
[ { "version": "v1", "created": "Thu, 25 Mar 2021 15:51:53 GMT" }, { "version": "v2", "created": "Mon, 29 Mar 2021 13:50:11 GMT" }, { "version": "v3", "created": "Wed, 18 Aug 2021 14:58:41 GMT" } ]
2022-11-23T00:00:00
[ [ "Isaac-Medina", "Brian K. S.", "" ], [ "Poyser", "Matt", "" ], [ "Organisciak", "Daniel", "" ], [ "Willcocks", "Chris G.", "" ], [ "Breckon", "Toby P.", "" ], [ "Shum", "Hubert P. H.", "" ] ]
new_dataset
0.9834
2106.14259
Hitoshi Nishimura
Hitoshi Nishimura, Satoshi Komorita, Yasutomo Kawanishi, Hiroshi Murase
SDOF-Tracker: Fast and Accurate Multiple Human Tracking by Skipped-Detection and Optical-Flow
null
null
10.1587/transinf.2022EDP7022
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Multiple human tracking is a fundamental problem for scene understanding. Although both accuracy and speed are required in real-world applications, recent tracking methods based on deep learning have focused on accuracy and require substantial running time. This study aims to improve running speed by performing human detection only at a certain frame interval, because detection accounts for most of the running time. The question is how to maintain accuracy while skipping human detection. In this paper, we propose a method that complements the detection results with optical flow, based on the fact that a person's appearance does not change much between adjacent frames. To maintain the tracking accuracy, we introduce robust interest point selection within human regions and a tracking termination metric calculated from the distribution of the interest points. On the MOT20 dataset in the MOTChallenge, the proposed SDOF-Tracker achieved the best performance in terms of the total running speed while maintaining the MOTA metric. Our code is available at https://github.com/hitottiez/sdof-tracker.
[ { "version": "v1", "created": "Sun, 27 Jun 2021 15:35:35 GMT" }, { "version": "v2", "created": "Tue, 29 Jun 2021 04:58:45 GMT" }, { "version": "v3", "created": "Sat, 30 Apr 2022 13:03:47 GMT" } ]
2022-11-23T00:00:00
[ [ "Nishimura", "Hitoshi", "" ], [ "Komorita", "Satoshi", "" ], [ "Kawanishi", "Yasutomo", "" ], [ "Murase", "Hiroshi", "" ] ]
new_dataset
0.966388
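The skipped-detection idea above amounts to: run the detector every k-th frame and, in between, shift each person's box by the optical flow of interest points inside it. A sketch using OpenCV's pyramidal Lucas-Kanade tracker; grid sampling and the median-flow shift are stand-ins for the paper's robust interest-point selection and termination metric:

```python
# Propagate a detection box to the next frame via sparse optical flow.
import cv2
import numpy as np

def propagate_box(prev_gray, next_gray, box):
    x1, y1, x2, y2 = box
    xs, ys = np.meshgrid(np.linspace(x1, x2, 8), np.linspace(y1, y2, 8))
    pts = np.stack([xs.ravel(), ys.ravel()], 1).astype(np.float32).reshape(-1, 1, 2)
    nxt, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.ravel() == 1
    if ok.sum() < 4:                       # too few tracked points: candidate
        return None                        # for tracking termination
    dx, dy = np.median((nxt[ok] - pts[ok]).reshape(-1, 2), axis=0)
    return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)
```

Because flow is only evaluated at a few dozen points per person, this is far cheaper than running the detector on every frame.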
2107.00420
Tingting Liang
Tingting Liang, Xiaojie Chu, Yudong Liu, Yongtao Wang, Zhi Tang, Wei Chu, Jingdong Chen, Haibin Ling
CBNet: A Composite Backbone Network Architecture for Object Detection
IEEE Transactions on Image Processing (TIP) camera ready
null
10.1109/TIP.2022.3216771
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Modern top-performing object detectors depend heavily on backbone networks, whose advances bring consistent performance gains through exploring more effective network structures. In this paper, we propose a novel and flexible backbone framework, namely CBNetV2, to construct high-performance detectors using existing open-sourced pre-trained backbones under the pre-training fine-tuning paradigm. In particular, CBNetV2 architecture groups multiple identical backbones, which are connected through composite connections. Specifically, it integrates the high- and low-level features of multiple backbone networks and gradually expands the receptive field to more efficiently perform object detection. We also propose a better training strategy with assistant supervision for CBNet-based detectors. Without additional pre-training of the composite backbone, CBNetV2 can be adapted to various backbones (CNN-based vs. Transformer-based) and head designs of most mainstream detectors (one-stage vs. two-stage, anchor-based vs. anchor-free-based). Experiments provide strong evidence that, compared with simply increasing the depth and width of the network, CBNetV2 introduces a more efficient, effective, and resource-friendly way to build high-performance backbone networks. Particularly, our Dual-Swin-L achieves 59.4% box AP and 51.6% mask AP on COCO test-dev under the single-model and single-scale testing protocol, which is significantly better than the state-of-the-art result (57.7% box AP and 50.2% mask AP) achieved by Swin-L, while the training schedule is reduced by 6$\times$. With multi-scale testing, we push the current best single model result to a new record of 60.1% box AP and 52.3% mask AP without using extra training data. Code is available at https://github.com/VDIGPKU/CBNetV2.
[ { "version": "v1", "created": "Thu, 1 Jul 2021 13:05:11 GMT" }, { "version": "v2", "created": "Fri, 2 Jul 2021 06:44:58 GMT" }, { "version": "v3", "created": "Wed, 7 Jul 2021 16:42:55 GMT" }, { "version": "v4", "created": "Mon, 12 Jul 2021 09:12:05 GMT" }, { "version": "v5", "created": "Sat, 24 Jul 2021 16:50:16 GMT" }, { "version": "v6", "created": "Thu, 29 Jul 2021 03:28:29 GMT" }, { "version": "v7", "created": "Tue, 18 Oct 2022 05:09:09 GMT" } ]
2022-11-23T00:00:00
[ [ "Liang", "Tingting", "" ], [ "Chu", "Xiaojie", "" ], [ "Liu", "Yudong", "" ], [ "Wang", "Yongtao", "" ], [ "Tang", "Zhi", "" ], [ "Chu", "Wei", "" ], [ "Chen", "Jingdong", "" ], [ "Ling", "Haibin", "" ] ]
new_dataset
0.999316
2112.09131
Ali Athar
Ali Athar, Jonathon Luiten, Alexander Hermans, Deva Ramanan, Bastian Leibe
HODOR: High-level Object Descriptors for Object Re-segmentation in Video Learned from Static Images
null
null
10.1109/CVPR52688.2022.00303
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Existing state-of-the-art methods for Video Object Segmentation (VOS) learn low-level pixel-to-pixel correspondences between frames to propagate object masks across video. This requires a large amount of densely annotated video data, which is costly to annotate, and largely redundant since frames within a video are highly correlated. In light of this, we propose HODOR: a novel method that tackles VOS by effectively leveraging annotated static images for understanding object appearance and scene context. We encode object instances and scene information from an image frame into robust high-level descriptors which can then be used to re-segment those objects in different frames. As a result, HODOR achieves state-of-the-art performance on the DAVIS and YouTube-VOS benchmarks compared to existing methods trained without video annotations. Without any architectural modification, HODOR can also learn from video context around single annotated video frames by utilizing cyclic consistency, whereas other methods rely on dense, temporally consistent annotations. Source code is available at: https://github.com/Ali2500/HODOR
[ { "version": "v1", "created": "Thu, 16 Dec 2021 18:59:53 GMT" }, { "version": "v2", "created": "Fri, 15 Jul 2022 13:15:16 GMT" } ]
2022-11-23T00:00:00
[ [ "Athar", "Ali", "" ], [ "Luiten", "Jonathon", "" ], [ "Hermans", "Alexander", "" ], [ "Ramanan", "Deva", "" ], [ "Leibe", "Bastian", "" ] ]
new_dataset
0.997769
2201.05047
Qianyu Zhou
Qianyu Zhou, Xiangtai Li, Lu He, Yibo Yang, Guangliang Cheng, Yunhai Tong, Lizhuang Ma, Dacheng Tao
TransVOD: End-to-End Video Object Detection with Spatial-Temporal Transformers
Accepted to IEEE Transactions on Pattern Analysis and Machine Intelligence (IEEE TPAMI), extended version of arXiv:2105.10920
null
10.1109/TPAMI.2022.3223955
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Detection Transformer (DETR) and Deformable DETR have been proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance comparable to previous complex hand-crafted detectors. However, their performance on Video Object Detection (VOD) has not been well explored. In this paper, we present TransVOD, the first end-to-end video object detection system based on spatial-temporal Transformer architectures. The first goal of this paper is to streamline the pipeline of VOD, effectively removing the need for many hand-crafted components for feature aggregation, e.g., optical flow models and relation networks. Moreover, benefiting from the object query design in DETR, our method does not need complicated post-processing methods such as Seq-NMS. In particular, we present a temporal Transformer to aggregate both the spatial object queries and the feature memories of each frame. Our temporal transformer consists of two components: a Temporal Query Encoder (TQE) to fuse object queries, and a Temporal Deformable Transformer Decoder (TDTD) to obtain current frame detection results. These designs boost the strong baseline deformable DETR by a significant margin (3%-4% mAP) on the ImageNet VID dataset. Then, we present two improved versions of TransVOD, TransVOD++ and TransVOD Lite. The former fuses object-level information into the object query via dynamic convolution, while the latter models the entire video clip as the output to speed up inference. We give a detailed analysis of all three models in the experiment part. In particular, our proposed TransVOD++ sets a new state-of-the-art record in terms of accuracy on ImageNet VID with 90.0% mAP. Our proposed TransVOD Lite also achieves the best speed and accuracy trade-off with 83.7% mAP while running at around 30 FPS on a single V100 GPU device.
[ { "version": "v1", "created": "Thu, 13 Jan 2022 16:17:34 GMT" }, { "version": "v2", "created": "Fri, 14 Jan 2022 07:19:08 GMT" }, { "version": "v3", "created": "Mon, 17 Jan 2022 02:06:34 GMT" }, { "version": "v4", "created": "Tue, 22 Nov 2022 06:07:22 GMT" } ]
2022-11-23T00:00:00
[ [ "Zhou", "Qianyu", "" ], [ "Li", "Xiangtai", "" ], [ "He", "Lu", "" ], [ "Yang", "Yibo", "" ], [ "Cheng", "Guangliang", "" ], [ "Tong", "Yunhai", "" ], [ "Ma", "Lizhuang", "" ], [ "Tao", "Dacheng", "" ] ]
new_dataset
0.995929
2202.12559
Xiuling Shan
Honglian Shen, Xiuling Shan, Zihong Tian
A new chaotic image encryption algorithm based on transversals in a Latin square
added one reference for section 1, corrected the first author's name in Metadata, added the author's name in reference [28]
null
10.3390/e24111574
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There are some good combinatorial structures suitable for image encryption. In this study, a new chaotic image encryption algorithm based on transversals in a Latin square is proposed. By means of an n-transversal of a Latin square, we first permute the image data group by group, then use two Latin squares for auxiliary diffusion based on a chaotic sequence, and finally use a pair of orthogonal Latin squares to perform a second scrambling. As a whole, an encryption process of "scrambling-diffusion-scrambling" is formed. The experimental results indicate that this algorithm achieves secure and fast encryption. The final information entropy is very close to 8, and the correlation coefficient is approximately 0. All these tests verify the robustness and practicability of the proposed algorithm.
[ { "version": "v1", "created": "Fri, 25 Feb 2022 08:44:24 GMT" }, { "version": "v2", "created": "Sat, 12 Mar 2022 02:39:22 GMT" } ]
2022-11-23T00:00:00
[ [ "Shen", "Honglian", "" ], [ "Shan", "Xiuling", "" ], [ "Tian", "Zihong", "" ] ]
new_dataset
0.963322
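The orthogonal-pair scrambling stage above is easy to demonstrate. For odd n, L1(i,j) = (i + j) mod n and L2(i,j) = (2i + j) mod n are orthogonal Latin squares, so the pair (L1, L2) hits every cell exactly once and defines a pixel permutation. A sketch of just this stage (the transversal permutation and chaotic diffusion stages are omitted):

```python
# Pixel scrambling from a pair of orthogonal Latin squares (odd n).
import numpy as np

def latin_scramble(img):
    n = img.shape[0]
    i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    r, c = (i + j) % n, (2 * i + j) % n    # orthogonal pair -> bijection on cells
    out = np.empty_like(img)
    out[r, c] = img[i, j]
    return out

img = np.arange(25, dtype=np.uint8).reshape(5, 5)
scrambled = latin_scramble(img)
assert sorted(scrambled.ravel()) == sorted(img.ravel())   # a pure permutation
```

Because the map (i, j) -> (r, c) is invertible, decryption applies the inverse permutation before undoing the diffusion.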
2205.02678
Juan Pablo Carbajal
Stevens Paz, Roberto F. Ausas, Juan P. Carbajal, Gustavo C. Buscaglia
Chemoreception and chemotaxis of a three-sphere swimmer
20 pages, 13 figures. Submitted to "Communications in Nonlinear Science and Numerical Simulation"
null
10.1016/j.cnsns.2022.106909
null
cs.LG nlin.AO physics.flu-dyn
http://creativecommons.org/licenses/by-sa/4.0/
The coupled problem of hydrodynamics and solute transport for the Najafi-Golestanian three-sphere swimmer is studied, with the Reynolds number set to zero and P\'eclet numbers (Pe) ranging from 0.06 to 60. The adopted method is the numerical simulation of the problem with a finite element code based upon the FEniCS library. For the swimmer executing the optimal locomotion gait, we report the Sherwood number as a function of Pe in homogeneous fluids and confirm that little gain in solute flux is achieved by swimming unless Pe is significantly larger than 10. We also consider the swimmer as a learning agent moving inside a fluid that has a concentration gradient. The outcomes of Q-learning processes show that learning locomotion (with the displacement as reward) is significantly easier than learning chemotaxis (with the increase of solute flux as reward). The chemotaxis problem, even at low Pe, has a varying environment that renders learning more difficult. Further, the learning difficulty increases severely with the P\'eclet number. The results demonstrate the challenges that natural and artificial swimmers need to overcome to migrate efficiently when exposed to chemical inhomogeneities.
[ { "version": "v1", "created": "Thu, 5 May 2022 14:34:04 GMT" } ]
2022-11-23T00:00:00
[ [ "Paz", "Stevens", "" ], [ "Ausas", "Roberto F.", "" ], [ "Carbajal", "Juan P.", "" ], [ "Buscaglia", "Gustavo C.", "" ] ]
new_dataset
0.999466
2205.08180
Antoine Laurent
Sameer Khurana and Antoine Laurent and James Glass
SAMU-XLSR: Semantically-Aligned Multimodal Utterance-level Cross-Lingual Speech Representation
null
null
10.1109/JSTSP.2022.3192714
null
cs.CL cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose SAMU-XLSR: a Semantically-Aligned Multimodal Utterance-level Cross-Lingual Speech Representation learning framework. Unlike previous works on speech representation learning, which learn multilingual contextual speech embeddings at the resolution of an acoustic frame (10-20 ms), this work focuses on learning multimodal (speech-text) multilingual speech embeddings at the resolution of a sentence (5-10 s), such that the embedding vector space is semantically aligned across different languages. We combine the state-of-the-art multilingual acoustic frame-level speech representation learning model XLS-R with the Language Agnostic BERT Sentence Embedding (LaBSE) model to create an utterance-level multimodal multilingual speech encoder, SAMU-XLSR. Although we train SAMU-XLSR with only multilingual transcribed speech data, cross-lingual speech-text and speech-speech associations emerge in its learned representation space. To substantiate our claims, we use the SAMU-XLSR speech encoder in combination with a pre-trained LaBSE text sentence encoder for cross-lingual speech-to-text translation retrieval, and SAMU-XLSR alone for cross-lingual speech-to-speech translation retrieval. We highlight these applications by performing several cross-lingual text and speech translation retrieval tasks across several datasets.
[ { "version": "v1", "created": "Tue, 17 May 2022 08:58:48 GMT" } ]
2022-11-23T00:00:00
[ [ "Khurana", "Sameer", "" ], [ "Laurent", "Antoine", "" ], [ "Glass", "James", "" ] ]
new_dataset
0.997862
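The core training signal described in the SAMU-XLSR abstract above, pooling frame-level speech features into an utterance embedding and pulling it toward a frozen text sentence embedding, can be sketched as below. The pooling choice, projection, dimensions, and cosine-distance loss are our own simplifications for illustration, not the released model.

```python
# Illustrative sketch of utterance-level speech/text alignment (assumed setup).
import torch
import torch.nn as nn
import torch.nn.functional as F

class UtterancePooler(nn.Module):
    def __init__(self, d_frame=1024, d_sent=768):
        super().__init__()
        self.proj = nn.Linear(d_frame, d_sent)

    def forward(self, frame_feats):                 # (batch, n_frames, d_frame)
        pooled = frame_feats.mean(dim=1)            # simple mean pooling over frames
        return F.normalize(self.proj(pooled), dim=-1)

pooler = UtterancePooler()
speech = torch.randn(4, 300, 1024)                  # stand-in for XLS-R frame features
text_emb = F.normalize(torch.randn(4, 768), dim=-1) # stand-in for frozen LaBSE outputs
speech_emb = pooler(speech)
loss = (1.0 - (speech_emb * text_emb).sum(dim=-1)).mean()  # cosine distance
loss.backward()
print(float(loss))
```

Because the text encoder stays frozen, any speech utterance trained this way lands in the text encoder's already language-agnostic space, which is what enables the cross-lingual retrieval applications.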
2205.11169
Yuan Yao
Yuan Yao, Qianyu Chen, Ao Zhang, Wei Ji, Zhiyuan Liu, Tat-Seng Chua, Maosong Sun
PEVL: Position-enhanced Pre-training and Prompt Tuning for Vision-language Models
Accepted by EMNLP 2022
null
null
null
cs.CV cs.AI cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Vision-language pre-training (VLP) has shown impressive performance on a wide range of cross-modal tasks, where VLP models without reliance on object detectors are becoming the mainstream due to their superior computation efficiency and competitive performance. However, the removal of object detectors also deprives VLP models of the capability for explicit object modeling, which is essential to various position-sensitive vision-language (VL) tasks, such as referring expression comprehension and visual commonsense reasoning. To address the challenge, we introduce PEVL, which enhances the pre-training and prompt tuning of VLP models with explicit object position modeling. Specifically, PEVL reformulates discretized object positions and language in a unified language modeling framework, which facilitates explicit VL alignment during pre-training and also enables flexible prompt tuning for various downstream tasks. We show that PEVL enables state-of-the-art performance of detector-free VLP models on position-sensitive tasks such as referring expression comprehension and phrase grounding, and also improves the performance on position-insensitive tasks with grounded inputs. We make the data and code for this paper publicly available at https://github.com/thunlp/PEVL.
[ { "version": "v1", "created": "Mon, 23 May 2022 10:17:53 GMT" }, { "version": "v2", "created": "Tue, 22 Nov 2022 06:59:30 GMT" } ]
2022-11-23T00:00:00
[ [ "Yao", "Yuan", "" ], [ "Chen", "Qianyu", "" ], [ "Zhang", "Ao", "" ], [ "Ji", "Wei", "" ], [ "Liu", "Zhiyuan", "" ], [ "Chua", "Tat-Seng", "" ], [ "Sun", "Maosong", "" ] ]
new_dataset
0.99761
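The key reformulation in the PEVL abstract above, expressing object positions as discrete tokens that live alongside ordinary words, can be sketched in a few lines. The token format, bin count, and normalization below are assumptions chosen for illustration; they are not PEVL's exact vocabulary or scheme.

```python
# Illustrative sketch of discretized position tokens (assumed format, not PEVL's).
def box_to_tokens(box, image_w, image_h, n_bins=512):
    """Map a (x0, y0, x1, y1) box to four discrete position tokens."""
    x0, y0, x1, y1 = box
    norm = [x0 / image_w, y0 / image_h, x1 / image_w, y1 / image_h]
    bins = [min(int(v * n_bins), n_bins - 1) for v in norm]
    return [f"<pos_{b}>" for b in bins]

caption = ["the", "dog"] + box_to_tokens((48, 120, 300, 460), 640, 480) + ["is", "running"]
print(" ".join(caption))
# the dog <pos_38> <pos_128> <pos_240> <pos_490> is running
```

Once positions are just tokens, a single language-modeling objective can supervise both words and locations, which is what makes the same interface reusable for prompt tuning downstream.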
2205.13879
Kota Dohi
Kota Dohi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, Masaaki Yamamoto, Yuki Nikaido, and Yohei Kawaguchi
MIMII DG: Sound Dataset for Malfunctioning Industrial Machine Investigation and Inspection for Domain Generalization Task
null
null
null
null
cs.SD cs.AI cs.LG eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a machine sound dataset to benchmark domain generalization techniques for anomalous sound detection (ASD). Domain shifts are differences in data distributions that can degrade detection performance, and handling them is a major issue for the application of ASD systems. While currently available datasets for ASD tasks assume that the occurrences of domain shifts are known, in practice they can be difficult to detect. To handle such domain shifts, domain generalization techniques that perform well regardless of the domain should be investigated. In this paper, we present the first ASD dataset for domain generalization techniques, called MIMII DG. The dataset consists of five machine types and three domain-shift scenarios for each machine type. It is dedicated to the domain generalization task: it provides multiple different values for the parameters that cause domain shifts and introduces domain shifts that can be difficult to detect, such as shifts in the background noise. Experimental results using two baseline systems indicate that the dataset reproduces domain-shift scenarios and is useful for benchmarking domain generalization techniques.
[ { "version": "v1", "created": "Fri, 27 May 2022 10:19:16 GMT" }, { "version": "v2", "created": "Tue, 22 Nov 2022 02:14:02 GMT" } ]
2022-11-23T00:00:00
[ [ "Dohi", "Kota", "" ], [ "Nishida", "Tomoya", "" ], [ "Purohit", "Harsh", "" ], [ "Tanabe", "Ryo", "" ], [ "Endo", "Takashi", "" ], [ "Yamamoto", "Masaaki", "" ], [ "Nikaido", "Yuki", "" ], [ "Kawaguchi", "Yohei", "" ] ]
new_dataset
0.999816
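For readers unfamiliar with the baseline systems mentioned in the MIMII DG abstract above, a widely used ASD baseline is an autoencoder trained on normal machine sounds, with reconstruction error as the anomaly score; domain shifts hurt exactly because the reconstruction error also rises on shifted-but-normal data. The sketch below is our own generic version of that idea, not the paper's exact baseline; the architecture and feature shapes are assumptions.

```python
# Generic autoencoder ASD baseline (illustrative assumption, not the paper's system).
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    def __init__(self, n_mels=128, d_hidden=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(n_mels, 64), nn.ReLU(), nn.Linear(64, d_hidden))
        self.dec = nn.Sequential(nn.Linear(d_hidden, 64), nn.ReLU(), nn.Linear(64, n_mels))

    def forward(self, x):
        return self.dec(self.enc(x))

def anomaly_score(model, frames):                  # frames: (n_frames, n_mels)
    with torch.no_grad():
        recon = model(frames)
    return float(((frames - recon) ** 2).mean())   # higher = more anomalous

model = FrameAutoencoder()
clip = torch.randn(200, 128)                       # stand-in log-mel spectrogram frames
print(anomaly_score(model, clip))
```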
2206.08853
Linxi Fan
Linxi Fan, Guanzhi Wang, Yunfan Jiang, Ajay Mandlekar, Yuncong Yang, Haoyi Zhu, Andrew Tang, De-An Huang, Yuke Zhu, Anima Anandkumar
MineDojo: Building Open-Ended Embodied Agents with Internet-Scale Knowledge
Outstanding Paper Award at NeurIPS 2022. Project website: https://minedojo.org
null
null
null
cs.LG cs.AI cs.CL cs.CV
http://creativecommons.org/licenses/by/4.0/
Autonomous agents have made great strides in specialist domains like Atari games and Go. However, they typically learn tabula rasa in isolated environments with limited and manually conceived objectives, thus failing to generalize across a wide spectrum of tasks and capabilities. Inspired by how humans continually learn and adapt in the open world, we advocate a trinity of ingredients for building generalist agents: 1) an environment that supports a multitude of tasks and goals, 2) a large-scale database of multimodal knowledge, and 3) a flexible and scalable agent architecture. We introduce MineDojo, a new framework built on the popular Minecraft game that features a simulation suite with thousands of diverse open-ended tasks and an internet-scale knowledge base with Minecraft videos, tutorials, wiki pages, and forum discussions. Using MineDojo's data, we propose a novel agent learning algorithm that leverages large pre-trained video-language models as a learned reward function. Our agent is able to solve a variety of open-ended tasks specified in free-form language without any manually designed dense shaping reward. We open-source the simulation suite, knowledge bases, algorithm implementation, and pretrained models (https://minedojo.org) to promote research towards the goal of generally capable embodied agents.
[ { "version": "v1", "created": "Fri, 17 Jun 2022 15:53:05 GMT" }, { "version": "v2", "created": "Tue, 22 Nov 2022 07:59:47 GMT" } ]
2022-11-23T00:00:00
[ [ "Fan", "Linxi", "" ], [ "Wang", "Guanzhi", "" ], [ "Jiang", "Yunfan", "" ], [ "Mandlekar", "Ajay", "" ], [ "Yang", "Yuncong", "" ], [ "Zhu", "Haoyi", "" ], [ "Tang", "Andrew", "" ], [ "Huang", "De-An", "" ], [ "Zhu", "Yuke", "" ], [ "Anandkumar", "Anima", "" ] ]
new_dataset
0.999217
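The learned reward function described in the MineDojo abstract above amounts to scoring recent observations against a free-form task description in a shared video-language embedding space. The sketch below shows that idea in miniature; every name here (the encoders, shapes, and reward form) is a placeholder assumption, not the MineDojo API or its pre-trained model.

```python
# Toy sketch of a video-language similarity reward (placeholder names throughout).
import torch
import torch.nn.functional as F

def language_reward(video_encoder, text_emb, frames):
    """frames: (n_frames, C, H, W) recent observations; text_emb: (d,) task embedding."""
    clip_emb = F.normalize(video_encoder(frames.unsqueeze(0)).squeeze(0), dim=-1)
    return float((clip_emb * F.normalize(text_emb, dim=-1)).sum())

# Stand-in encoder for the sketch: mean-pool pixels over time/space, then project.
d = 32
proj = torch.nn.Linear(3, d)
def toy_video_encoder(batch):                      # (B, T, C, H, W) -> (B, d)
    return proj(batch.mean(dim=(1, 3, 4)))

frames = torch.randn(16, 3, 64, 64)                # last 16 frames of gameplay
task_emb = torch.randn(d)                          # embedding of e.g. "chop a tree"
print(language_reward(toy_video_encoder, task_emb, frames))
```

A dense reward of this shape is what lets the agent learn open-ended tasks from language alone, without a hand-designed shaping reward per task.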
2206.08965
Jose Manuel Gilperez Aguilar Dr.
V\'ictor Corsino, Jos\'e Manuel Gilp\'erez, Luis Herrera
KitBit: A New AI Model for Solving Intelligence Tests and Numerical Series
11 pages
null
null
null
cs.AI cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
The resolution of intelligence tests, in particular numerical series, has been of great interest in the evaluation of AI systems. We present a new computational model called KitBit that uses a reduced set of algorithms and their combinations to build a predictive model that finds the underlying pattern in numerical sequences, such as those included in IQ tests and others of much greater complexity. We present the fundamentals of the model and its application in different cases. First, the system is tested on a set of number series used in IQ tests, collected from various sources. Next, our model is successfully applied to the sequences used to evaluate the models reported in the literature. In both cases, the system is capable of solving these types of problems in less than a second using standard computing power. Finally, KitBit's algorithms have been applied for the first time to the complete set of sequences in the well-known OEIS database. We find a pattern in the form of a list of algorithms and predict the following terms for the largest number of series to date. These results demonstrate the potential of KitBit to solve complex problems that can be represented numerically.
[ { "version": "v1", "created": "Fri, 17 Jun 2022 18:40:11 GMT" }, { "version": "v2", "created": "Tue, 22 Nov 2022 18:23:20 GMT" } ]
2022-11-23T00:00:00
[ [ "Corsino", "Víctor", "" ], [ "Gilpérez", "José Manuel", "" ], [ "Herrera", "Luis", "" ] ]
new_dataset
0.999517
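To illustrate the "reduced set of algorithms and their combinations" idea from the KitBit abstract above, here is a toy searcher over three elementary patterns. It is our own miniature stand-in, not KitBit's actual algorithm library, which the abstract describes as far richer.

```python
# Toy pattern search over number series (our own sketch, not KitBit's algorithms).
def predict_next(series):
    # Constant first differences (arithmetic progression)?
    d = [b - a for a, b in zip(series, series[1:])]
    if len(set(d)) == 1:
        return series[-1] + d[0]
    # Constant ratios (geometric progression)?
    if all(x != 0 for x in series):
        r = [b / a for a, b in zip(series, series[1:])]
        if len(set(r)) == 1:
            return series[-1] * r[0]
    # Constant second differences (quadratic pattern)?
    d2 = [b - a for a, b in zip(d, d[1:])]
    if len(set(d2)) == 1:
        return series[-1] + d[-1] + d2[0]
    return None  # no pattern found in this tiny library

print(predict_next([2, 5, 8, 11]))    # 14 (arithmetic)
print(predict_next([3, 6, 12, 24]))   # 48.0 (geometric)
print(predict_next([1, 4, 9, 16]))    # 25 (quadratic)
```

Scaling this idea up means enlarging the library and searching over *compositions* of its algorithms, which is where the combinatorial power, and the engineering difficulty, lies.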
2207.02563
Xinying Ma
Xinying Ma, Zhi Chen, Chongwen Huang
Nanoscale Reconfigurable Intelligent Surface Design and Performance Analysis for Terahertz Communications
9 pages, 8 figures. arXiv admin note: substantial text overlap with arXiv:2012.06993
null
10.1109/TNANO.2022.3208193
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by-sa/4.0/
Terahertz (THz) communications have been envisioned as a promising enabler of ultra-high data transmission for sixth generation (6G) wireless networks. To tackle the blockage vulnerability brought about by the severe attenuation and poor diffraction of THz waves, a nanoscale reconfigurable intelligent surface (NRIS) is developed to smartly manipulate the propagation directions of incident THz waves. In this paper, the electric properties of graphene are investigated by revealing the relationship between its conductivity and the applied voltage, and an efficient hardware structure for an electrically controlled NRIS is then designed based on a Fabry-Perot resonance model. In particular, the phase response of the NRIS can be programmed over a range of up to 306.82 degrees. To analyze the hardware performance, we jointly design the passive and active beamforming for the NRIS-aided THz communication system. In particular, an adaptive gradient descent (A-GD) algorithm is developed to optimize the phase shift matrix of the NRIS by dynamically updating the step size during the iterative process. Finally, numerical results demonstrate the effectiveness of our designed hardware architecture as well as the developed algorithm.
[ { "version": "v1", "created": "Wed, 6 Jul 2022 10:22:29 GMT" } ]
2022-11-23T00:00:00
[ [ "Ma", "Xinying", "" ], [ "Chen", "Zhi", "" ], [ "Huang", "Chongwen", "" ] ]
new_dataset
0.995636
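The adaptive-step idea behind the A-GD algorithm mentioned in the abstract above, growing the step size while the objective improves and shrinking it otherwise, can be demonstrated on a toy RIS objective: maximizing the cascaded channel gain over the element phase shifts. The channel model, update rule, and step schedule below are our own simplifications, not the paper's exact system model or algorithm.

```python
# Toy adaptive-step gradient ascent on RIS phase shifts (assumed toy model).
import numpy as np

rng = np.random.default_rng(0)
N = 64                                      # number of RIS elements
h_t = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
h_r = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)
a = np.conj(h_r) * h_t                      # per-element cascaded coefficients

def gain(theta):
    return np.abs(np.sum(a * np.exp(1j * theta))) ** 2

theta, step = rng.uniform(0, 2 * np.pi, N), 0.1
best = gain(theta)
for _ in range(200):
    s = np.sum(a * np.exp(1j * theta))
    grad = -2.0 * np.imag(np.conj(s) * a * np.exp(1j * theta))  # d gain / d theta
    cand = theta + step * grad
    if gain(cand) > best:                   # improvement: accept and grow the step
        theta, best, step = cand, gain(cand), step * 1.1
    else:                                   # no improvement: reject and shrink the step
        step *= 0.5
print(best)                                 # approaches the upper bound (sum |a_k|)^2
```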
2208.02473
Geonho Han
Geonho Han, Junil Choi, and Robert W. Heath Jr
Radar Imaging Based on IEEE 802.11ad Waveform in V2I Communications
null
null
10.1109/TSP.2022.3213488
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Since most vehicular radar systems already exploit millimeter-wave (mmWave) spectra, it becomes much more feasible to implement a joint radar and communication system by extending communication frequencies into the mmWave band. In this paper, an IEEE 802.11ad waveform-based radar imaging technique is proposed for vehicular settings. A roadside unit (RSU) transmits the IEEE 802.11ad waveform to a vehicle for communications while also listening to the echoes of the transmitted waveform to perform inverse synthetic aperture radar (ISAR) imaging. To obtain high-resolution images of the vehicle, the RSU needs to accurately estimate the round-trip delays, Doppler shifts, and velocity of the vehicle. The proposed ISAR imaging first estimates the round-trip delays using the good correlation properties of the Golay complementary sequences in the IEEE 802.11ad preamble. The Doppler shifts are then obtained using least squares estimation from the echo signals and refined to compensate for the phase wrapping caused by phase rotation. The velocity of the vehicle is determined using an equation of motion and the estimated Doppler shifts. Simulation results verify that the proposed technique is able to form high-resolution ISAR images from point-scatterer models of realistic vehicular settings with different viewpoints. The proposed ISAR imaging technique can be used for various vehicular applications, e.g., traffic condition analyses or advanced collision warning systems.
[ { "version": "v1", "created": "Thu, 4 Aug 2022 05:51:21 GMT" } ]
2022-11-23T00:00:00
[ [ "Han", "Geonho", "" ], [ "Choi", "Junil", "" ], [ "Heath", "Robert W.", "Jr" ] ]
new_dataset
0.997144
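The delay-estimation step in the abstract above relies on a defining property of Golay complementary pairs: the sidelobes of their individual autocorrelations cancel when summed, leaving a single sharp peak. The sketch below demonstrates that property on a simplified echo; it does not reproduce the 802.11ad preamble structure, and the sequence length, delay, and noise level are assumptions.

```python
# Simplified Golay-correlation delay estimation (not the 802.11ad preamble format).
import numpy as np

def golay_pair(m):
    """Standard recursive construction of a length-2^m complementary pair."""
    a, b = np.array([1.0]), np.array([1.0])
    for _ in range(m):
        a, b = np.concatenate([a, b]), np.concatenate([a, -b])
    return a, b

a, b = golay_pair(7)                       # length-128 complementary pair
true_delay = 37
echo_a = np.zeros(512); echo_a[true_delay:true_delay + 128] = a
echo_b = np.zeros(512); echo_b[true_delay:true_delay + 128] = b
echo_a += 0.3 * np.random.default_rng(1).standard_normal(512)  # additive noise
echo_b += 0.3 * np.random.default_rng(2).standard_normal(512)

# Summing the two cross-correlations cancels the sidelobes of each sequence.
corr = (np.correlate(echo_a, a, mode="valid")
        + np.correlate(echo_b, b, mode="valid"))
print(int(np.argmax(corr)))                # 37, i.e. the round-trip delay in samples
```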
2209.12118
Ali Athar
Ali Athar, Jonathon Luiten, Paul Voigtlaender, Tarasha Khurana, Achal Dave, Bastian Leibe, Deva Ramanan
BURST: A Benchmark for Unifying Object Recognition, Segmentation and Tracking in Video
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Multiple existing benchmarks involve tracking and segmenting objects in video, e.g., Video Object Segmentation (VOS) and Multi-Object Tracking and Segmentation (MOTS), but there is little interaction between them due to the use of disparate benchmark datasets and metrics (e.g., J&F, mAP, sMOTSA). As a result, published works usually target a particular benchmark and are not easily comparable to one another. We believe that the development of generalized methods that can tackle multiple tasks requires greater cohesion among these research sub-communities. In this paper, we aim to facilitate this by proposing BURST, a dataset which contains thousands of diverse videos with high-quality object masks, and an associated benchmark with six tasks involving object tracking and segmentation in video. All tasks are evaluated using the same data and comparable metrics, which enables researchers to consider them in unison and, hence, more effectively pool knowledge from different methods across different tasks. Additionally, we demonstrate several baselines for all tasks and show that approaches for one task can be applied to another with a quantifiable and explainable performance difference. Dataset annotations and evaluation code are available at: https://github.com/Ali2500/BURST-benchmark.
[ { "version": "v1", "created": "Sun, 25 Sep 2022 01:27:35 GMT" }, { "version": "v2", "created": "Tue, 22 Nov 2022 17:18:39 GMT" } ]
2022-11-23T00:00:00
[ [ "Athar", "Ali", "" ], [ "Luiten", "Jonathon", "" ], [ "Voigtlaender", "Paul", "" ], [ "Khurana", "Tarasha", "" ], [ "Dave", "Achal", "" ], [ "Leibe", "Bastian", "" ], [ "Ramanan", "Deva", "" ] ]
new_dataset
0.996778