Dataset columns (name: type, observed range):
  id              string, length 9 to 10
  submitter       string, length 2 to 52
  authors         string, length 4 to 6.51k
  title           string, length 4 to 246
  comments        string, length 1 to 523
  journal-ref     string, length 4 to 345
  doi             string, length 11 to 120
  report-no       string, length 2 to 243
  categories      string, length 5 to 98
  license         string, 9 distinct values
  abstract        string, length 33 to 3.33k
  versions        list
  update_date     timestamp[s]
  authors_parsed  list
  prediction      string, 1 distinct value
  probability     float64, 0.95 to 1
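The records below follow this schema, one field value per line, in the column order given above. As a quick orientation aid, here is a minimal sketch of how such a dump could be loaded and filtered; the JSON Lines layout and the file name arxiv_metadata.jsonl are assumptions for illustration, not part of the original export.

```python
# Minimal sketch: load the dump and keep high-confidence "new_dataset" records.
# Assumes a hypothetical JSON Lines export (one record per line) named
# "arxiv_metadata.jsonl" with the fields listed in the schema above.
import json

records = []
with open("arxiv_metadata.jsonl", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        # "prediction" has a single class here; "probability" is its confidence.
        if record["prediction"] == "new_dataset" and record["probability"] >= 0.99:
            records.append((record["id"], record["title"]))

for arxiv_id, title in records:
    print(arxiv_id, title)
```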
2307.11341
Jiangli Shao
Boshen Shi, Yongqing Wang, Fangda Guo, Jiangli Shao, Huawei Shen and Xueqi Cheng
OpenGDA: Graph Domain Adaptation Benchmark for Cross-network Learning
Under Review
null
null
null
cs.AI cs.DL cs.SI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Graph domain adaptation models are widely adopted in cross-network learning tasks, with the aim of transferring labeling or structural knowledge. Currently, there are two main limitations in evaluating graph domain adaptation models. On the one hand, they are primarily tested on the specific cross-network node classification task, leaving tasks at the edge level and graph level largely under-explored. On the other hand, they are primarily tested in limited scenarios, such as social networks or citation networks, leaving their capability in richer scenarios unvalidated. As comprehensively assessing models could enhance their practicality in real-world applications, we propose a benchmark, known as OpenGDA. It provides abundant pre-processed and unified datasets for different types of tasks (node, edge, graph). They originate from diverse scenarios, covering web information systems, urban systems and natural systems. Furthermore, it integrates state-of-the-art models with standardized and end-to-end pipelines. Overall, OpenGDA provides a user-friendly, scalable and reproducible benchmark for evaluating graph domain adaptation models. The benchmark experiments highlight the challenges of applying GDA models to real-world applications with consistently good performance, and potentially provide insights for future research. As an emerging project, OpenGDA will be regularly updated with new datasets and models. It can be accessed at https://github.com/Skyorca/OpenGDA.
[ { "version": "v1", "created": "Fri, 21 Jul 2023 04:11:43 GMT" } ]
2023-07-24T00:00:00
[ [ "Shi", "Boshen", "" ], [ "Wang", "Yongqing", "" ], [ "Guo", "Fangda", "" ], [ "Shao", "Jiangli", "" ], [ "Shen", "Huawei", "" ], [ "Cheng", "Xueqi", "" ] ]
new_dataset
0.994201
2307.11344
Ipsita Mohanty
Ipsita Mohanty
DEFTri: A Few-Shot Label Fused Contextual Representation Learning For Product Defect Triage in e-Commerce
In Proceedings of the Fifth Workshop on e-Commerce and NLP (ECNLP 5), 2022, pages 1-7
mohanty-2022-deftri, Association for Computational Linguistics
null
null
cs.SE cs.CL
http://creativecommons.org/licenses/by/4.0/
Defect Triage is a time-sensitive and critical process in a large-scale agile software development lifecycle for e-commerce. Inefficiencies arising from human and process dependencies in this domain have motivated research into automated approaches using machine learning to accurately assign defects to qualified teams. This work proposes a novel framework for automated defect triage (DEFTri) using fine-tuned state-of-the-art pre-trained BERT on label-fused text embeddings to improve contextual representations from human-generated product defects. For our multi-label text classification defect triage task, we also introduce a Walmart proprietary dataset of product defects using weak supervision and adversarial learning, in a few-shot setting.
[ { "version": "v1", "created": "Fri, 21 Jul 2023 04:22:43 GMT" } ]
2023-07-24T00:00:00
[ [ "Mohanty", "Ipsita", "" ] ]
new_dataset
0.994528
2307.11360
Daria Reshetova
Daria Reshetova, Guanhang Wu, Marcel Puyat, Chunhui Gu, Huizhong Chen
ParGANDA: Making Synthetic Pedestrians A Reality For Object Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object detection is a key technique for a number of computer vision applications, but it often requires large amounts of annotated data to achieve decent results. Moreover, for pedestrian detection specifically, the collected data might contain personally identifiable information (PII), which is highly restricted in many countries. This label-intensive and privacy-sensitive task has recently led to an increasing interest in training detection models using synthetically generated pedestrian datasets collected with a photo-realistic video game engine. The engine is able to generate unlimited amounts of data with precise and consistent annotations, which offers potential for significant gains in real-world applications. However, the use of synthetic data for training introduces a synthetic-to-real domain shift that degrades the final performance. To close the gap between the real and synthetic data, we propose to use a Generative Adversarial Network (GAN), which performs parameterized unpaired image-to-image translation to generate more realistic images. The key benefit of using the GAN is its intrinsic preference for low-level changes over geometric ones, which means annotations of a given synthetic image remain accurate even after domain translation is performed, thus eliminating the need for labeling real data. We extensively experimented with the proposed method, using the MOTSynth dataset for training and the MOT17 and MOT20 detection datasets for testing, with experimental results demonstrating the effectiveness of this method. Our approach not only produces visually plausible samples but also does not require any labels of the real domain, making it applicable to a variety of downstream tasks.
[ { "version": "v1", "created": "Fri, 21 Jul 2023 05:26:32 GMT" } ]
2023-07-24T00:00:00
[ [ "Reshetova", "Daria", "" ], [ "Wu", "Guanhang", "" ], [ "Puyat", "Marcel", "" ], [ "Gu", "Chunhui", "" ], [ "Chen", "Huizhong", "" ] ]
new_dataset
0.999505
2307.11371
Amit Kumar
Chiranjib Bhattacharyya and Ravindran Kannan and Amit Kumar
Random Separating Hyperplane Theorem and Learning Polytopes
null
null
null
null
cs.LG cs.CG
http://creativecommons.org/licenses/by/4.0/
The Separating Hyperplane theorem is a fundamental result in Convex Geometry with myriad applications. Our first result, the Random Separating Hyperplane Theorem (RSH), is a strengthening of this for polytopes. RSH asserts that if the distance between $a$ and a polytope $K$ with $k$ vertices and unit diameter in $\Re^d$ is at least $\delta$, where $\delta$ is a fixed constant in $(0,1)$, then a randomly chosen hyperplane separates $a$ and $K$ with probability at least $1/\mathrm{poly}(k)$ and margin at least $\Omega \left(\delta/\sqrt{d} \right)$. An immediate consequence of our result is the first near-optimal bound on the error increase in the reduction from a Separation oracle to an Optimization oracle over a polytope. RSH has algorithmic applications in learning polytopes. We consider a fundamental problem, denoted the ``Hausdorff problem'', of learning a unit-diameter polytope $K$ within Hausdorff distance $\delta$, given an optimization oracle for $K$. Using RSH, we show that with polynomially many random queries to the optimization oracle, $K$ can be approximated within error $O(\delta)$. To our knowledge, this is the first provable algorithm for the Hausdorff problem. Building on this result, we show that if the vertices of $K$ are well-separated, then an optimization oracle can be used to generate a list of points, each within Hausdorff distance $O(\delta)$ of $K$, with the property that the list contains a point close to each vertex of $K$. Further, we show how to prune this list to generate a (unique) approximation to each vertex of the polytope. We prove that in many latent variable settings, e.g., topic modeling and LDA, optimization oracles do exist provided we project to a suitable SVD subspace. Thus, our work yields the first efficient algorithm for finding approximations to the vertices of the latent polytope under the well-separatedness assumption.
[ { "version": "v1", "created": "Fri, 21 Jul 2023 06:03:43 GMT" } ]
2023-07-24T00:00:00
[ [ "Bhattacharyya", "Chiranjib", "" ], [ "Kannan", "Ravindran", "" ], [ "Kumar", "Amit", "" ] ]
new_dataset
0.99412
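For readability, the quantitative core of the RSH statement in the abstract above can be restated as a single implication, using exactly the notation of the abstract:

```latex
% RSH, restated from the abstract (unit-diameter polytope K with k vertices in R^d):
\[
\mathrm{dist}(a, K) \ge \delta
\;\Longrightarrow\;
\Pr_{H}\!\left[\, H \text{ separates } a \text{ and } K
\text{ with margin } \Omega\!\big(\delta/\sqrt{d}\,\big) \right]
\ge \frac{1}{\mathrm{poly}(k)},
\qquad \delta \in (0,1).
\]
```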
2307.11386
Yunhao Ge
Yunhao Ge, Yuecheng Li, Shuo Ni, Jiaping Zhao, Ming-Hsuan Yang, Laurent Itti
CLR: Channel-wise Lightweight Reprogramming for Continual Learning
ICCV 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Continual learning aims to emulate the human ability to continually accumulate knowledge over sequential tasks. The main challenge is to maintain performance on previously learned tasks after learning new tasks, i.e., to avoid catastrophic forgetting. We propose a Channel-wise Lightweight Reprogramming (CLR) approach that helps convolutional neural networks (CNNs) overcome catastrophic forgetting during continual learning. We show that a CNN model trained on an old task (or self-supervised proxy task) could be ``reprogrammed'' to solve a new task by using our proposed lightweight (very cheap) reprogramming parameter. With the help of CLR, we have a better stability-plasticity trade-off to solve continual learning problems: To maintain stability and retain previous task ability, we use a common task-agnostic immutable part as the shared ``anchor'' parameter set. We then add task-specific lightweight reprogramming parameters to reinterpret the outputs of the immutable parts, to enable plasticity and integrate new knowledge. To learn sequential tasks, we only train the lightweight reprogramming parameters to learn each new task. Reprogramming parameters are task-specific and exclusive to each task, which makes our method immune to catastrophic forgetting. To minimize the parameter requirement of reprogramming to learn new tasks, we make reprogramming lightweight by only adjusting essential kernels and learning channel-wise linear mappings from anchor parameters to task-specific domain knowledge. We show that, for general CNNs, the CLR parameter increase is less than 0.6\% for any new task. Our method outperforms 13 state-of-the-art continual learning baselines on a new challenging sequence of 53 image classification datasets. Code and data are available at https://github.com/gyhandy/Channel-wise-Lightweight-Reprogramming
[ { "version": "v1", "created": "Fri, 21 Jul 2023 06:56:21 GMT" } ]
2023-07-24T00:00:00
[ [ "Ge", "Yunhao", "" ], [ "Li", "Yuecheng", "" ], [ "Ni", "Shuo", "" ], [ "Zhao", "Jiaping", "" ], [ "Yang", "Ming-Hsuan", "" ], [ "Itti", "Laurent", "" ] ]
new_dataset
0.987894
2307.11454
Ravil Mussabayev
Ravil Mussabayev
Dissecting Code Vulnerabilities: Insights from C++ and Java Vulnerability Analysis with ReVeal Model
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-sa/4.0/
This study presents an analysis conducted on a real-world dataset of Java vulnerability-fixing commits. The dataset consists of commits with varying numbers of modified methods, leading to a natural partitioning based on the number of changed functions. The research aims to address several key questions. Firstly, the study investigates the optimal parameter selection for ReVeal, a state-of-the-art model, in order to achieve its best performance. Secondly, it explores the contributions of different parts of the Java dataset towards vulnerability detection. Lastly, the study evaluates the model's performance in separating close-to-vulnerable methods (vulnerable methods and their fixed versions) from randomly selected safe code, as well as the finer separation of vulnerable methods from their fixed versions within the set of close-to-vulnerable methods. The research employs a series of experiments to answer these questions and derive meaningful insights.
[ { "version": "v1", "created": "Fri, 21 Jul 2023 09:35:29 GMT" } ]
2023-07-24T00:00:00
[ [ "Mussabayev", "Ravil", "" ] ]
new_dataset
0.994577
2307.11519
Ponkoj Shill
Fariha Tahosin Boishakhi, Ponkoj Chandra Shill, Md. Golam Rabiul Alam
Multi-modal Hate Speech Detection using Machine Learning
5 pages, 2 figures, conference
null
10.1109/BigData52589.2021.9671955
null
cs.AI cs.CL cs.CV cs.LG cs.SD eess.AS
http://creativecommons.org/publicdomain/zero/1.0/
With the continuous growth of internet users and media content, it is very hard to track down hateful speech in audio and video. Converting video or audio into text does not detect hate speech accurately, as humans sometimes use hateful words humorously or in a pleasant sense, and also use different voice tones or show different actions in the video. State-of-the-art hate speech detection models were mostly developed on a single modality. In this research, a combined multimodal approach is proposed to detect hate speech from video content by extracting feature images from the video, feature values from the audio, and text, and applying machine learning and natural language processing.
[ { "version": "v1", "created": "Thu, 15 Jun 2023 06:46:52 GMT" } ]
2023-07-24T00:00:00
[ [ "Boishakhi", "Fariha Tahosin", "" ], [ "Shill", "Ponkoj Chandra", "" ], [ "Alam", "Md. Golam Rabiul", "" ] ]
new_dataset
0.998415
2307.11543
Alberto Pretto
Ivano Donadi and Alberto Pretto
KVN: Keypoints Voting Network with Differentiable RANSAC for Stereo Pose Estimation
Submitted to IEEE Robotics and Automation Letters
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Object pose estimation is a fundamental computer vision task exploited in several robotics and augmented reality applications. Many established approaches rely on predicting 2D-3D keypoint correspondences using RANSAC (Random Sample Consensus) and estimating the object pose using the PnP (Perspective-n-Point) algorithm. Since RANSAC is non-differentiable, correspondences cannot be directly learned in an end-to-end fashion. In this paper, we address the stereo image-based object pose estimation problem by (i) introducing a differentiable RANSAC layer into a well-known monocular pose estimation network; (ii) exploiting an uncertainty-driven multi-view PnP solver which can fuse information from multiple views. We evaluate our approach on a challenging public stereo object pose estimation dataset, yielding state-of-the-art results against other recent approaches. Furthermore, in our ablation study, we show that the differentiable RANSAC layer plays a significant role in the accuracy of the proposed method. We release with this paper the open-source implementation of our method.
[ { "version": "v1", "created": "Fri, 21 Jul 2023 12:43:07 GMT" } ]
2023-07-24T00:00:00
[ [ "Donadi", "Ivano", "" ], [ "Pretto", "Alberto", "" ] ]
new_dataset
0.993038
2307.11554
Jan-Gerrit Habekost
Jan-Gerrit Habekost, Erik Strahl, Philipp Allgeuer, Matthias Kerzel, Stefan Wermter
CycleIK: Neuro-inspired Inverse Kinematics
Accepted at ICANN 2023 (32nd International Conference on Artificial Neural Networks)
null
null
null
cs.RO cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The paper introduces CycleIK, a neuro-robotic approach that wraps two novel neuro-inspired methods for the inverse kinematics (IK) task: a Generative Adversarial Network (GAN) and a Multi-Layer Perceptron architecture. These methods can be used in a standalone fashion, but we also show how embedding them into a hybrid neuro-genetic IK pipeline allows for further optimization via sequential least-squares programming (SLSQP) or a genetic algorithm (GA). The models are trained and tested on dense datasets that were collected from random robot configurations of the new Neuro-Inspired COLlaborator (NICOL), a semi-humanoid robot with two redundant 8-DoF manipulators. We utilize the weighted multi-objective function from the state-of-the-art BioIK method to support the training process and our hybrid neuro-genetic architecture. We show that the neural models can compete with state-of-the-art IK approaches, which allows for deployment directly to robotic hardware. Additionally, it is shown that the incorporation of the genetic algorithm improves the precision while simultaneously reducing the overall runtime.
[ { "version": "v1", "created": "Fri, 21 Jul 2023 13:03:27 GMT" } ]
2023-07-24T00:00:00
[ [ "Habekost", "Jan-Gerrit", "" ], [ "Strahl", "Erik", "" ], [ "Allgeuer", "Philipp", "" ], [ "Kerzel", "Matthias", "" ], [ "Wermter", "Stefan", "" ] ]
new_dataset
0.975761
2307.11636
Shuyang Sun
Runjia Li, Shuyang Sun, Mohamed Elhoseiny, Philip Torr
OxfordTVG-HIC: Can Machine Make Humorous Captions from Images?
Accepted by ICCV 2023
null
null
null
cs.CV cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
This paper presents OxfordTVG-HIC (Humorous Image Captions), a large-scale dataset for humour generation and understanding. Humour is an abstract, subjective, and context-dependent cognitive construct involving several cognitive factors, making it a challenging task to generate and interpret. Hence, humour generation and understanding can serve as a new task for evaluating the ability of deep-learning methods to process abstract and subjective information. Due to the scarcity of data, humour-related generation tasks such as captioning remain under-explored. To address this gap, OxfordTVG-HIC offers approximately 2.9M image-text pairs with humour scores to train a generalizable humour captioning model. Unlike existing captioning datasets, OxfordTVG-HIC features a wide range of emotional and semantic diversity, resulting in out-of-context examples that are particularly conducive to generating humour. Moreover, OxfordTVG-HIC is curated to be devoid of offensive content. We also show how OxfordTVG-HIC can be leveraged for evaluating the humour of a generated text. Through explainability analysis of the trained models, we identify the visual and linguistic cues influential for evoking humour prediction (and generation). We observe qualitatively that these cues are aligned with the benign violation theory of humour in cognitive psychology.
[ { "version": "v1", "created": "Fri, 21 Jul 2023 14:58:44 GMT" } ]
2023-07-24T00:00:00
[ [ "Li", "Runjia", "" ], [ "Sun", "Shuyang", "" ], [ "Elhoseiny", "Mohamed", "" ], [ "Torr", "Philip", "" ] ]
new_dataset
0.999702
2307.11662
Mariam Mahmoud
Mariam Ayman, Youssef El-harty, Ahmed Rashed, Ahmed Fathy, Ahmed Abdullah, Omar Wassim, Walid Gomaa
BlockCampus: A Blockchain-Based DApp for Enhancing Student Engagement and Reward Mechanisms in an Academic Community for E-JUST University
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by-nc-sa/4.0/
In today's digital age, online communities have become an integral part of our lives, fostering collaboration, knowledge sharing, and community engagement. Higher education institutions, in particular, can greatly benefit from dedicated platforms that facilitate academic discussions and provide incentives for active participation. This research paper presents a comprehensive study and implementation of a decentralized application (DApp) leveraging blockchain technology to address these needs specifically for E-JUST (Egypt-Japan University of Science and Technology) students and academic staff.
[ { "version": "v1", "created": "Fri, 7 Jul 2023 19:12:19 GMT" } ]
2023-07-24T00:00:00
[ [ "Ayman", "Mariam", "" ], [ "El-harty", "Youssef", "" ], [ "Rashed", "Ahmed", "" ], [ "Fathy", "Ahmed", "" ], [ "Abdullah", "Ahmed", "" ], [ "Wassim", "Omar", "" ], [ "Gomaa", "Walid", "" ] ]
new_dataset
0.999353
2307.11709
Aakash Bansal
Aakash Bansal, Siyuan Jiang, Sakib Haque, and Collin McMillan
Statement-based Memory for Neural Source Code Summarization
10 pages, 2 figures
null
null
null
cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Source code summarization is the task of writing natural language descriptions of source code behavior. Code summarization underpins software documentation for programmers. Short descriptions of code help programmers understand the program quickly without having to read the code itself. Lately, neural source code summarization has emerged as the frontier of research into automated code summarization techniques. By far the most popular targets for summarization are program subroutines. The idea, in a nutshell, is to train an encoder-decoder neural architecture using large sets of examples of subroutines extracted from code repositories. The encoder represents the code and the decoder represents the summary. However, most current approaches attempt to treat the subroutine as a single unit, for example by taking the entire subroutine as input to a Transformer- or RNN-based encoder. But code behavior tends to depend on the flow from statement to statement. Normally, dynamic analysis may shed light on this flow, but dynamic analysis on hundreds of thousands of examples in large datasets is not practical. In this paper, we present a statement-based memory encoder that learns the important elements of flow during training, leading to a statement-based subroutine representation without the need for dynamic analysis. We implement our encoder for code summarization and demonstrate a significant improvement over the state of the art.
[ { "version": "v1", "created": "Fri, 21 Jul 2023 17:04:39 GMT" } ]
2023-07-24T00:00:00
[ [ "Bansal", "Aakash", "" ], [ "Jiang", "Siyuan", "" ], [ "Haque", "Sakib", "" ], [ "McMillan", "Collin", "" ] ]
new_dataset
0.963948
2307.11717
Mahmoud Ali
Mahmoud Ali and Lantao Liu
GP-Frontier for Local Mapless Navigation
7 pages, 7 figures, accepted at the 2023 IEEE International Conference on Robotics and Automation (ICRA 2023)
null
null
null
cs.RO
http://creativecommons.org/licenses/by/4.0/
We propose a new frontier concept called the Gaussian Process Frontier (GP-Frontier) that can be used to locally navigate a robot towards a goal without building a map. The GP-Frontier is built on the uncertainty assessment of an efficient variant of the sparse Gaussian Process. Based only on local range-sensing measurements, the GP-Frontier can be used for navigation in both known and unknown environments. The proposed method is validated through intensive evaluations, and the results show that the GP-Frontier can navigate the robot in a safe and persistent way, i.e., the robot moves in the most open space (thus reducing the risk of collision) without relying on a map or a path planner.
[ { "version": "v1", "created": "Fri, 21 Jul 2023 17:21:30 GMT" } ]
2023-07-24T00:00:00
[ [ "Ali", "Mahmoud", "" ], [ "Liu", "Lantao", "" ] ]
new_dataset
0.995353
2307.11719
Rita T. Sousa
Rita T. Sousa, Sara Silva, Catia Pesquita
Benchmark datasets for biomedical knowledge graphs with negative statements
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
Knowledge graphs represent facts about real-world entities. Most of these facts are defined as positive statements. Negative statements are scarce but highly relevant under the open-world assumption. Furthermore, they have been demonstrated to improve the performance of several applications, namely in the biomedical domain. However, no benchmark dataset supports the evaluation of methods that consider these negative statements. We present a collection of datasets for three relation prediction tasks - protein-protein interaction prediction, gene-disease association prediction and disease prediction - that aim at circumventing the difficulties in building benchmarks for knowledge graphs with negative statements. These datasets include data from two successful biomedical ontologies, the Gene Ontology and the Human Phenotype Ontology, enriched with negative statements. We also generate knowledge graph embeddings for each dataset with two popular path-based methods and evaluate the performance on each task. The results show that negative statements can improve the performance of knowledge graph embeddings.
[ { "version": "v1", "created": "Fri, 21 Jul 2023 17:25:21 GMT" } ]
2023-07-24T00:00:00
[ [ "Sousa", "Rita T.", "" ], [ "Silva", "Sara", "" ], [ "Pesquita", "Catia", "" ] ]
new_dataset
0.997672
1912.08166
Matthew Walmer
Anneliese Braunegg, Amartya Chakraborty, Michael Krumdick, Nicole Lape, Sara Leary, Keith Manville, Elizabeth Merkhofer, Laura Strickhart, Matthew Walmer
APRICOT: A Dataset of Physical Adversarial Attacks on Object Detection
23 pages, 14 figures, 3 tables. Updated version as accepted to ECCV 2020
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Physical adversarial attacks threaten to fool object detection systems, but reproducible research on the real-world effectiveness of physical patches and how to defend against them requires a publicly available benchmark dataset. We present APRICOT, a collection of over 1,000 annotated photographs of printed adversarial patches in public locations. The patches target several object categories for three COCO-trained detection models, and the photos represent natural variation in position, distance, lighting conditions, and viewing angle. Our analysis suggests that maintaining adversarial robustness in uncontrolled settings is highly challenging, but it is still possible to produce targeted detections under white-box and sometimes black-box settings. We establish baselines for defending against adversarial patches through several methods, including a detector supervised with synthetic data and unsupervised methods such as kernel density estimation, Bayesian uncertainty, and reconstruction error. Our results suggest that adversarial patches can be effectively flagged, both in a high-knowledge, attack-specific scenario, and in an unsupervised setting where patches are detected as anomalies in natural images. This dataset and the described experiments provide a benchmark for future research on the effectiveness of and defenses against physical adversarial objects in the wild.
[ { "version": "v1", "created": "Tue, 17 Dec 2019 18:08:01 GMT" }, { "version": "v2", "created": "Thu, 20 Aug 2020 21:37:23 GMT" } ]
2023-07-21T00:00:00
[ [ "Braunegg", "Anneliese", "" ], [ "Chakraborty", "Amartya", "" ], [ "Krumdick", "Michael", "" ], [ "Lape", "Nicole", "" ], [ "Leary", "Sara", "" ], [ "Manville", "Keith", "" ], [ "Merkhofer", "Elizabeth", "" ], [ "Strickhart", "Laura", "" ], [ "Walmer", "Matthew", "" ] ]
new_dataset
0.999864
2105.06808
Sylwia Majchrowska Ms.
Sylwia Majchrowska, Agnieszka Mikołajczyk, Maria Ferlin, Zuzanna Klawikowska, Marta A. Plantykow, Arkadiusz Kwasigroch, Karol Majek
Waste detection in Pomerania: non-profit project for detecting waste in environment
Litter detection, Waste detection, Object detection
Waste Management, Volume 138, 1 February 2022, Pages 274-284
10.1016/j.wasman.2021.12.001
null
cs.CV eess.IV
http://creativecommons.org/licenses/by/4.0/
Waste pollution is one of the most significant environmental issues in the modern world. The importance of recycling is well known, whether for economic or ecological reasons, and the industry demands high efficiency. Our team conducted comprehensive research on the use of Artificial Intelligence in waste detection and classification to fight the world's waste pollution problem. As a result, an open-source framework that enables the detection and classification of litter was developed. The final pipeline consists of two neural networks: one that detects litter and a second responsible for litter classification. Waste is classified into seven categories: bio, glass, metal and plastic, non-recyclable, other, paper and unknown. Our approach achieves up to 70% average precision in waste detection and around 75% classification accuracy on the test dataset. The code used in the studies is publicly available online.
[ { "version": "v1", "created": "Wed, 12 May 2021 09:33:22 GMT" } ]
2023-07-21T00:00:00
[ [ "Majchrowska", "Sylwia", "" ], [ "Mikołajczyk", "Agnieszka", "" ], [ "Ferlin", "Maria", "" ], [ "Klawikowska", "Zuzanna", "" ], [ "Plantykow", "Marta A.", "" ], [ "Kwasigroch", "Arkadiusz", "" ], [ "Majek", "Karol", "" ] ]
new_dataset
0.995535
2204.13730
Ziyaur Rahman
Ziyaur Rahman, S. M. Zafaruddin, V. K. Chaubey
Direct Air-to-Underwater Optical Wireless Communication: Statistical Characterization and Outage Performance
This work has been submitted to the IEEE for possible publication
IEEE Transactions on Vehicular Technology, Vol. 72, No. 2, Feb 2023
10.1109/TVT.2022.3211186
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In general, a buoy relay is used to connect underwater communication to the terrestrial network over a radio or optical wireless communication (OWC) link, but relay deployment may pose security and deployment issues. This paper investigates the feasibility of direct air-to-underwater (A2UW) communication from an over-the-sea OWC system to an underwater submarine without deploying a relaying node. We analyze the statistical performance of the direct transmission over the combined channel fading effects of atmospheric turbulence, random fog, the air-to-water interface, oceanic turbulence, and pointing errors. We develop novel analytical expressions for the probability density function (PDF) and cumulative distribution function (CDF) of the resultant signal-to-noise ratio (SNR) in terms of bivariate Meijer-G and Fox-H functions. We use the derived statistical results to analyze the system performance by providing exact and asymptotic results for the outage probability in terms of system parameters. We use computer simulations to demonstrate the performance of direct A2UW transmission compared to the relay-assisted system.
[ { "version": "v1", "created": "Thu, 28 Apr 2022 18:21:37 GMT" } ]
2023-07-21T00:00:00
[ [ "Rahman", "Ziyaur", "" ], [ "Zafaruddin", "S. M.", "" ], [ "Chaubey", "V. K.", "" ] ]
new_dataset
0.995451
2206.02248
Ahmet Kurt
Ahmet Kurt, Kemal Akkaya, Sabri Yilmaz, Suat Mercan, Omer Shlomovits, Enes Erdin
LNGate$^2$: Secure Bidirectional IoT Micro-payments using Bitcoin's Lightning Network and Threshold Cryptography
Revised again based on anonymous reviewers' comments. Journal extension of https://doi.org/10.1145/3448300.3467833. arXiv admin note: text overlap with arXiv:2105.08902
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
Bitcoin has emerged as a revolutionary payment system with its decentralized ledger concept; however, it has significant problems such as high transaction fees and low throughput. The Lightning Network (LN), which was introduced much later, solves most of these problems with an innovative concept called off-chain payments. With this advancement, Bitcoin has become an attractive venue to perform micro-payments, which can also be adopted in many IoT applications (e.g., toll payments). Nevertheless, it is not feasible to host LN and Bitcoin on IoT devices due to the storage, memory, and processing restrictions. Therefore, in this paper, we propose a secure and efficient protocol that enables an IoT device to use LN's functions through an untrusted gateway node. Through this gateway, which hosts the LN and Bitcoin nodes, the IoT device can open & close LN channels and send & receive LN payments. This delegation approach is powered by a threshold cryptography based scheme that requires the IoT device and the LN gateway to jointly perform all LN operations. Specifically, we propose thresholdizing LN's Bitcoin public and private keys as well as its public and private keys for the new channel states (i.e., commitment points). We prove with a game-theoretical security analysis that the IoT device is secure against collusion attacks. We implemented the proposed protocol by changing LN's source code and thoroughly evaluated its performance using several Raspberry Pis. Our evaluation results show that the protocol is fast, does not add extra cost overhead, can run on low-data-rate wireless networks, is scalable, and has negligible energy consumption overhead. To the best of our knowledge, this is the first work to implement threshold cryptography in LN.
[ { "version": "v1", "created": "Sun, 5 Jun 2022 19:50:11 GMT" }, { "version": "v2", "created": "Tue, 25 Apr 2023 00:16:58 GMT" }, { "version": "v3", "created": "Wed, 19 Jul 2023 18:30:53 GMT" } ]
2023-07-21T00:00:00
[ [ "Kurt", "Ahmet", "" ], [ "Akkaya", "Kemal", "" ], [ "Yilmaz", "Sabri", "" ], [ "Mercan", "Suat", "" ], [ "Shlomovits", "Omer", "" ], [ "Erdin", "Enes", "" ] ]
new_dataset
0.964677
2206.08309
Clément Chadebec
Clément Chadebec and Louis J. Vincent and Stéphanie Allassonnière
Pythae: Unifying Generative Autoencoders in Python -- A Benchmarking Use Case
Accepted to NeurIPS 2022
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, deep generative models have attracted increasing interest due to their capacity to model complex distributions. Among these models, variational autoencoders have gained popularity, as they have proven to be both computationally efficient and capable of yielding impressive results in multiple fields. Following this breakthrough, extensive research has been done to improve upon the original model, resulting in a variety of different VAE models in response to different tasks. In this paper we present Pythae, a versatile open-source Python library providing both a unified implementation and a dedicated framework allowing straightforward, reproducible and reliable use of generative autoencoder models. We then propose to use this library to perform a case-study benchmark where we present and compare 19 generative autoencoder models representative of some of the main improvements on downstream tasks such as image reconstruction, generation, classification, clustering and interpolation. The open-source library can be found at https://github.com/clementchadebec/benchmark_VAE.
[ { "version": "v1", "created": "Thu, 16 Jun 2022 17:11:41 GMT" }, { "version": "v2", "created": "Thu, 20 Jul 2023 05:32:00 GMT" } ]
2023-07-21T00:00:00
[ [ "Chadebec", "Clément", "" ], [ "Vincent", "Louis J.", "" ], [ "Allassonnière", "Stéphanie", "" ] ]
new_dataset
0.998977
2206.10552
Weixuan Sun
Weixuan Sun, Zhen Qin, Hui Deng, Jianyuan Wang, Yi Zhang, Kaihao Zhang, Nick Barnes, Stan Birchfield, Lingpeng Kong, Yiran Zhong
Vicinity Vision Transformer
code: https://github.com/OpenNLPLab/Vicinity-Vision-Transformer
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Vision transformers have shown great success on numerous computer vision tasks. However, their central component, softmax attention, prohibits vision transformers from scaling up to high-resolution images, because both its computational complexity and its memory footprint are quadratic. Although linear attention was introduced in natural language processing (NLP) to mitigate a similar issue, directly applying existing linear attention to vision transformers may not lead to satisfactory results. We investigate this problem and find that computer vision tasks focus more on local information than NLP tasks do. Based on this observation, we present Vicinity Attention, which introduces a locality bias to vision transformers with linear complexity. Specifically, for each image patch, we adjust its attention weight based on its 2D Manhattan distance to its neighbouring patches. In this case, the neighbouring patches receive stronger attention than far-away patches. Moreover, since our Vicinity Attention requires the token length to be much larger than the feature dimension to show its efficiency advantages, we further propose a new Vicinity Vision Transformer (VVT) structure to reduce the feature dimension without degrading accuracy. We perform extensive experiments on the CIFAR100, ImageNet1K, and ADE20K datasets to validate the effectiveness of our method. Our method has a slower growth rate of GFLOPs than previous transformer-based and convolution-based networks as the input resolution increases. In particular, our approach achieves state-of-the-art image classification accuracy with 50% fewer parameters than previous methods.
[ { "version": "v1", "created": "Tue, 21 Jun 2022 17:33:53 GMT" }, { "version": "v2", "created": "Thu, 20 Jul 2023 08:57:20 GMT" } ]
2023-07-21T00:00:00
[ [ "Sun", "Weixuan", "" ], [ "Qin", "Zhen", "" ], [ "Deng", "Hui", "" ], [ "Wang", "Jianyuan", "" ], [ "Zhang", "Yi", "" ], [ "Zhang", "Kaihao", "" ], [ "Barnes", "Nick", "" ], [ "Birchfield", "Stan", "" ], [ "Kong", "Lingpeng", "" ], [ "Zhong", "Yiran", "" ] ]
new_dataset
0.9993
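As a toy illustration of the Manhattan-distance locality bias described in the Vicinity Vision Transformer abstract above (arXiv 2206.10552), the following sketch re-weights standard softmax attention by a distance-based decay. The multiplicative-decay form and the decay constant are our own assumptions for illustration; the actual VVT uses a linear-attention formulation.

```python
# Toy sketch of a vicinity (locality) bias on attention weights, assuming a
# simple multiplicative decay with 2D Manhattan distance between patches.
# Illustrative only; not the paper's linear-attention implementation.
import numpy as np

def manhattan_bias(grid_h, grid_w, decay=0.1):
    # Patch coordinates on the 2D grid, flattened to token order.
    ys, xs = np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij")
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1)          # (N, 2)
    dist = np.abs(coords[:, None, :] - coords[None, :, :]).sum(-1)  # (N, N)
    return np.exp(-decay * dist)  # closer patches get weights nearer 1

def vicinity_attention(q, k, v, grid_h, grid_w):
    scores = q @ k.T / np.sqrt(q.shape[-1])
    attn = np.exp(scores - scores.max(-1, keepdims=True))  # unnormalized softmax
    attn *= manhattan_bias(grid_h, grid_w)                  # re-weight by locality
    attn /= attn.sum(-1, keepdims=True)                     # re-normalize rows
    return attn @ v

# Example: a 4x4 patch grid with 8-dimensional features.
rng = np.random.default_rng(0)
q = rng.normal(size=(16, 8))
k = rng.normal(size=(16, 8))
v = rng.normal(size=(16, 8))
out = vicinity_attention(q, k, v, 4, 4)  # shape (16, 8)
```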
2208.06501
Zifeng Ding
Zifeng Ding, Zongyue Li, Ruoxia Qi, Jingpei Wu, Bailan He, Yunpu Ma, Zhao Meng, Shuo Chen, Ruotong Liao, Zhen Han, Volker Tresp
ForecastTKGQuestions: A Benchmark for Temporal Question Answering and Forecasting over Temporal Knowledge Graphs
Accepted to ISWC 2023
null
null
null
cs.AI cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Question answering over temporal knowledge graphs (TKGQA) has recently attracted increasing interest. TKGQA requires temporal reasoning techniques to extract the relevant information from temporal knowledge bases. The only existing TKGQA dataset, i.e., CronQuestions, consists of temporal questions based on facts from a fixed time period, where a temporal knowledge graph (TKG) spanning the same period can be fully used for answer inference, allowing TKGQA models to use even future knowledge to answer questions about past facts. In real-world scenarios, however, it is also common that, given the knowledge up to now, we wish TKGQA systems to answer questions about the future. As humans constantly seek plans for the future, building TKGQA systems that answer such forecasting questions is important. Nevertheless, this has remained unexplored in previous research. In this paper, we propose a novel task: forecasting question answering over temporal knowledge graphs. We also propose a large-scale TKGQA benchmark dataset, i.e., ForecastTKGQuestions, for this task. It includes three types of questions, i.e., entity prediction, yes-no, and fact reasoning questions. For every forecasting question in our dataset, QA models can only access the TKG information before the timestamp annotated in the given question for answer inference. We find that state-of-the-art TKGQA methods perform poorly on forecasting questions and are unable to answer yes-no questions and fact reasoning questions. To this end, we propose ForecastTKGQA, a TKGQA model that employs a TKG forecasting module for future inference, to answer all three types of questions. Experimental results show that ForecastTKGQA outperforms recent TKGQA methods on entity prediction questions and also shows great effectiveness in answering the other two types of questions.
[ { "version": "v1", "created": "Fri, 12 Aug 2022 21:02:35 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 15:05:49 GMT" } ]
2023-07-21T00:00:00
[ [ "Ding", "Zifeng", "" ], [ "Li", "Zongyue", "" ], [ "Qi", "Ruoxia", "" ], [ "Wu", "Jingpei", "" ], [ "He", "Bailan", "" ], [ "Ma", "Yunpu", "" ], [ "Meng", "Zhao", "" ], [ "Chen", "Shuo", "" ], [ "Liao", "Ruotong", "" ], [ "Han", "Zhen", "" ], [ "Tresp", "Volker", "" ] ]
new_dataset
0.999728
2211.05939
Ayal Taitler
Ayal Taitler, Michael Gimelfarb, Jihwan Jeong, Sriram Gopalakrishnan, Martin Mladenov, Xiaotian Liu, Scott Sanner
pyRDDLGym: From RDDL to Gym Environments
null
null
null
null
cs.AI
http://creativecommons.org/licenses/by/4.0/
We present pyRDDLGym, a Python framework for the auto-generation of OpenAI Gym environments from RDDL declarative descriptions. The discrete-time evolution of variables in RDDL is described by conditional probability functions, which fits naturally into the Gym step scheme. Furthermore, since RDDL is a lifted description, modifying and scaling up environments to support multiple entities and different configurations becomes trivial rather than a tedious, error-prone process. We hope that pyRDDLGym will serve as a breath of fresh air in the reinforcement learning community by enabling easy and rapid development of benchmarks due to the unique expressive power of RDDL. By providing explicit access to the model in the RDDL description, pyRDDLGym can also facilitate research on hybrid approaches for learning from interaction while leveraging model knowledge. We present the design and built-in examples of pyRDDLGym, as well as the additions made to the RDDL language that were incorporated into the framework.
[ { "version": "v1", "created": "Fri, 11 Nov 2022 00:58:16 GMT" }, { "version": "v2", "created": "Mon, 14 Nov 2022 19:55:56 GMT" }, { "version": "v3", "created": "Fri, 16 Dec 2022 23:43:52 GMT" }, { "version": "v4", "created": "Wed, 19 Jul 2023 14:40:45 GMT" } ]
2023-07-21T00:00:00
[ [ "Taitler", "Ayal", "" ], [ "Gimelfarb", "Michael", "" ], [ "Jeong", "Jihwan", "" ], [ "Gopalakrishnan", "Sriram", "" ], [ "Mladenov", "Martin", "" ], [ "Liu", "Xiaotian", "" ], [ "Sanner", "Scott", "" ] ]
new_dataset
0.999794
2212.04246
Yufei Xu
Yufei Xu, Jing Zhang, Qiming Zhang, Dacheng Tao
ViTPose++: Vision Transformer Foundation Model for Generic Body Pose Estimation
Extension of ViTPose paper
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we show the surprisingly good properties of plain vision transformers for body pose estimation from various aspects, namely simplicity in model structure, scalability in model size, flexibility in training paradigm, and transferability of knowledge between models, through a simple baseline model dubbed ViTPose. Specifically, ViTPose employs the plain and non-hierarchical vision transformer as an encoder to encode features and a lightweight decoder to decode body keypoints in either a top-down or a bottom-up manner. It can be scaled up from about 20M to 1B parameters by taking advantage of the scalable model capacity and high parallelism of the vision transformer, setting a new Pareto front for throughput and performance. Besides, ViTPose is very flexible regarding the attention type, input resolution, and pre-training and fine-tuning strategy. Based on the flexibility, a novel ViTPose+ model is proposed to deal with heterogeneous body keypoint categories in different types of body pose estimation tasks via knowledge factorization, i.e., adopting task-agnostic and task-specific feed-forward networks in the transformer. We also empirically demonstrate that the knowledge of large ViTPose models can be easily transferred to small ones via a simple knowledge token. Experimental results show that our ViTPose model outperforms representative methods on the challenging MS COCO Human Keypoint Detection benchmark at both top-down and bottom-up settings. Furthermore, our ViTPose+ model achieves state-of-the-art performance simultaneously on a series of body pose estimation tasks, including MS COCO, AI Challenger, OCHuman, MPII for human keypoint detection, COCO-Wholebody for whole-body keypoint detection, as well as AP-10K and APT-36K for animal keypoint detection, without sacrificing inference speed.
[ { "version": "v1", "created": "Wed, 7 Dec 2022 12:33:28 GMT" }, { "version": "v2", "created": "Wed, 12 Jul 2023 16:27:27 GMT" } ]
2023-07-21T00:00:00
[ [ "Xu", "Yufei", "" ], [ "Zhang", "Jing", "" ], [ "Zhang", "Qiming", "" ], [ "Tao", "Dacheng", "" ] ]
new_dataset
0.991342
2212.10338
David Naumann
Anindya Banerjee, Ramana Nagasamudram, David A. Naumann
Making Relational Hoare Logic Alignment Complete
v2: streamline treatment of hypotheses in definition of command equivalence; simplify normal form axioms. v3: add note referencing new paper ArXiv 2307.10045 which incorporates the results in this paper and more
null
null
null
cs.LO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In relational verification, judicious alignment of computational steps facilitates proof of relations between programs using simple relational assertions. Relational Hoare logics (RHL) provide compositional rules that embody various alignments. Seemingly more flexible alignments can be expressed in terms of product automata based on program transition relations. A RHL can be complete, in the ordinary sense, using a single degenerate alignment rule. The notion of alignment completeness was previously proposed as a more satisfactory measure, based on alignment automata, and some rules were shown to be alignment complete with respect to a few ad hoc forms of alignment automata. Using a rule of semantics-preserving rewrites based on Kleene algebra with tests, an RHL is shown to be alignment complete with respect to a very general class of alignment automata. Besides solving the open problem of general alignment completeness, this result bridges between human-friendly syntax-based reasoning and automata representations that facilitate automated verification.
[ { "version": "v1", "created": "Tue, 20 Dec 2022 15:24:57 GMT" }, { "version": "v2", "created": "Sat, 18 Mar 2023 02:50:22 GMT" }, { "version": "v3", "created": "Thu, 20 Jul 2023 02:29:44 GMT" } ]
2023-07-21T00:00:00
[ [ "Banerjee", "Anindya", "" ], [ "Nagasamudram", "Ramana", "" ], [ "Naumann", "David A.", "" ] ]
new_dataset
0.968061
2212.13792
Fernando Alonso-Fernandez
Fernando Alonso-Fernandez, Josef Bigun, Julian Fierrez, Naser Damer, Hugo Proença, Arun Ross
Periocular Biometrics: A Modality for Unconstrained Scenarios
Published in the IEEE Computer journal
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Periocular refers to the externally visible region of the face that surrounds the eye socket. This feature-rich area can provide accurate identification in unconstrained or uncooperative scenarios, where the iris or face modalities may not offer sufficient biometric cues due to factors such as partial occlusion or high subject-to-camera distance. The COVID-19 pandemic has further highlighted its importance, as the ocular region remained the only visible facial area even in controlled settings due to the widespread use of masks. This paper discusses the state of the art in periocular biometrics, presenting an overall framework encompassing its most significant research aspects, which include: (a) ocular definition, acquisition, and detection; (b) identity recognition, including combination with other modalities and use of various spectra; and (c) ocular soft-biometric analysis. Finally, we conclude by addressing current challenges and proposing future directions.
[ { "version": "v1", "created": "Wed, 28 Dec 2022 12:08:27 GMT" }, { "version": "v2", "created": "Thu, 20 Jul 2023 12:37:06 GMT" } ]
2023-07-21T00:00:00
[ [ "Alonso-Fernandez", "Fernando", "" ], [ "Bigun", "Josef", "" ], [ "Fierrez", "Julian", "" ], [ "Damer", "Naser", "" ], [ "Proença", "Hugo", "" ], [ "Ross", "Arun", "" ] ]
new_dataset
0.995466
2302.04450
Vishnuprasad Padinjaredath Suresh
Vishnuprasad Padinjaredath Suresh, Gianluca Nogara, Felipe Cardoso, Stefano Cresci, Silvia Giordano, and Luca Luceri
Tracking Fringe and Coordinated Activity on Twitter Leading Up To the US Capitol Attack
11 pages (including references), 8 figures, 1 table. Accepted at The 18th International AAAI Conference on Web and Social Media
Proceedings of the 18th International Conference on Web and Social Media, 2024
null
null
cs.SI cs.HC
http://creativecommons.org/licenses/by-nc-sa/4.0/
The aftermath of the 2020 US Presidential Election witnessed an unprecedented attack on the democratic values of the country through the violent insurrection at Capitol Hill on January 6th, 2021. The attack was fueled by the proliferation of conspiracy theories and misleading claims about the integrity of the election pushed by political elites and fringe communities on social media. In this study, we explore the evolution of fringe content and conspiracy theories on Twitter in the seven months leading up to the Capitol attack. We examine the suspicious coordinated activity carried out by users sharing fringe content, finding evidence of common adversarial manipulation techniques ranging from targeted amplification to manufactured consensus. Further, we map out the temporal evolution of, and the relationship between, fringe and conspiracy theories, which eventually coalesced into the rhetoric of a stolen election, with the hashtag #stopthesteal, alongside QAnon-related narratives. Our findings further highlight how social media platforms offer fertile ground for the widespread proliferation of conspiracies during major societal events, which can potentially lead to offline coordinated actions and organized violence.
[ { "version": "v1", "created": "Thu, 9 Feb 2023 05:54:16 GMT" }, { "version": "v2", "created": "Mon, 17 Jul 2023 09:31:22 GMT" } ]
2023-07-21T00:00:00
[ [ "Suresh", "Vishnuprasad Padinjaredath", "" ], [ "Nogara", "Gianluca", "" ], [ "Cardoso", "Felipe", "" ], [ "Cresci", "Stefano", "" ], [ "Giordano", "Silvia", "" ], [ "Luceri", "Luca", "" ] ]
new_dataset
0.999592
2302.08292
Alexandre Almin
Alexandre Almin, Léo Lemarié, Anh Duong, B Ravi Kiran
Navya3DSeg -- Navya 3D Semantic Segmentation Dataset & split generation for autonomous vehicles
Accepted version to IEEE RA-L. Version with supplementary materials
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Autonomous driving (AD) perception today relies heavily on deep learning based architectures requiring large-scale annotated datasets with their associated costs for curation and annotation. 3D semantic data are useful for core perception tasks such as obstacle detection and ego-vehicle localization. We propose a new dataset, Navya 3D Segmentation (Navya3DSeg), with a diverse label space corresponding to a large-scale production-grade operational domain, including rural, urban, industrial sites and universities from 13 countries. It contains 23 labeled sequences and 25 supplementary sequences without labels, designed to explore self-supervised and semi-supervised semantic segmentation benchmarks on point clouds. We also propose a novel method for sequential dataset split generation based on iterative multi-label stratification, which is demonstrated to achieve a +1.2% mIoU improvement over the original split proposed by the SemanticKITTI dataset. A complete benchmark for the semantic segmentation task was performed with state-of-the-art methods. Finally, we demonstrate an Active Learning (AL) based dataset distillation framework. We introduce a novel heuristic-free sampling method, called ego-pose distance based sampling, in the context of AL. A detailed presentation of the dataset is available at https://www.youtube.com/watch?v=5m6ALIs-s20.
[ { "version": "v1", "created": "Thu, 16 Feb 2023 13:41:19 GMT" }, { "version": "v2", "created": "Mon, 22 May 2023 14:42:46 GMT" }, { "version": "v3", "created": "Thu, 20 Jul 2023 08:35:26 GMT" } ]
2023-07-21T00:00:00
[ [ "Almin", "Alexandre", "" ], [ "Lemarié", "Léo", "" ], [ "Duong", "Anh", "" ], [ "Kiran", "B Ravi", "" ] ]
new_dataset
0.999867
2303.00924
Lindsey Kuper
Gan Shen, Shun Kashiwa, Lindsey Kuper
HasChor: Functional Choreographic Programming for All (Functional Pearl)
null
null
10.1145/3607849
null
cs.PL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Choreographic programming is an emerging paradigm for programming distributed systems. In choreographic programming, the programmer describes the behavior of the entire system as a single, unified program -- a choreography -- which is then compiled to individual programs that run on each node, via a compilation step called endpoint projection. We present a new model for functional choreographic programming where choreographies are expressed as computations in a monad. Our model supports cutting-edge choreographic programming features that enable modularity and code reuse: in particular, it supports higher-order choreographies, in which a choreography may be passed as an argument to another choreography, and location-polymorphic choreographies, in which a choreography can abstract over nodes. Our model is implemented in a Haskell library, HasChor, which lets programmers write choreographic programs while using the rich Haskell ecosystem at no cost, bringing choreographic programming within reach of everyday Haskellers. Moreover, thanks to Haskell's abstractions, the implementation of the HasChor library itself is concise and understandable, boiling down endpoint projection to its short and simple essence.
[ { "version": "v1", "created": "Thu, 2 Mar 2023 02:54:05 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 19:33:30 GMT" } ]
2023-07-21T00:00:00
[ [ "Shen", "Gan", "" ], [ "Kashiwa", "Shun", "" ], [ "Kuper", "Lindsey", "" ] ]
new_dataset
0.996568
2303.13501
Tolga Birdal
Nathan Mankovich and Tolga Birdal
Chordal Averaging on Flag Manifolds and Its Applications
Appears at ICCV 2023
null
null
null
cs.CV cs.LG math.DG math.OC stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a new, provably-convergent algorithm for computing the flag-mean and flag-median of a set of points on a flag manifold under the chordal metric. The flag manifold is a mathematical space consisting of flags, which are sequences of nested subspaces of a vector space that increase in dimension. The flag manifold is a superset of a wide range of known matrix spaces, including Stiefel manifolds and Grassmannians, making it a general object that is useful in a wide variety of computer vision problems. To tackle the challenge of computing first-order flag statistics, we first transform the problem into one that involves auxiliary variables constrained to the Stiefel manifold. The Stiefel manifold is a space of orthogonal frames, and leveraging the numerical stability and efficiency of Stiefel-manifold optimization enables us to compute the flag-mean effectively. Through a series of experiments, we show the competence of our method in Grassmann and rotation averaging, as well as principal component analysis. We release our source code at https://github.com/nmank/FlagAveraging.
[ { "version": "v1", "created": "Thu, 23 Mar 2023 17:57:28 GMT" }, { "version": "v2", "created": "Mon, 17 Jul 2023 18:27:49 GMT" } ]
2023-07-21T00:00:00
[ [ "Mankovich", "Nathan", "" ], [ "Birdal", "Tolga", "" ] ]
new_dataset
0.999004
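For context on the flag-mean discussed in the abstract above (arXiv 2303.13501), the classical chordal flag-mean of a collection of subspaces is spanned by the top left singular vectors of their concatenated orthonormal bases. The sketch below illustrates that classical construction under our own reading of it; it is not the paper's provably-convergent Stiefel-based algorithm.

```python
# Minimal sketch of the classical chordal flag-mean of subspaces: take the top
# left singular vectors of the concatenated orthonormal bases (equivalently,
# the top eigenvectors of the summed projection matrices). Illustrative only.
import numpy as np

def flag_mean(bases, k):
    """bases: list of (d, k_i) orthonormal matrices; returns a (d, k) basis."""
    concat = np.concatenate(bases, axis=1)                 # (d, sum of k_i)
    u, s, _ = np.linalg.svd(concat, full_matrices=False)   # singular values sorted
    return u[:, :k]                                        # top-k directions

# Example: average three random 2-dimensional subspaces of R^5.
rng = np.random.default_rng(1)
subspaces = [np.linalg.qr(rng.normal(size=(5, 2)))[0] for _ in range(3)]
mean_basis = flag_mean(subspaces, k=2)
print(mean_basis.shape)  # (5, 2)
```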
2305.01146
Dave Van Veen
Dave Van Veen, Cara Van Uden, Maayane Attias, Anuj Pareek, Christian Bluethgen, Malgorzata Polacin, Wah Chiu, Jean-Benoit Delbrouck, Juan Manuel Zambrano Chaves, Curtis P. Langlotz, Akshay S. Chaudhari, John Pauly
RadAdapt: Radiology Report Summarization via Lightweight Domain Adaptation of Large Language Models
12 pages, 10 figures. Published in ACL BioNLP. Compared to v1, v2 includes minor edits and one additional figure in the appendix. Compared to v2, v3 includes a link to the project's GitHub repository
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
We systematically investigate lightweight strategies to adapt large language models (LLMs) for the task of radiology report summarization (RRS). Specifically, we focus on domain adaptation via pretraining (on natural language, biomedical text, or clinical text) and via discrete prompting or parameter-efficient fine-tuning. Our results show that the best performance is consistently achieved by maximally adapting to the task via pretraining on clinical text and fine-tuning on RRS examples. Importantly, this method fine-tunes a mere 0.32% of parameters throughout the model, in contrast to end-to-end fine-tuning (100% of parameters). Additionally, we study the effect of in-context examples and out-of-distribution (OOD) training before concluding with a radiologist reader study and qualitative analysis. Our findings highlight the importance of domain adaptation in RRS and provide valuable insights toward developing effective natural language processing solutions for clinical tasks.
[ { "version": "v1", "created": "Tue, 2 May 2023 01:33:02 GMT" }, { "version": "v2", "created": "Sat, 17 Jun 2023 13:17:07 GMT" }, { "version": "v3", "created": "Thu, 20 Jul 2023 13:10:07 GMT" } ]
2023-07-21T00:00:00
[ [ "Van Veen", "Dave", "" ], [ "Van Uden", "Cara", "" ], [ "Attias", "Maayane", "" ], [ "Pareek", "Anuj", "" ], [ "Bluethgen", "Christian", "" ], [ "Polacin", "Malgorzata", "" ], [ "Chiu", "Wah", "" ], [ "Delbrouck", "Jean-Benoit", "" ], [ "Chaves", "Juan Manuel Zambrano", "" ], [ "Langlotz", "Curtis P.", "" ], [ "Chaudhari", "Akshay S.", "" ], [ "Pauly", "John", "" ] ]
new_dataset
0.977808
2305.07290
Lei Jin
Jian Zhao, Jianan Li, Lei Jin, Jiaming Chu, Zhihao Zhang, Jun Wang, Jiangqiang Xia, Kai Wang, Yang Liu, Sadaf Gulshad, Jiaojiao Zhao, Tianyang Xu, Xuefeng Zhu, Shihan Liu, Zheng Zhu, Guibo Zhu, Zechao Li, Zheng Wang, Baigui Sun, Yandong Guo, Shin ichi Satoh, Junliang Xing, Jane Shen Shengmei
The 3rd Anti-UAV Workshop & Challenge: Methods and Results
Technical report for 3rd Anti-UAV Workshop and Challenge. arXiv admin note: text overlap with arXiv:2108.09909
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The 3rd Anti-UAV Workshop & Challenge aims to encourage research in developing novel and accurate methods for multi-scale object tracking. The Anti-UAV dataset used for the Anti-UAV Challenge has been publicly released. There are two main differences between this year's competition and the previous two. First, we have expanded the existing dataset, and for the first time, released a training set so that participants can focus on improving their models. Second, we set up two tracks for the first time, i.e., Anti-UAV Tracking and Anti-UAV Detection & Tracking. Around 76 participating teams from around the globe competed in the 3rd Anti-UAV Challenge. In this paper, we provide a brief summary of the 3rd Anti-UAV Workshop & Challenge, including brief introductions to the top three methods in each track. The submission leaderboard will be reopened for researchers who are interested in the Anti-UAV challenge. The benchmark dataset and other information can be found at: https://anti-uav.github.io/.
[ { "version": "v1", "created": "Fri, 12 May 2023 07:37:04 GMT" }, { "version": "v2", "created": "Sat, 15 Jul 2023 05:32:55 GMT" } ]
2023-07-21T00:00:00
[ [ "Zhao", "Jian", "" ], [ "Li", "Jianan", "" ], [ "Jin", "Lei", "" ], [ "Chu", "Jiaming", "" ], [ "Zhang", "Zhihao", "" ], [ "Wang", "Jun", "" ], [ "Xia", "Jiangqiang", "" ], [ "Wang", "Kai", "" ], [ "Liu", "Yang", "" ], [ "Gulshad", "Sadaf", "" ], [ "Zhao", "Jiaojiao", "" ], [ "Xu", "Tianyang", "" ], [ "Zhu", "Xuefeng", "" ], [ "Liu", "Shihan", "" ], [ "Zhu", "Zheng", "" ], [ "Zhu", "Guibo", "" ], [ "Li", "Zechao", "" ], [ "Wang", "Zheng", "" ], [ "Sun", "Baigui", "" ], [ "Guo", "Yandong", "" ], [ "Satoh", "Shin ichi", "" ], [ "Xing", "Junliang", "" ], [ "Shengmei", "Jane Shen", "" ] ]
new_dataset
0.958054
2305.11408
Sara Papi
Sara Papi, Marco Turchi, Matteo Negri
AlignAtt: Using Attention-based Audio-Translation Alignments as a Guide for Simultaneous Speech Translation
Accepted at Interspeech 2023
null
null
null
cs.CL cs.LG cs.SD eess.AS
http://creativecommons.org/licenses/by-nc-sa/4.0/
Attention is the core mechanism of today's most used architectures for natural language processing and has been analyzed from many perspectives, including its effectiveness for machine translation-related tasks. Among these studies, attention has proved to be a useful source of information for gaining insights about word alignment, even when the input text is replaced with audio segments, as in the case of the speech translation (ST) task. In this paper, we propose AlignAtt, a novel policy for simultaneous ST (SimulST) that exploits the attention information to generate source-target alignments that guide the model during inference. Through experiments on the 8 language pairs of MuST-C v1.0, we show that AlignAtt outperforms previous state-of-the-art SimulST policies applied to offline-trained models, with BLEU gains of 2 points and latency reductions ranging from 0.5s to 0.8s across the 8 languages.
[ { "version": "v1", "created": "Fri, 19 May 2023 03:31:42 GMT" }, { "version": "v2", "created": "Thu, 20 Jul 2023 00:58:30 GMT" } ]
2023-07-21T00:00:00
[ [ "Papi", "Sara", "" ], [ "Turchi", "Marco", "" ], [ "Negri", "Matteo", "" ] ]
new_dataset
0.973917
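A simplified sketch of an attention-guided emit/wait decision in the spirit of the policy summarized above: if the newly decoded token attends mostly to the most recent audio frames, the model may still need future context and should wait. The threshold logic and variable names are illustrative assumptions, not the paper's exact rule.

```python
import numpy as np

def should_emit(attn_row, num_frames, last_f):
    """Decide whether to emit the newly decoded token in simultaneous ST.

    attn_row: attention weights of the candidate token over the source
    audio frames received so far (shape: [num_frames]).
    last_f: size of the "frontier" window of newest frames.
    """
    most_attended = int(np.argmax(attn_row))
    # Emit only if the token's attention peak is sufficiently far from
    # the streaming frontier; otherwise wait for more speech.
    return most_attended < num_frames - last_f
```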
2305.17079
Felix Stutz
Elaine Li, Felix Stutz, Thomas Wies, Damien Zufferey
Complete Multiparty Session Type Projection with Automata
24 pages, 44 pages including appendix; CAV 2023
null
null
null
cs.FL cs.DC cs.PL
http://creativecommons.org/licenses/by/4.0/
Multiparty session types (MSTs) are a type-based approach to verifying communication protocols. Central to MSTs is a projection operator: a partial function that maps protocols represented as global types to correct-by-construction implementations for each participant, represented as a communicating state machine. Existing projection operators are syntactic in nature, and trade efficiency for completeness. We present the first projection operator that is sound, complete, and efficient. Our projection separates synthesis from checking implementability. For synthesis, we use a simple automata-theoretic construction; for checking implementability, we present succinct conditions that summarize insights into the property of implementability. We use these conditions to show that MST implementability is PSPACE-complete. This improves upon a previous decision procedure that is in EXPSPACE and applies to a smaller class of MSTs. We demonstrate the effectiveness of our approach using a prototype implementation, which handles global types not supported by previous work without sacrificing performance.
[ { "version": "v1", "created": "Fri, 26 May 2023 16:38:37 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 22:23:37 GMT" } ]
2023-07-21T00:00:00
[ [ "Li", "Elaine", "" ], [ "Stutz", "Felix", "" ], [ "Wies", "Thomas", "" ], [ "Zufferey", "Damien", "" ] ]
new_dataset
0.956833
2306.14030
Raviraj Joshi
Tanmay Chavan, Omkar Gokhale, Aditya Kane, Shantanu Patankar, Raviraj Joshi
My Boli: Code-mixed Marathi-English Corpora, Pretrained Language Models and Evaluation Benchmarks
null
null
null
null
cs.CL cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The research on code-mixed data is limited due to the unavailability of dedicated code-mixed datasets and pre-trained language models. In this work, we focus on the low-resource Indian language Marathi, which lacks any prior work in code-mixing. We present L3Cube-MeCorpus, a large code-mixed Marathi-English (Mr-En) corpus with 10 million social media sentences for pretraining. We also release L3Cube-MeBERT and MeRoBERTa, code-mixed BERT-based transformer models pre-trained on MeCorpus. Furthermore, for benchmarking, we present three supervised datasets MeHate, MeSent, and MeLID for downstream tasks like code-mixed Mr-En hate speech detection, sentiment analysis, and language identification, respectively. These evaluation datasets each consist of approximately 12,000 manually annotated Marathi-English code-mixed tweets. Ablations show that the models trained on this novel corpus significantly outperform the existing state-of-the-art BERT models. This is the first work that presents artifacts for code-mixed Marathi research. All datasets and models are publicly released at https://github.com/l3cube-pune/MarathiNLP .
[ { "version": "v1", "created": "Sat, 24 Jun 2023 18:17:38 GMT" }, { "version": "v2", "created": "Thu, 20 Jul 2023 13:54:05 GMT" } ]
2023-07-21T00:00:00
[ [ "Chavan", "Tanmay", "" ], [ "Gokhale", "Omkar", "" ], [ "Kane", "Aditya", "" ], [ "Patankar", "Shantanu", "" ], [ "Joshi", "Raviraj", "" ] ]
new_dataset
0.999863
2306.14795
Xin Chen
Biao Jiang, Xin Chen, Wen Liu, Jingyi Yu, Gang Yu, Tao Chen
MotionGPT: Human Motion as a Foreign Language
Project Page: https://github.com/OpenMotionLab/MotionGPT
null
null
null
cs.CV cs.CL cs.GR
http://creativecommons.org/licenses/by/4.0/
While pre-trained large language models continue to advance, building a unified model for language and other multi-modal data, such as motion, remains challenging and largely unexplored. Fortunately, human motion displays a semantic coupling akin to human language, often perceived as a form of body language. By fusing language data with large-scale motion models, motion-language pre-training that can enhance the performance of motion-related tasks becomes feasible. Driven by this insight, we propose MotionGPT, a unified, versatile, and user-friendly motion-language model to handle multiple motion-relevant tasks. Specifically, we employ discrete vector quantization for human motion and transfer 3D motion into motion tokens, similar to the generation process of word tokens. Building upon this "motion vocabulary", we perform language modeling on both motion and text in a unified manner, treating human motion as a specific language. Moreover, inspired by prompt learning, we pre-train MotionGPT with a mixture of motion-language data and fine-tune it on prompt-based question-and-answer tasks. Extensive experiments demonstrate that MotionGPT achieves state-of-the-art performances on multiple motion tasks including text-driven motion generation, motion captioning, motion prediction, and motion in-between.
[ { "version": "v1", "created": "Mon, 26 Jun 2023 15:53:02 GMT" }, { "version": "v2", "created": "Thu, 20 Jul 2023 03:39:19 GMT" } ]
2023-07-21T00:00:00
[ [ "Jiang", "Biao", "" ], [ "Chen", "Xin", "" ], [ "Liu", "Wen", "" ], [ "Yu", "Jingyi", "" ], [ "Yu", "Gang", "" ], [ "Chen", "Tao", "" ] ]
new_dataset
0.999645
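A minimal sketch of the discrete vector quantization step that turns continuous motion features into "motion tokens", as outlined in the abstract above. The tensor shapes and nearest-neighbour lookup are illustrative of standard VQ, not the paper's exact tokenizer.

```python
import torch

def quantize_motion(features, codebook):
    """Map continuous motion features to discrete motion tokens.

    features: (T, d) per-frame motion embeddings from an encoder.
    codebook: (K, d) learned codewords (the "motion vocabulary").
    """
    dists = torch.cdist(features, codebook)  # (T, K) pairwise distances
    tokens = dists.argmin(dim=1)             # (T,) discrete token ids
    quantized = codebook[tokens]             # (T, d) quantized features
    return tokens, quantized

# Toy usage with a random 512-word vocabulary of 64-d codewords.
tokens, _ = quantize_motion(torch.randn(30, 64), torch.randn(512, 64))
```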
2307.01091
Rita Pucci
Rita Pucci, Niki Martinel
UW-ProCCaps: UnderWater Progressive Colourisation with Capsules
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Underwater images are fundamental for studying and understanding the status of marine life. We focus on reducing the memory required for image storage, since storage consumption during the collection phase limits how long that phase can last, creating the need for additional image collection campaigns. We present a novel machine-learning model that reconstructs the colours of underwater images from their luminance channel, thus saving 2/3 of the available storage space. Our model specialises in underwater colour reconstruction and consists of an encoder-decoder architecture. The encoder is composed of a convolutional encoder and a parallel specialised classifier trained with webly-supervised data. The encoder and the decoder use layers of capsules to capture the features of the entities in the image. The colour reconstruction process combines progressive and generative adversarial training: the progressive training lays the groundwork for a generative adversarial routine focused on refining the colours, giving the image the bright, saturated colours that bring it back to life. We validate the model both qualitatively and quantitatively on four benchmark datasets. This is the first attempt at colour reconstruction for greyscale underwater images. Extensive results on four benchmark datasets demonstrate that our solution outperforms state-of-the-art (SOTA) solutions. We also demonstrate that the generated colourisation enhances the quality of images compared to SOTA enhancement models.
[ { "version": "v1", "created": "Mon, 3 Jul 2023 15:09:32 GMT" }, { "version": "v2", "created": "Thu, 20 Jul 2023 09:40:13 GMT" } ]
2023-07-21T00:00:00
[ [ "Pucci", "Rita", "" ], [ "Martinel", "Niki", "" ] ]
new_dataset
0.995826
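A small sketch of the storage idea behind the abstract above: keep only the luminance channel at capture time and reconstruct colour later with the learned model. The filenames are placeholders, and the "saves 2/3" figure refers to raw per-pixel channel data before any file compression.

```python
from PIL import Image

# Convert a captured frame to YCbCr and keep only the luminance (Y)
# channel: one channel instead of three, i.e. roughly 1/3 of the raw
# per-pixel data. Colour is reconstructed later by the colourisation model.
frame = Image.open("frame.png").convert("YCbCr")
y, cb, cr = frame.split()
y.save("frame_luma.png")
```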
2307.04005
EPTCS
Rineke Verbrugge (University of Groningen)
Proceedings Nineteenth conference on Theoretical Aspects of Rationality and Knowledge
null
EPTCS 379, 2023
10.4204/EPTCS.379
null
cs.LO cs.AI cs.GT cs.MA
http://creativecommons.org/licenses/by/4.0/
The TARK conference (Theoretical Aspects of Rationality and Knowledge) is a conference that aims to bring together researchers from a wide variety of fields, including computer science, artificial intelligence, game theory, decision theory, philosophy, logic, linguistics, and cognitive science. Its goal is to further our understanding of interdisciplinary issues involving reasoning about rationality and knowledge. Previous conferences have been held biennially around the world since 1986, on the initiative of Joe Halpern (Cornell University). Topics of interest include, but are not limited to, semantic models for knowledge, belief, awareness and uncertainty, bounded rationality and resource-bounded reasoning, commonsense epistemic reasoning, epistemic logic, epistemic game theory, knowledge and action, applications of reasoning about knowledge and other mental states, belief revision, computational social choice, algorithmic game theory, and foundations of multi-agent systems. Information about TARK, including conference proceedings, is available at http://www.tark.org/ These proceedings contain the papers that have been accepted for presentation at the Nineteenth Conference on Theoretical Aspects of Rationality and Knowledge (TARK 2023), held between June 28 and June 30, 2023, at the University of Oxford, United Kingdom. The conference website can be found at https://sites.google.com/view/tark-2023
[ { "version": "v1", "created": "Sat, 8 Jul 2023 16:22:42 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 14:31:39 GMT" } ]
2023-07-21T00:00:00
[ [ "Verbrugge", "Rineke", "", "University of Groningen" ] ]
new_dataset
0.983682
2307.08122
Tian Yu Liu
Tian Yu Liu, Aditya Golatkar and Stefano Soatto
Tangent Transformers for Composition, Privacy and Removal
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We introduce Tangent Attention Fine-Tuning (TAFT), a method for fine-tuning linearized transformers obtained by computing a First-order Taylor Expansion around a pre-trained initialization. We show that the Jacobian-Vector Product resulting from linearization can be computed efficiently in a single forward pass, reducing training and inference cost to the same order of magnitude as its original non-linear counterpart, while using the same number of parameters. Furthermore, we show that, when applied to various downstream visual classification tasks, the resulting Tangent Transformer fine-tuned with TAFT can perform comparably with fine-tuning the original non-linear network. Since Tangent Transformers are linear with respect to the new set of weights, and the resulting fine-tuning loss is convex, we show that TAFT enjoys several advantages compared to non-linear fine-tuning when it comes to model composition, parallel training, machine unlearning, and differential privacy.
[ { "version": "v1", "created": "Sun, 16 Jul 2023 18:31:25 GMT" }, { "version": "v2", "created": "Thu, 20 Jul 2023 03:07:28 GMT" } ]
2023-07-21T00:00:00
[ [ "Liu", "Tian Yu", "" ], [ "Golatkar", "Aditya", "" ], [ "Soatto", "Stefano", "" ] ]
new_dataset
0.998846
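A hedged sketch of the linearization trick behind the abstract above, shown on a tiny stand-in model: the tangent model's output is the pre-trained output plus a Jacobian-vector product in the weight perturbation, computable efficiently alongside a forward pass. The model, shapes, and perturbation scale are illustrative.

```python
import torch
from torch.autograd.functional import jvp

x = torch.randn(4)

def f(w):
    # Tiny stand-in for a transformer: output as a function of the weights.
    return torch.tanh(x @ w)

w0 = torch.randn(4, 3)            # "pre-trained" weights
delta = 0.01 * torch.randn(4, 3)  # fine-tuning perturbation

# First-order Taylor expansion around w0:
#   f_lin(w0 + delta) = f(w0) + J_f(w0) · delta,
# where the second term is a Jacobian-vector product.
out0, tangent = jvp(f, (w0,), (delta,))
linearized_output = out0 + tangent
```

Because the output is linear in delta, the fine-tuning loss on top of it is convex, which is what enables the composition, unlearning, and privacy properties the abstract mentions.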
2307.10165
Fernando Alonso-Fernandez
Moa Arvidsson, Sithichot Sawirot, Cristofer Englund, Fernando Alonso-Fernandez, Martin Torstensson, Boris Duran
Drone navigation and license plate detection for vehicle location in indoor spaces
Published at VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Millions of vehicles are transported every year, tightly parked in vessels or boats. To reduce the risks of associated safety issues like fires, knowing the location of vehicles is essential, since different vehicles may need different mitigation measures, e.g. electric cars. This work aims to create a solution based on a nano-drone that navigates across rows of parked vehicles and detects their license plates. We do so via a wall-following algorithm and a CNN trained to detect license plates. All computations are done in real time on the drone, which sends only its position and the detected images, allowing the creation of a 2D map with the positions of the plates. Our solution is capable of reading all plates across eight test cases (with several rows of plates, different drone speeds, or low light) by aggregating measurements across several drone journeys.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 17:46:55 GMT" }, { "version": "v2", "created": "Thu, 20 Jul 2023 08:53:13 GMT" } ]
2023-07-21T00:00:00
[ [ "Arvidsson", "Moa", "" ], [ "Sawirot", "Sithichot", "" ], [ "Englund", "Cristofer", "" ], [ "Alonso-Fernandez", "Fernando", "" ], [ "Torstensson", "Martin", "" ], [ "Duran", "Boris", "" ] ]
new_dataset
0.996888
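An illustrative sketch of a proportional wall-following step of the kind the abstract above relies on to track a row of parked vehicles. The gains, target offset, and stop distance are assumptions, not the paper's controller.

```python
def wall_follow_step(side_dist, front_dist, target=0.6,
                     k_p=1.5, cruise=0.3, stop_dist=0.5):
    """One control step of a simple proportional wall follower.

    side_dist: range reading to the row of vehicles being followed (m).
    front_dist: forward range reading (m).
    Returns (forward_velocity, lateral_velocity) commands.
    """
    if front_dist < stop_dist:
        return 0.0, 0.0  # obstacle ahead: stop (a real controller would turn)
    # Steer back towards the target lateral offset from the row.
    lateral_v = k_p * (side_dist - target)
    return cruise, lateral_v
```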
2307.10214
Davide Sanvito
Giuseppe Siracusano, Davide Sanvito, Roberto Gonzalez, Manikantan Srinivasan, Sivakaman Kamatchi, Wataru Takahashi, Masaru Kawakita, Takahiro Kakumaru, Roberto Bifulco
Time for aCTIon: Automated Analysis of Cyber Threat Intelligence in the Wild
null
null
null
null
cs.CR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Cyber Threat Intelligence (CTI) plays a crucial role in assessing risks and enhancing security for organizations. However, the process of extracting relevant information from unstructured text sources can be expensive and time-consuming. Our empirical experience shows that existing tools for automated structured CTI extraction have performance limitations. Furthermore, the community lacks a common benchmark to quantitatively assess their performance. We fill these gaps by providing a new large open benchmark dataset and aCTIon, a structured CTI information extraction tool. The dataset includes 204 real-world publicly available reports and their corresponding structured CTI information in STIX format. Our team curated the dataset involving three independent groups of CTI analysts working over the course of several months. To the best of our knowledge, this dataset is two orders of magnitude larger than previously released open source datasets. We then design aCTIon, leveraging recently introduced large language models (GPT3.5) in the context of two custom information extraction pipelines. We compare our method with 10 solutions presented in previous work, developing our own implementations where open-source ones were lacking. Our results show that aCTIon outperforms previous work for structured CTI extraction, with an improvement in F1-score ranging from 10 to 50 percentage points across all tasks.
[ { "version": "v1", "created": "Fri, 14 Jul 2023 13:43:16 GMT" } ]
2023-07-21T00:00:00
[ [ "Siracusano", "Giuseppe", "" ], [ "Sanvito", "Davide", "" ], [ "Gonzalez", "Roberto", "" ], [ "Srinivasan", "Manikantan", "" ], [ "Kamatchi", "Sivakaman", "" ], [ "Takahashi", "Wataru", "" ], [ "Kawakita", "Masaru", "" ], [ "Kakumaru", "Takahiro", "" ], [ "Bifulco", "Roberto", "" ] ]
new_dataset
0.998824
2307.10222
Amy Winecoff
Amy A. Winecoff and Johannes Lenhard
Techno-Utopians, Scammers, and Bullshitters: The Promise and Peril of Web3 and Blockchain Technologies According to Operators and Venture Capital Investors
null
null
null
null
cs.CY cs.HC
http://creativecommons.org/licenses/by/4.0/
Proponents and developers of Web3 and blockchain argue that these technologies can revolutionize how people live and work by empowering individuals and distributing decision-making power. While technologists often have expansive hopes for what their technologies will accomplish over the long term, the practical challenges of developing, scaling, and maintaining systems amidst present-day constraints can compromise progress toward this vision. How technologists think about the technological future they hope to enable and how they navigate day-to-day issues impacts the form technologies take, their potential benefits, and their potential harms. In our current work, we aimed to explore the visions of Web3 and blockchain technologists and identify the immediate challenges that could threaten their visions. We conducted semi-structured interviews with 29 operators and professional investors in the Web3 and blockchain field. Our findings revealed that participants supported several ideological goals for their projects, with decentralization being a pivotal mechanism to enable user autonomy, distribute governance power, and promote financial inclusion. However, participants acknowledged the practical difficulties in fulfilling these promises, including the need for rapid technology development, conflicts of interest among stakeholders due to platform financing dynamics, and the challenge of expanding to mainstream users who may not share the "Web3 ethos." If negotiated ineffectively, these challenges could lead to negative outcomes, such as corrupt governance, increased inequality, and increased prevalence of scams and dubious investment schemes. While participants thought education, regulation, and a renewed commitment to the original blockchain ideals could alleviate some problems, they expressed skepticism about the potential of these solutions.
[ { "version": "v1", "created": "Fri, 14 Jul 2023 22:36:14 GMT" } ]
2023-07-21T00:00:00
[ [ "Winecoff", "Amy A.", "" ], [ "Lenhard", "Johannes", "" ] ]
new_dataset
0.996311
2307.10226
Joohyung Lee
Joohyung Lee, Yunsong Meng
On Loop Formulas with Variables
10 pages. In Proc. Eleventh International Conference on Principles of Knowledge Representation and Reasoning (KR 2008), pages 444-453. arXiv admin note: text overlap with arXiv:1401.3898
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently Ferraris, Lee and Lifschitz proposed a new definition of stable models that does not refer to grounding, which applies to the syntax of arbitrary first-order sentences. We show its relation to the idea of loop formulas with variables by Chen, Lin, Wang and Zhang, and generalize their loop formulas to disjunctive programs and to arbitrary first-order sentences. We also extend the syntax of logic programs to allow explicit quantifiers, and define its semantics as a subclass of the new language of stable models by Ferraris et al. Such programs inherit from the general language the ability to handle nonmonotonic reasoning under the stable model semantics even in the absence of the unique name and the domain closure assumptions, while yielding more succinct loop formulas than the general language due to the restricted syntax. We also show certain syntactic conditions under which query answering for an extended program can be reduced to entailment checking in first-order logic, providing a way to apply first-order theorem provers to reasoning about non-Herbrand stable models.
[ { "version": "v1", "created": "Sat, 15 Jul 2023 06:20:43 GMT" } ]
2023-07-21T00:00:00
[ [ "Lee", "Joohyung", "" ], [ "Meng", "Yunsong", "" ] ]
new_dataset
0.98941
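A worked propositional example of a loop formula, to make the concept in the abstract above concrete. The paper's contribution is the first-order generalization with variables, which this toy ground case does not capture.

```latex
% The program   p :- q.   q :- p.   has a single non-trivial loop
% L = {p, q}. No rule supports L from outside the loop, so the
% disjunction of external supports is empty (false) and the loop
% formula is
\[
  (p \lor q) \rightarrow \bot .
\]
% Added to the program's completion (p <-> q), this rules out the
% circular model {p, q}; the unique stable model is the empty set.
```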
2307.10267
Richard Wang
Raiyan Rahman, Christopher Indris, Tianxiao Zhang, Kaidong Li, Brian McCornack, Daniel Flippo, Ajay Sharda, Guanghui Wang
On the Real-Time Semantic Segmentation of Aphid Clusters in the Wild
null
null
null
null
cs.CV cs.AI
http://creativecommons.org/publicdomain/zero/1.0/
Aphid infestations can cause extensive damage to wheat and sorghum fields and spread plant viruses, resulting in significant yield losses in agriculture. To address this issue, farmers often rely on chemical pesticides, which are inefficiently applied over large areas of fields. As a result, a considerable amount of pesticide is wasted on areas without pests, while inadequate amounts are applied to areas with severe infestations. The paper focuses on the urgent need for an intelligent autonomous system that can locate and spray infestations within complex crop canopies, reducing pesticide use and environmental impact. We have collected and labeled a large aphid image dataset in the field, and propose the use of real-time semantic segmentation models to segment clusters of aphids. A multiscale dataset is generated to allow for learning the clusters at different scales. We compare the segmentation speeds and accuracy of four state-of-the-art real-time semantic segmentation models on the aphid cluster dataset, benchmarking them against non-real-time models. The study results show the effectiveness of a real-time solution, which can reduce inefficient pesticide use and increase crop yields, paving the way towards an autonomous pest detection system.
[ { "version": "v1", "created": "Mon, 17 Jul 2023 19:04:39 GMT" } ]
2023-07-21T00:00:00
[ [ "Rahman", "Raiyan", "" ], [ "Indris", "Christopher", "" ], [ "Zhang", "Tianxiao", "" ], [ "Li", "Kaidong", "" ], [ "McCornack", "Brian", "" ], [ "Flippo", "Daniel", "" ], [ "Sharda", "Ajay", "" ], [ "Wang", "Guanghui", "" ] ]
new_dataset
0.984807
2307.10283
Anastasia Natsiou
Anastasia Natsiou, Luca Longo, Sean O'Leary
Interpretable Timbre Synthesis using Variational Autoencoders Regularized on Timbre Descriptors
null
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Controllable timbre synthesis has been a subject of research for several decades, and deep neural networks have been the most successful in this area. Deep generative models such as Variational Autoencoders (VAEs) have the ability to generate a high-level representation of audio while providing a structured latent space. Despite their advantages, the interpretability of these latent spaces in terms of human perception is often limited. To address this limitation and enhance the control over timbre generation, we propose a regularized VAE-based latent space that incorporates timbre descriptors. Moreover, we suggest a more concise representation of sound by utilizing its harmonic content, in order to minimize the dimensionality of the latent space.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 11:46:13 GMT" } ]
2023-07-21T00:00:00
[ [ "Natsiou", "Anastasia", "" ], [ "Longo", "Luca", "" ], [ "O'Leary", "Sean", "" ] ]
new_dataset
0.997731
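A hedged sketch of a descriptor-regularized VAE objective in the spirit of the abstract above: the usual ELBO terms plus a term tying the leading latent dimensions to timbre descriptors. The MSE form of the regularizer and the weights are illustrative assumptions; attribute-regularized VAEs often use rank-based losses instead.

```python
import torch
import torch.nn.functional as F

def regularized_vae_loss(x, x_hat, mu, logvar, z, descriptors,
                         beta=1.0, gamma=10.0):
    # Standard VAE terms: reconstruction + KL divergence to N(0, I).
    recon = F.mse_loss(x_hat, x)
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    # Regularization: force the first latent dimensions to track
    # perceptual timbre descriptors (e.g. brightness, attack time),
    # making those dimensions interpretable control knobs.
    n = descriptors.shape[1]
    reg = F.mse_loss(z[:, :n], descriptors)
    return recon + beta * kl + gamma * reg
```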
2307.10286
Mona Ghassemian
Dejan Vukobratovi\'c, Nikolaos Bartzoudis, Mona Ghassemian, Firooz Saghezchi, Peizheng Li, Adnan Aijaz, Ricardo Martinez, Xueli An, Ranga Rao Venkatesha Prasad, Helge L\"uders, and Shahid Mumtaz
Distributed Sensing, Computing, Communication, and Control Fabric: A Unified Service-Level Architecture for 6G
null
null
null
null
cs.NI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
With the advent of the multimodal immersive communication system, people can interact with each other using multiple devices for sensing, communication and/or control either onsite or remotely. As a breakthrough concept, a distributed sensing, computing, communications, and control (DS3C) fabric is introduced in this paper for provisioning 6G services in multi-tenant environments in a unified manner. The DS3C fabric can be further enhanced by natively incorporating intelligent algorithms for network automation and managing networking, computing, and sensing resources efficiently to serve vertical use cases with extreme and/or conflicting requirements. As such, the paper proposes a novel end-to-end 6G system architecture with enhanced intelligence spanning across different network, computing, and business domains, identifies vertical use cases and presents an overview of the relevant standardization and pre-standardization landscape.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 13:30:44 GMT" } ]
2023-07-21T00:00:00
[ [ "Vukobratović", "Dejan", "" ], [ "Bartzoudis", "Nikolaos", "" ], [ "Ghassemian", "Mona", "" ], [ "Saghezchi", "Firooz", "" ], [ "Li", "Peizheng", "" ], [ "Aijaz", "Adnan", "" ], [ "Martinez", "Ricardo", "" ], [ "An", "Xueli", "" ], [ "Prasad", "Ranga Rao Venkatesha", "" ], [ "Lüders", "Helge", "" ], [ "Mumtaz", "Shahid", "" ] ]
new_dataset
0.994317
2307.10305
Vinayak Gupta
Vinayak Gupta and Srikanta Bedathur
Tapestry of Time and Actions: Modeling Human Activity Sequences using Temporal Point Process Flows
Extended version of Gupta and Bedathur [arXiv:2206.05291] (SIGKDD 2022). Under review in a journal
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Human beings always engage in a vast range of activities and tasks that demonstrate their ability to adapt to different scenarios. Any human activity can be represented as a temporal sequence of actions performed to achieve a certain goal. Unlike the time series datasets extracted from electronics or machines, these action sequences are highly disparate in their nature -- the time to finish a sequence of actions can vary between different persons. Therefore, understanding the dynamics of these sequences is essential for many downstream tasks such as activity length prediction, goal prediction, next action recommendation, etc. Existing neural network-based approaches that learn a continuous-time activity sequence (or CTAS) are limited to the presence of only visual data or are designed specifically for a particular task, i.e., limited to next action or goal prediction. In this paper, we present ProActive, a neural marked temporal point process (MTPP) framework for modeling the continuous-time distribution of actions in an activity sequence while simultaneously addressing three high-impact problems -- next action prediction, sequence-goal prediction, and end-to-end sequence generation. Specifically, we utilize a self-attention module with temporal normalizing flows to model the influence and the inter-arrival times between actions in a sequence. In addition, we propose a novel addition over the ProActive model that can handle variations in the order of actions, i.e., different methods of achieving a given goal. We demonstrate that this variant can learn the order in which the person or actor prefers to do their actions. Extensive experiments on sequences derived from three activity recognition datasets show the significant accuracy boost of ProActive over the state-of-the-art in terms of action and goal prediction, and the first-ever application of end-to-end action sequence generation.
[ { "version": "v1", "created": "Thu, 13 Jul 2023 19:17:54 GMT" } ]
2023-07-21T00:00:00
[ [ "Gupta", "Vinayak", "" ], [ "Bedathur", "Srikanta", "" ] ]
new_dataset
0.987677
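A minimal sketch of the simplest temporal "flow" consistent with the abstract above: pushing a Gaussian sample through an affine map and an exponential to draw a positive inter-arrival time. The paper's normalizing flows are learned and richer; this log-normal stand-in only illustrates the sampling mechanics.

```python
import torch

def sample_inter_arrival(mu, sigma):
    """Draw the time until the next action.

    A one-layer flow: z ~ N(0, 1) is pushed through the affine map
    mu + sigma * z and exponentiated, so the inter-arrival time is
    log-normal and strictly positive.
    """
    z = torch.randn_like(mu)
    return torch.exp(mu + sigma * z)

# mu and sigma would come from the self-attention encoding of the
# action history; here they are placeholders.
dt = sample_inter_arrival(torch.tensor([0.1]), torch.tensor([0.5]))
```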
2307.10314
Nafees Mansoor PhD
Maliha Mahajebin, Mohammad Rifat Ahmmad Rashid, Nafees Mansoor
Mood Classification of Bangla Songs Based on Lyrics
Presented at International Conference on Inventive Communication and Computational Technologies 2023
null
null
null
cs.IR cs.CL cs.LG cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Music can evoke various emotions, and with the advancement of technology, it has become more accessible to people. Bangla music, which portrays different human emotions, lacks sufficient research. The authors of this article aim to analyze Bangla songs and classify their moods based on the lyrics. To achieve this, the authors compiled a dataset of 4,000 Bangla song lyrics with their genres and used Natural Language Processing with the BERT model to analyze the data. Among the 4,000 songs, 1,513 express a sad mood, 1,362 a romantic mood, 886 happiness, and the remaining 239 are classified as relaxation. By embedding the lyrics of the songs, the authors classified the songs into four moods: Happy, Sad, Romantic, and Relaxed. This research is crucial as it enables a multi-class classification of songs' moods, making the music more relatable to people's emotions. The article presents the automated classification of the four moods accurately derived from the song lyrics.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 03:31:41 GMT" } ]
2023-07-21T00:00:00
[ [ "Mahajebin", "Maliha", "" ], [ "Rashid", "Mohammad Rifat Ahmmad", "" ], [ "Mansoor", "Nafees", "" ] ]
new_dataset
0.999231
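A sketch of the four-way mood classification described in the abstract above, using a multilingual BERT checkpoint as a stand-in; the authors' exact model and preprocessing are not specified here, and the head below is untrained until fine-tuned on the labelled lyrics.

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MOODS = ["happy", "sad", "romantic", "relaxed"]

# Stand-in multilingual checkpoint; a Bangla-specific BERT would be a
# natural alternative after fine-tuning on the 4,000 labelled lyrics.
name = "bert-base-multilingual-cased"
tok = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(
    name, num_labels=len(MOODS))

inputs = tok("<Bangla lyrics here>", return_tensors="pt",
             truncation=True, max_length=256)
pred = model(**inputs).logits.argmax(dim=-1).item()
print(MOODS[pred])  # meaningful only after fine-tuning
```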
2307.10346
Roberto Daza
Álvaro Becerra, Roberto Daza, Ruth Cobos, Aythami Morales, Julian Fierrez
Estudio de la Experiencia de Usuario mediante un Sistema de Dashboards de Análisis de Aprendizaje Multimodal
Accepted in "XXIII CONGRESO INTERNACIONAL DE INTERACCIÓN PERSONA-ORDENADOR 2023". Article in Spanish language. The abstract in English and Spanish. There is an extended abstract of 2 pages in English
null
null
null
cs.HC cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the article, we present a Web-based System called M2LADS, which supports the integration and visualization of multimodal data recorded in user experiences (UX) in a Learning Analytics (LA) system in the form of Web-based Dashboards. Based on the edBB platform, the multimodal data gathered contain biometric and behavioral signals, including electroencephalogram data to measure learners' cognitive attention, heart rate for affective measures, and visual attention from the video recordings. Additionally, learners' static background data and their learning performance measures are tracked using the LOGGE tool. M2LADS provides opportunities to capture learners' holistic experience during their interactions with the learning analytics system in order to improve the system and the user experience of the learners. -- In this article, we present M2LADS, a system that enables the integration and visualization of multimodal data in the form of Web dashboards. These data come from user experience sessions with a Learning Analytics (LA) system carried out by MOOC learners. The multimodal data include biometric and behavioral signals monitored by the edBB platform, such as 5-channel electroencephalograms (EEG), heart rate, visual attention, and video in the visible and NIR spectra, among others. Data on the learners' interaction with the LA system through the LOGGE tool are also included. All of this information provides a complete picture of the user experience with the LA system, which has made it possible to improve both the LA system and the learning experience of the MOOC learners.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 17:10:56 GMT" } ]
2023-07-21T00:00:00
[ [ "Becerra", "Álvaro", "" ], [ "Daza", "Roberto", "" ], [ "Cobos", "Ruth", "" ], [ "Morales", "Aythami", "" ], [ "Fierrez", "Julian", "" ] ]
new_dataset
0.962666
2307.10349
Hans Hanley
Hans W. A. Hanley, Zakir Durumeric
Twits, Toxic Tweets, and Tribal Tendencies: Trends in Politically Polarized Posts on Twitter
null
null
null
null
cs.SI cs.CY
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Social media platforms are often blamed for exacerbating political polarization and worsening public dialogue. Many claim hyperpartisan users post pernicious content, slanted to their political views, inciting contentious and toxic conversations. However, what factors actually contribute to increased online toxicity and negative interactions? In this work, we explore the role that political ideology plays in contributing to toxicity, both on an individual user level and on a topic level, on Twitter. To do this, we train and open-source a DeBERTa-based toxicity detector with a contrastive objective that outperforms the Google Jigsaw Perspective toxicity detector on the Civil Comments test dataset. Then, after collecting 187 million tweets from 55,415 Twitter users, we determine how several account-level characteristics, including political ideology and account age, predict how often each user posts toxic content. Running a linear regression, we find that the diversity of views and the toxicity of the other accounts with which a user engages have a more marked effect on that user's own toxicity. Namely, toxic comments are correlated with users who engage with a wider array of political views. Performing topic analysis on the toxic content posted by these accounts using the large language model MPNet and a version of the DP-Means clustering algorithm, we find similar behavior across 6,592 individual topics, with conversations on each topic becoming more toxic as a wider diversity of users becomes involved.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 17:24:47 GMT" } ]
2023-07-21T00:00:00
[ [ "Hanley", "Hans W. A.", "" ], [ "Durumeric", "Zakir", "" ] ]
new_dataset
0.999613
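A hedged sketch of the account-level regression described in the abstract above. The feature names and file paths are hypothetical; the point is only the shape of the analysis: regress each user's mean toxicity on account characteristics and engagement diversity.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical inputs: one row per account.
features = ["account_age", "ideology", "view_diversity", "peer_toxicity"]
X = np.load("account_features.npy")   # shape (n_users, len(features))
y = np.load("mean_toxicity.npy")      # shape (n_users,)

reg = LinearRegression().fit(X, y)
for name, coef in zip(features, reg.coef_):
    # Positive coefficients on diversity/peer toxicity would mirror the
    # paper's finding that engagement patterns matter most.
    print(f"{name}: {coef:+.4f}")
```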
2307.10455
Zahra Gharaee
Zahra Gharaee, ZeMing Gong, Nicholas Pellegrino, Iuliia Zarubiieva, Joakim Bruslund Haurum, Scott C. Lowe, Jaclyn T.A. McKeown, Chris C.Y. Ho, Joschka McLeod, Yi-Yun C Wei, Jireh Agda, Sujeevan Ratnasingham, Dirk Steinke, Angel X. Chang, Graham W. Taylor, Paul Fieguth
A Step Towards Worldwide Biodiversity Assessment: The BIOSCAN-1M Insect Dataset
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
In an effort to catalog insect biodiversity, we propose a new large dataset of hand-labelled insect images, the BIOSCAN-Insect Dataset. Each record is taxonomically classified by an expert, and also has associated genetic information including raw nucleotide barcode sequences and assigned barcode index numbers, which are genetically-based proxies for species classification. This paper presents a curated million-image dataset, primarily to train computer-vision models capable of providing image-based taxonomic assessment; however, the dataset also presents compelling characteristics, the study of which would be of interest to the broader machine learning community. Driven by the biological nature inherent to the dataset, a characteristic long-tailed class-imbalance distribution is exhibited. Furthermore, taxonomic labelling is a hierarchical classification scheme, presenting a highly fine-grained classification problem at lower levels. Beyond spurring interest in biodiversity research within the machine learning community, progress on creating an image-based taxonomic classifier will also further the ultimate goal of all BIOSCAN research: to lay the foundation for a comprehensive survey of global biodiversity. This paper introduces the dataset and explores the classification task through the implementation and analysis of a baseline classifier.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 20:54:08 GMT" } ]
2023-07-21T00:00:00
[ [ "Gharaee", "Zahra", "" ], [ "Gong", "ZeMing", "" ], [ "Pellegrino", "Nicholas", "" ], [ "Zarubiieva", "Iuliia", "" ], [ "Haurum", "Joakim Bruslund", "" ], [ "Lowe", "Scott C.", "" ], [ "McKeown", "Jaclyn T. A.", "" ], [ "Ho", "Chris C. Y.", "" ], [ "McLeod", "Joschka", "" ], [ "Wei", "Yi-Yun C", "" ], [ "Agda", "Jireh", "" ], [ "Ratnasingham", "Sujeevan", "" ], [ "Steinke", "Dirk", "" ], [ "Chang", "Angel X.", "" ], [ "Taylor", "Graham W.", "" ], [ "Fieguth", "Paul", "" ] ]
new_dataset
0.999805
2307.10481
Min Chen
Yuanzhe Jin, Tim J. A. de Jong, Martijn Tennekes, and Min Chen
Radial Icicle Tree (RIT): Node Separation and Area Constancy
null
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Icicles and sunbursts are two commonly-used visual representations of trees. While icicle trees can map data values faithfully to rectangles of different sizes, often some rectangles are too narrow to be noticed easily. When an icicle tree is transformed into a sunburst tree, the width of each rectangle becomes the length of an annular sector that is usually longer than the original width. While sunburst trees alleviate the problem of narrow rectangles in icicle trees, they no longer maintain the consistency of size encoding. At different tree depths, nodes of the same data values are displayed in annular sectors of different sizes in a sunburst tree, though they are represented by rectangles of the same size in an icicle tree. Furthermore, two nodes from different subtrees can sometimes appear as a single node in both icicle trees and sunburst trees. In this paper, we propose a new visual representation, referred to as the radial icicle tree (RIT), which transforms the rectangular bounding box of an icicle tree into a circle, circular sector, or annular sector while introducing gaps between nodes and maintaining area constancy for nodes of the same size. We applied the new visual design to several datasets. Both the analytical design process and a user-centered evaluation confirm that this new design improves on icicle and sunburst trees without introducing notable drawbacks.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 22:25:14 GMT" } ]
2023-07-21T00:00:00
[ [ "Jin", "Yuanzhe", "" ], [ "de Jong", "Tim J. A.", "" ], [ "Tennekes", "Martijn", "" ], [ "Chen", "Min", "" ] ]
new_dataset
0.969385
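One way to read the area-constancy constraint mentioned in the abstract above, written out for an annular-sector node; the notation is ours, not necessarily the paper's.

```latex
% A node drawn as an annular sector between radii r_1 < r_2, spanning
% angle \theta, has area
\[
  A \;=\; \tfrac{\theta}{2}\left(r_2^{\,2} - r_1^{\,2}\right),
\]
% so holding A fixed across tree depths forces the spanned angle to
% shrink as the node moves outward:
\[
  \theta \;=\; \frac{2A}{r_2^{\,2} - r_1^{\,2}} .
\]
```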
2307.10482
Heiko Kabutz
Heiko Kabutz, Kaushik Jayaram
Design of CLARI: A miniature modular origami passive shape-morphing robot
null
null
null
null
cs.RO cs.SY eess.SY
http://creativecommons.org/licenses/by-sa/4.0/
Miniature robots provide unprecedented access to confined environments and show promising potential for novel applications such as search-and-rescue and high-value asset inspection. The capability of body deformation further enhances the reachability of these small robots in complex cluttered terrains, similar to that of insects and soft arthropods. Motivated by this concept, we present CLARI, an insect-scale, 2.59 g quadrupedal robot capable of body deformation, with tethered electrical connections for power and control, manufactured using laminate fabrication and assembled using origami pop-up techniques. In order to enable locomotion in multiple shape configurations, we designed a novel body architecture comprising modular, actuated leg mechanisms. Overall, CLARI has eight independently actuated degrees of freedom (two per modular leg unit) driven by custom piezoelectric actuators, making it mechanically dextrous. We characterize open-loop robot locomotion at multiple stride frequencies (1-10Hz) using multiple gaits (trot, walk, etc.) in three different fixed body shapes (long, symmetric, wide) and illustrate the robot's capabilities. Finally, we demonstrate preliminary results of CLARI locomoting with a compliant body in open terrain and through a laterally constrained gap, a novel capability for legged robots. Our results represent the first step towards achieving effective cluttered terrain navigation with adaptable compliant robots in real-world environments.
[ { "version": "v1", "created": "Wed, 19 Jul 2023 22:26:31 GMT" } ]
2023-07-21T00:00:00
[ [ "Kabutz", "Heiko", "" ], [ "Jayaram", "Kaushik", "" ] ]
new_dataset
0.999604
2307.10550
Yong-Hoon Choi
Daegyeom Kim, Seongho Hong, and Yong-Hoon Choi
SC VALL-E: Style-Controllable Zero-Shot Text to Speech Synthesizer
null
null
null
null
cs.SD cs.LG eess.AS
http://creativecommons.org/licenses/by/4.0/
Expressive speech synthesis models are trained by adding corpora with diverse speakers, various emotions, and different speaking styles to the dataset, in order to control various characteristics of speech and generate the desired voice. In this paper, we propose a style control (SC) VALL-E model based on the neural codec language model (called VALL-E), which follows the structure of the generative pretrained transformer 3 (GPT-3). The proposed SC VALL-E takes input from text sentences and prompt audio and is designed to generate controllable speech by not simply mimicking the characteristics of the prompt audio but by controlling the attributes to produce diverse voices. We identify tokens in the style embedding matrix of the newly designed style network that represent attributes such as emotion, speaking rate, pitch, and voice intensity, and design a model that can control these attributes. To evaluate the performance of SC VALL-E, we conduct comparative experiments with three representative expressive speech synthesis models: global style token (GST) Tacotron2, variational autoencoder (VAE) Tacotron2, and the original VALL-E. We measure word error rate (WER), F0 voiced error (FVE), and F0 gross pitch error (F0GPE) as evaluation metrics to assess the accuracy of generated sentences. For comparing the quality of synthesized speech, we measure comparative mean opinion score (CMOS) and similarity mean opinion score (SMOS). To evaluate the style control ability of the generated speech, we observe the changes in F0 and mel-spectrogram by modifying the trained tokens. When using prompt audio that is not present in the training data, SC VALL-E generates a variety of expressive sounds and demonstrates competitive performance compared to the existing models. Our implementation, pretrained models, and audio samples are located on GitHub.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 03:28:06 GMT" } ]
2023-07-21T00:00:00
[ [ "Kim", "Daegyeom", "" ], [ "Hong", "Seongho", "" ], [ "Choi", "Yong-Hoon", "" ] ]
new_dataset
0.999623
2307.10551
Kaiwen Wei
Kaiwen Wei, Jie Yao, Jingyuan Zhang, Yangyang Kang, Fubang Zhao, Yating Zhang, Changlong Sun, Xin Jin, Xin Zhang
PPN: Parallel Pointer-based Network for Key Information Extraction with Complex Layouts
null
null
null
null
cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Key Information Extraction (KIE) is a challenging multimodal task that aims to extract structured value semantic entities from visually rich documents. Although significant progress has been made, there are still two major challenges that need to be addressed. Firstly, the layout of existing datasets is relatively fixed and limited in the number of semantic entity categories, creating a significant gap between these datasets and complex real-world scenarios. Secondly, existing methods follow a two-stage pipeline strategy, which may lead to the error propagation problem. Additionally, they are difficult to apply in situations where unseen semantic entity categories emerge. To address the first challenge, we propose a new large-scale human-annotated dataset named Complex Layout form for key information EXtraction (CLEX), which consists of 5,860 images with 1,162 semantic entity categories. To solve the second challenge, we introduce the Parallel Pointer-based Network (PPN), an end-to-end model that can be applied in zero-shot and few-shot scenarios. PPN leverages the implicit clues between semantic entities to assist extraction, and its parallel extraction mechanism allows it to extract multiple results simultaneously and efficiently. Experiments on the CLEX dataset demonstrate that PPN outperforms existing state-of-the-art methods while also offering a much faster inference speed.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 03:29:09 GMT" } ]
2023-07-21T00:00:00
[ [ "Wei", "Kaiwen", "" ], [ "Yao", "Jie", "" ], [ "Zhang", "Jingyuan", "" ], [ "Kang", "Yangyang", "" ], [ "Zhao", "Fubang", "" ], [ "Zhang", "Yating", "" ], [ "Sun", "Changlong", "" ], [ "Jin", "Xin", "" ], [ "Zhang", "Xin", "" ] ]
new_dataset
0.998575
2307.10567
Qi Zhang
Qi Zhang and Sipeng Zheng and Qin Jin
No-frills Temporal Video Grounding: Multi-Scale Neighboring Attention and Zoom-in Boundary Detection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Temporal video grounding (TVG) aims to retrieve the time interval of a language query from an untrimmed video. A significant challenge in TVG is the low "Semantic Noise Ratio (SNR)" of untrimmed video: the lower the SNR, the worse the performance. Prior works have addressed this challenge using sophisticated techniques. In this paper, we propose a no-frills TVG model that consists of two core modules, namely multi-scale neighboring attention and zoom-in boundary detection. The multi-scale neighboring attention restricts each video token to aggregate visual contexts only from its neighbors, enabling the extraction of the most distinguishing information with multi-scale feature hierarchies from high-ratio noise. The zoom-in boundary detection then focuses on local-wise discrimination of the selected top candidates for fine-grained grounding adjustment. With an end-to-end training strategy, our model achieves competitive performance on different TVG benchmarks, while also having the advantages of faster inference speed and lighter model parameters, thanks to its lightweight architecture.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 04:12:10 GMT" } ]
2023-07-21T00:00:00
[ [ "Zhang", "Qi", "" ], [ "Zheng", "Sipeng", "" ], [ "Jin", "Qin", "" ] ]
new_dataset
0.980045
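A small sketch of the masking idea behind "multi-scale neighboring attention" as summarized above: each token attends only within a window of its position, and stacking layers with different window sizes gives a multi-scale hierarchy. Names and the boolean-mask formulation are illustrative.

```python
import torch

def neighbor_attention_mask(seq_len, window):
    """Boolean (seq_len, seq_len) mask: True where attention is allowed.

    Token i may attend only to tokens j with |i - j| <= window.
    """
    idx = torch.arange(seq_len)
    dist = (idx[:, None] - idx[None, :]).abs()
    return dist <= window

# Usage: mask attention scores so each token sees only its neighbourhood.
mask = neighbor_attention_mask(seq_len=8, window=2)
scores = torch.randn(8, 8).masked_fill(~mask, float("-inf"))
attn = scores.softmax(dim=-1)  # neighbours only; masked entries get zero weight
```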
2307.10593
Ziwei Wang
Ziwei Wang, Timothy Molloy, Pieter van Goor, Robert Mahony
Event Blob Tracking: An Asynchronous Real-Time Algorithm
17 pages, 8 figures, preprint version
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Event-based cameras have become increasingly popular for tracking fast-moving objects due to their high temporal resolution, low latency, and high dynamic range. In this paper, we propose a novel algorithm for tracking event blobs using raw events asynchronously in real time. We introduce the concept of an event blob as a spatio-temporal likelihood of event occurrence where the conditional spatial likelihood is blob-like. Many real-world objects generate event blob data, for example, flickering LEDs such as car headlights or any small foreground object moving against a static or slowly varying background. The proposed algorithm uses a nearest neighbour classifier with a dynamic threshold criterion for data association, coupled with a Kalman filter to track the event blob state. Our algorithm achieves highly accurate tracking and event blob shape estimation even under challenging lighting conditions and high-speed motions. The microsecond time resolution achieved means that the filter output can be used to derive secondary information such as time-to-contact or range estimation, which will enable applications to real-world problems such as collision avoidance in autonomous driving.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 05:15:03 GMT" } ]
2023-07-21T00:00:00
[ [ "Wang", "Ziwei", "" ], [ "Molloy", "Timothy", "" ], [ "van Goor", "Pieter", "" ], [ "Mahony", "Robert", "" ] ]
new_dataset
0.99964
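A minimal constant-velocity Kalman step of the kind the tracker above couples with nearest-neighbour data association. The paper's blob state also carries shape information; this sketch tracks only the blob centre, and the noise magnitudes are placeholders.

```python
import numpy as np

def kalman_step(x, P, z, dt, q=1e-3, r=1e-2):
    """One predict/update cycle for a blob centre.

    x: state [px, py, vx, vy]; P: 4x4 covariance;
    z: associated event position [px, py] (nearest-neighbour gated).
    """
    F = np.eye(4); F[0, 2] = F[1, 3] = dt       # constant-velocity model
    H = np.zeros((2, 4)); H[0, 0] = H[1, 1] = 1.0
    Q, R = q * np.eye(4), r * np.eye(2)
    x, P = F @ x, F @ P @ F.T + Q               # predict
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)              # Kalman gain
    x = x + K @ (z - H @ x)                     # update with the event
    P = (np.eye(4) - K @ H) @ P
    return x, P
```

Running this per associated event (rather than per frame) is what lets an event-based tracker keep microsecond-scale state updates.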
2307.10601
Dongyun Lin
Dongyun Lin, Yi Cheng, Aiyuan Guo, Shangbo Mao, Yiqun Li
SCA-PVNet: Self-and-Cross Attention Based Aggregation of Point Cloud and Multi-View for 3D Object Retrieval
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
To address 3D object retrieval, substantial efforts have been made to generate highly discriminative descriptors of 3D objects represented by a single modality, e.g., voxels, point clouds or multi-view images. It is promising to leverage the complementary information from multi-modality representations of 3D objects to further improve retrieval performance. However, multi-modality 3D object retrieval is rarely developed and analyzed on large-scale datasets. In this paper, we propose self-and-cross attention based aggregation of point cloud and multi-view images (SCA-PVNet) for 3D object retrieval. With deep features extracted from point clouds and multi-view images, we design two types of feature aggregation modules, namely the In-Modality Aggregation Module (IMAM) and the Cross-Modality Aggregation Module (CMAM), for effective feature fusion. IMAM leverages a self-attention mechanism to aggregate multi-view features while CMAM exploits a cross-attention mechanism to interact point cloud features with multi-view features. The final descriptor of a 3D object for object retrieval can be obtained via concatenating the aggregated features from both modules. Extensive experiments and analysis are conducted on three datasets, ranging from small to large scale, to show the superiority of the proposed SCA-PVNet over the state-of-the-art methods.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 05:46:32 GMT" } ]
2023-07-21T00:00:00
[ [ "Lin", "Dongyun", "" ], [ "Cheng", "Yi", "" ], [ "Guo", "Aiyuan", "" ], [ "Mao", "Shangbo", "" ], [ "Li", "Yiqun", "" ] ]
new_dataset
0.999583
2307.10615
Kshitiz Verma
Kshitiz Verma
Analyzing HC-NJDG Data to Understand the Pendency in High Courts in India
25 pages, 31 figures, presented at Law Via Internet Conference, 2018
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
Indian Judiciary is suffering from burden of millions of cases that are lying pending in its courts at all the levels. In this paper, we analyze the data that we have collected on the pendency of 24 high courts in the Republic of India as they were made available on High Court NJDG (HC-NJDG). We collected data on 73 days beginning August 31, 2017 to December 26, 2018, including these days. Thus, the data collected by us spans a period of almost sixteen months. We have analyzed various statistics available on the NJDG portal for High Courts, including but not limited to the number of judges in each high court, the number of cases pending in each high court, cases that have been pending for more than 10 years, cases filed, listed and disposed, cases filed by women and senior citizens, etc. Our results show that: 1) statistics as important as the number of judges in high courts have serious errors on NJDG (Fig. 1, 2, 10, 11, Table V). 2) pending cases in most of the high courts are increasing rather than decreasing (Fig. 3, 13). 3) regular update of HC-NJDG is required for it to be useful. Data related to some high courts is not being updated regularly or is updated erroneously on the portal (Fig. 14). 4) there is a huge difference in terms of average load of cases on judges of different high courts (Fig. 6). 5) if all the high courts operate at their approved strength of judges, then for most of the high courts pendency can be nullified within 20 years from now (Fig. 21, 22). 6) the pending cases filed by women and senior citizens are disproportionately low, they together constitute less than 10% of the total pending cases (Fig. 23 - 27) 7) a better scheduling process for preparing causelists in courts can help reducing the number of pending cases in the High Courts (Fig. 29). 8) some statistics are not well defined (Fig. 31).
[ { "version": "v1", "created": "Thu, 20 Jul 2023 06:25:53 GMT" } ]
2023-07-21T00:00:00
[ [ "Verma", "Kshitiz", "" ] ]
new_dataset
0.997191
2307.10635
Yanqiao Zhu
Xiaoxuan Wang and Ziniu Hu and Pan Lu and Yanqiao Zhu and Jieyu Zhang and Satyen Subramaniam and Arjun R. Loomba and Shichang Zhang and Yizhou Sun and Wei Wang
SciBench: Evaluating College-Level Scientific Problem-Solving Abilities of Large Language Models
Work in progress, 18 pages
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent advances in large language models (LLMs) have demonstrated notable progress on many mathematical benchmarks. However, most of these benchmarks only feature problems grounded in junior and senior high school subjects, contain only multiple-choice questions, and are confined to a limited scope of elementary arithmetic operations. To address these issues, this paper introduces an expansive benchmark suite SciBench that aims to systematically examine the reasoning capabilities required for complex scientific problem solving. SciBench contains two carefully curated datasets: an open set featuring a range of collegiate-level scientific problems drawn from mathematics, chemistry, and physics textbooks, and a closed set comprising problems from undergraduate-level exams in computer science and mathematics. Based on the two datasets, we conduct an in-depth benchmark study of two representative LLMs with various prompting strategies. The results reveal that current LLMs fall short of delivering satisfactory performance, with an overall score of merely 35.80%. Furthermore, through a detailed user study, we categorize the errors made by LLMs into ten problem-solving abilities. Our analysis indicates that no single prompting strategy significantly outperforms others and some strategies that demonstrate improvements in certain problem-solving skills result in declines in other skills. We envision that SciBench will catalyze further developments in the reasoning abilities of LLMs, thereby ultimately contributing to scientific research and discovery.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 07:01:57 GMT" } ]
2023-07-21T00:00:00
[ [ "Wang", "Xiaoxuan", "" ], [ "Hu", "Ziniu", "" ], [ "Lu", "Pan", "" ], [ "Zhu", "Yanqiao", "" ], [ "Zhang", "Jieyu", "" ], [ "Subramaniam", "Satyen", "" ], [ "Loomba", "Arjun R.", "" ], [ "Zhang", "Shichang", "" ], [ "Sun", "Yizhou", "" ], [ "Wang", "Wei", "" ] ]
new_dataset
0.99914
2307.10642
Qichao Ying
Qichao Ying, Jiaxin Liu, Sheng Li, Haisheng Xu, Zhenxing Qian, Xinpeng Zhang
RetouchingFFHQ: A Large-scale Dataset for Fine-grained Face Retouching Detection
Under review
null
null
null
cs.CV cs.MM
http://creativecommons.org/licenses/by/4.0/
The widespread use of face retouching filters on short-video platforms has raised concerns about the authenticity of digital appearances and the impact of deceptive advertising. To address these issues, there is a pressing need to develop advanced face retouching detection techniques. However, the lack of large-scale and fine-grained face retouching datasets has been a major obstacle to progress in this field. In this paper, we introduce RetouchingFFHQ, a large-scale and fine-grained face retouching dataset that contains over half a million conditionally-retouched images. RetouchingFFHQ stands out from previous datasets due to its large scale, high quality, fine-grainedness, and customization. By including four typical types of face retouching operations and different retouching levels, we extend binary face retouching detection into a fine-grained, multi-retouching-type, and multi-retouching-level estimation problem. Additionally, we propose a Multi-granularity Attention Module (MAM) as a plugin for CNN backbones for enhanced cross-scale representation learning. Extensive experiments using different baselines as well as our proposed method on RetouchingFFHQ show decent performance on face retouching detection. With the proposed new dataset, we believe there is great potential for future work to tackle the challenging problem of real-world fine-grained face retouching detection.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 07:12:56 GMT" } ]
2023-07-21T00:00:00
[ [ "Ying", "Qichao", "" ], [ "Liu", "Jiaxin", "" ], [ "Li", "Sheng", "" ], [ "Xu", "Haisheng", "" ], [ "Qian", "Zhenxing", "" ], [ "Zhang", "Xinpeng", "" ] ]
new_dataset
0.999696
2307.10646
Mikko Majamaa
Mikko Majamaa, Henrik Martikainen, Jani Puttonen and Timo Hämälainen
On Enhancing Reliability in B5G NTNs with Packet Duplication via Multi-Connectivity
Accepted for publication in 2023 IEEE International Conference on Wireless for Space and Extreme Environments (WiSEE 2023)
null
null
null
cs.NI
http://creativecommons.org/licenses/by/4.0/
Non-Terrestrial Networks (NTNs) can be used to provide ubiquitous 5G and beyond services to un(der)served areas. To ensure reliable communication in such networks, packet duplication (PD) through multi-connectivity is a promising solution. However, the existing PD schemes developed for terrestrial environments may not be reactive enough for the NTN environment, where propagation delays are significantly longer. This paper proposes a dynamic PD activation scheme for NTNs based on hybrid automatic repeat request feedback. The scheme aims to reduce the number of duplicated packets while maintaining high reliability. To evaluate the proposed scheme, simulations are conducted in a scenario with two transparent-payload low-Earth-orbit satellites. The results show a significant reduction of 87.2% in the number of duplicated packets compared to blind duplication, with only a marginal compromise in reliability.
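As a rough illustration of how HARQ feedback could drive PD activation, the sketch below keeps a sliding window of ACK/NACK reports and toggles duplication with hysteresis; the window size and thresholds are assumptions, since the abstract does not specify the exact trigger condition.

```python
# Minimal sketch of HARQ-feedback-driven packet duplication (PD) activation.
# The window length and on/off thresholds are illustrative choices.
from collections import deque

class DynamicPDController:
    def __init__(self, window=50, on_thresh=0.10, off_thresh=0.02):
        self.feedback = deque(maxlen=window)  # True = NACK, False = ACK
        self.on_thresh = on_thresh    # activate PD above this NACK rate
        self.off_thresh = off_thresh  # deactivate PD below this NACK rate
        self.pd_active = False

    def report_harq(self, nack: bool) -> None:
        self.feedback.append(nack)
        rate = sum(self.feedback) / len(self.feedback)
        # Hysteresis (separate on/off thresholds) avoids rapid toggling,
        # which matters when long NTN propagation delays make feedback stale.
        if not self.pd_active and rate >= self.on_thresh:
            self.pd_active = True
        elif self.pd_active and rate <= self.off_thresh:
            self.pd_active = False

    def duplicate(self) -> bool:
        return self.pd_active
```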
[ { "version": "v1", "created": "Thu, 20 Jul 2023 07:16:05 GMT" } ]
2023-07-21T00:00:00
[ [ "Majamaa", "Mikko", "" ], [ "Martikainen", "Henrik", "" ], [ "Puttonen", "Jani", "" ], [ "Hämälainen", "Timo", "" ] ]
new_dataset
0.996095
2307.10666
Jindřich Libovický
Hynek Kydlíček, Jindřich Libovický
A Dataset and Strong Baselines for Classification of Czech News Texts
12 pages, Accepted to Text, Speech and Dialogue (TSD) 2023
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Pre-trained models for Czech Natural Language Processing are often evaluated on purely linguistic tasks (POS tagging, parsing, NER) and relatively simple classification tasks such as sentiment classification or article classification from a single news source. As an alternative, we present the CZEch NEws Classification dataset (CZE-NEC), one of the largest Czech classification datasets, composed of news articles from various sources spanning over twenty years, which allows a more rigorous evaluation of such models. We define four classification tasks: news source, news category, inferred author's gender, and day of the week. To verify the task difficulty, we conducted a human evaluation, which revealed that human performance lags behind strong machine-learning baselines built upon pre-trained transformer models. Furthermore, we show that language-specific pre-trained encoders outperform selected commercially available large-scale generative language models.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 07:47:08 GMT" } ]
2023-07-21T00:00:00
[ [ "Kydlíček", "Hynek", "" ], [ "Libovický", "Jindřich", "" ] ]
new_dataset
0.999797
2307.10697
Fernando Alonso-Fernandez
Fernando Alonso-Fernandez, Kevin Hernandez-Diaz, Jose Maria Buades Rubio, Josef Bigun
SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via Filter Pruning
Published at VIII International Workshop on Artificial Intelligence and Pattern Recognition, IWAIPR 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The widespread use of mobile devices for various digital services has created a need for reliable and real-time person authentication. In this context, facial recognition technologies have emerged as a dependable method for verifying users due to the prevalence of cameras in mobile devices and their integration into everyday applications. The rapid advancement of deep Convolutional Neural Networks (CNNs) has led to numerous face verification architectures. However, these models are often large and impractical for mobile applications, reaching sizes of hundreds of megabytes with millions of parameters. We address this issue by developing SqueezerFaceNet, a light face recognition network with less than 1M parameters. This is achieved by applying a network pruning method based on Taylor scores, where filters with small importance scores are removed iteratively. Starting from an already small network (of 1.24M parameters) based on SqueezeNet, we show that it can be further reduced (by up to 40%) without an appreciable loss in performance. To the best of our knowledge, we are the first to evaluate network pruning methods for the task of face recognition.
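One iteration of such a pruning loop might look like the following sketch, which uses the first-order Taylor importance |w · ∂L/∂w| aggregated per output filter; the exact scoring variant and pruning schedule used in the paper may differ.

```python
# Sketch of one Taylor-score filter-pruning iteration (weight-based
# first-order variant). Illustrative only; model and loss are toys.
import torch
import torch.nn as nn

def taylor_scores(conv: nn.Conv2d) -> torch.Tensor:
    # One score per output filter: sum of |weight * grad| over the filter.
    g = conv.weight.grad
    return (conv.weight * g).abs().sum(dim=(1, 2, 3))

# Toy setup: a conv layer, a dummy loss, and one backward pass.
conv = nn.Conv2d(3, 16, 3, padding=1)
x = torch.randn(4, 3, 32, 32)
loss = conv(x).pow(2).mean()
loss.backward()

scores = taylor_scores(conv)
k = 4  # prune the 4 least important filters this iteration
prune_idx = scores.argsort()[:k]
with torch.no_grad():
    conv.weight[prune_idx] = 0.0   # zeroing stands in for actual removal
    if conv.bias is not None:
        conv.bias[prune_idx] = 0.0
print("pruned filters:", prune_idx.tolist())
```

In a full pipeline this score-and-prune step would alternate with fine-tuning epochs until the target size is reached.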
[ { "version": "v1", "created": "Thu, 20 Jul 2023 08:38:50 GMT" } ]
2023-07-21T00:00:00
[ [ "Alonso-Fernandez", "Fernando", "" ], [ "Hernandez-Diaz", "Kevin", "" ], [ "Rubio", "Jose Maria Buades", "" ], [ "Bigun", "Josef", "" ] ]
new_dataset
0.995618
2307.10726
Ioanna Kantzavelou
Achilleas Spanos and Ioanna Kantzavelou
A Blockchain-based Electronic Voting System: EtherVote
2 pages, Poster presented in ACM 5th summit on Gender Equality in Computing, GEC 2023, Athens University of Economics and Business, Athens, Greece, 27 June 2023
null
null
null
cs.CR
http://creativecommons.org/licenses/by-nc-nd/4.0/
The development of an electronic voting system that could replace traditional election procedures has been a research topic of great interest for many years. Blockchain technology could provide some guarantees and fulfill strong requirements for electronic voting platforms, such as transparency, immutability, and confidentiality. Research is conducted periodically to address problems in voting systems, and many works attempt to implement secure and reliable voting systems that address the known security, anonymity, and fraud issues that might threaten such systems. This paper presents a proposal for a secure electronic voting system, EtherVote, using the Ethereum Blockchain network, which focuses deeply on the identification of eligible citizens. The proposed system will be entirely based on Blockchain, without any central authority servers or databases, thus improving security, privacy, and election cost. Limitations, problems, and solutions are discussed, in order to make the proposed electronic voting system ideal and ready to use for national elections.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 09:39:29 GMT" } ]
2023-07-21T00:00:00
[ [ "Spanos", "Achilleas", "" ], [ "Kantzavelou", "Ioanna", "" ] ]
new_dataset
0.98417
2307.10757
Weidong Chen
Weidong Chen, Xiaofen Xing, Peihao Chen, Xiangmin Xu
Vesper: A Compact and Effective Pretrained Model for Speech Emotion Recognition
13 pages, 5 figures, 8 tables
null
null
null
cs.SD cs.CL eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper presents a paradigm that adapts general large-scale pretrained models (PTMs) to the speech emotion recognition task. Although PTMs shed new light on artificial general intelligence, they are constructed with general tasks in mind, and thus their efficacy for specific tasks can be further improved. Additionally, employing PTMs in practical applications can be challenging due to their considerable size. These limitations spawn another research direction, namely, optimizing large-scale PTMs for specific tasks to generate task-specific PTMs that are both compact and effective. In this paper, we focus on the speech emotion recognition task and propose an improved emotion-specific pretrained encoder called Vesper. Vesper is pretrained on a speech dataset based on WavLM and takes into account emotional characteristics. To enhance sensitivity to emotional information, Vesper employs an emotion-guided masking strategy to identify the regions that need masking. Subsequently, Vesper employs hierarchical and cross-layer self-supervision to improve its ability to capture acoustic and semantic representations, both of which are crucial for emotion recognition. Experimental results on the IEMOCAP, MELD, and CREMA-D datasets demonstrate that Vesper with 4 layers outperforms WavLM Base with 12 layers, and that the performance of Vesper with 12 layers surpasses that of WavLM Large with 24 layers.
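A toy version of emotion-guided masking is sketched below, using per-frame energy as a stand-in for the salience cue (the paper's actual criterion may differ):

```python
# Toy emotion-guided masking: frames carrying more energy (a crude proxy
# for emotional salience) are masked preferentially during pretraining.
import numpy as np

def emotion_guided_mask(frames: np.ndarray, mask_ratio: float = 0.3) -> np.ndarray:
    """frames: (T, D) feature matrix. Returns a boolean mask of shape (T,)."""
    energy = (frames ** 2).sum(axis=1)           # per-frame energy
    n_mask = int(mask_ratio * len(frames))
    salient = np.argsort(energy)[::-1][:n_mask]  # highest-energy frames
    mask = np.zeros(len(frames), dtype=bool)
    mask[salient] = True
    return mask

T, D = 100, 80
feats = np.random.randn(T, D)
mask = emotion_guided_mask(feats)
feats_masked = feats.copy()
feats_masked[mask] = 0.0  # the model must reconstruct these regions
```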
[ { "version": "v1", "created": "Thu, 20 Jul 2023 10:42:16 GMT" } ]
2023-07-21T00:00:00
[ [ "Chen", "Weidong", "" ], [ "Xing", "Xiaofen", "" ], [ "Chen", "Peihao", "" ], [ "Xu", "Xiangmin", "" ] ]
new_dataset
0.999711
2307.10781
Ahmad Rostami
Ahmad Rostami, Dhruvin Patel, Madhusudan Giyyarpuram, Finn Pedersen
5G Non-Public Network for Industrial IoT: Operation Models
null
null
null
null
cs.NI
http://creativecommons.org/licenses/by-nc-nd/4.0/
5G non-public networks (NPNs) play a key role in enabling critical Industrial Internet of Things (IoT) applications in various vertical industries. Among other features, 5G NPNs enable novel operation models, where the roles and responsibilities for setting up and operating the network can be distributed among several stakeholders, i.e., among the public mobile network operators (MNOs), the industrial party that uses the 5G NPN services, and third parties. This results in many theoretically feasible operation models for 5G NPNs, each with its own advantages and disadvantages. We investigate the resulting operation models and identify a set of nine prime models, taking into account today's practical considerations. Additionally, we define a framework to qualitatively analyze the operation models and use it to evaluate and compare the identified operation models.
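The size of the design space can be illustrated with a quick enumeration; the four responsibilities below are an assumed split, chosen only to show how the combinatorics get narrowed down to a handful of practical models:

```python
# Back-of-the-envelope enumeration of NPN operation models: assign each
# responsibility to one of three stakeholders. The responsibility list is
# an assumption for illustration; the paper narrows the full space to nine.
from itertools import product

stakeholders = ["MNO", "industrial_party", "third_party"]
responsibilities = ["RAN", "core_network", "spectrum", "operation"]

all_models = list(product(stakeholders, repeat=len(responsibilities)))
print(f"{len(all_models)} theoretically feasible combinations")  # 3^4 = 81

# Example filter: spectrum is typically held by the MNO or licensed
# locally to the industrial party, not by an arbitrary third party.
feasible = [m for m in all_models
            if m[responsibilities.index("spectrum")] != "third_party"]
print(f"{len(feasible)} remain after one practical constraint")
```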
[ { "version": "v1", "created": "Thu, 20 Jul 2023 11:30:32 GMT" } ]
2023-07-21T00:00:00
[ [ "Rostami", "Ahmad", "" ], [ "Patel", "Dhruvin", "" ], [ "Giyyarpuram", "Madhusudan", "" ], [ "Pedersen", "Finn", "" ] ]
new_dataset
0.992602
2307.10814
Richard Sutcliffe
Ephrem Afele Retta, Richard Sutcliffe, Jabar Mahmood, Michael Abebe Berwo, Eiad Almekhlafi, Sajjad Ahmed Khan, Shehzad Ashraf Chaudhry, Mustafa Mhamed, Jun Feng
Cross-Corpus Multilingual Speech Emotion Recognition: Amharic vs. Other Languages
16 pages, 9 tables, 5 figures
null
null
null
cs.CL cs.NE cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In a conventional speech emotion recognition (SER) task, a classifier for a given language is trained on a pre-existing dataset for that same language. However, where training data for a language does not exist, data from other languages can be used instead. We experiment with cross-lingual and multilingual SER, working with Amharic, English, German and Urdu. For Amharic, we use our own publicly-available Amharic Speech Emotion Dataset (ASED). For English, German and Urdu we use the existing RAVDESS, EMO-DB and URDU datasets. We followed previous research in mapping labels for all datasets to just two classes, positive and negative. Thus we can compare performance on different languages directly, and combine languages for training and testing. In Experiment 1, monolingual SER trials were carried out using three classifiers, AlexNet, VGGE (a proposed variant of VGG), and ResNet50. Results averaged over the three models were very similar for ASED and RAVDESS, suggesting that Amharic and English SER are equally difficult. By comparison, German SER is more difficult and Urdu SER is easier. In Experiment 2, we trained on one language and tested on another, in both directions for each pair: Amharic<->German, Amharic<->English, and Amharic<->Urdu. Results with Amharic as target suggested that using English or German as source will give the best result. In Experiment 3, we trained on several non-Amharic languages and then tested on Amharic. The best accuracy obtained was several percent greater than the best accuracy in Experiment 2, suggesting that a better result can be obtained when using two or three non-Amharic languages for training than when using just one non-Amharic language. Overall, the results suggest that cross-lingual and multilingual training can be an effective strategy for training an SER classifier when resources for a language are scarce.
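The protocol reduces to a simple loop once every corpus is mapped to the shared binary scheme; the sketch below assumes illustrative label names and stubbed training/evaluation functions:

```python
# Sketch of the cross-corpus protocol: map every corpus to the shared
# binary scheme, then train on one language and test on another.
# Label names and the train/evaluate callables are illustrative stubs.
POSITIVE = {"happy", "neutral", "calm", "surprise"}   # assumed mapping
NEGATIVE = {"angry", "sad", "fear", "disgust"}

def to_binary(label: str) -> int:
    return 1 if label.lower() in POSITIVE else 0

def cross_lingual_eval(datasets, train_model, evaluate):
    """datasets: {lang: (features, labels)}; returns accuracy per (src, tgt) pair."""
    results = {}
    for src, (x_tr, y_tr) in datasets.items():
        model = train_model(x_tr, [to_binary(y) for y in y_tr])
        for tgt, (x_te, y_te) in datasets.items():
            if src == tgt:
                continue
            acc = evaluate(model, x_te, [to_binary(y) for y in y_te])
            results[(src, tgt)] = acc
    return results
```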
[ { "version": "v1", "created": "Thu, 20 Jul 2023 12:24:23 GMT" } ]
2023-07-21T00:00:00
[ [ "Retta", "Ephrem Afele", "" ], [ "Sutcliffe", "Richard", "" ], [ "Mahmood", "Jabar", "" ], [ "Berwo", "Michael Abebe", "" ], [ "Almekhlafi", "Eiad", "" ], [ "Khan", "Sajjad Ahmed", "" ], [ "Chaudhry", "Shehzad Ashraf", "" ], [ "Mhamed", "Mustafa", "" ], [ "Feng", "Jun", "" ] ]
new_dataset
0.999738
2307.10837
Zhen Gao
Li Qiao, Anwen Liao, Zhuoran Li, Hua Wang, Zhen Gao, Xiang Gao, Yu Su, Pei Xiao, Li You, and Derrick Wing Kwan Ng
Sensing User's Activity, Channel, and Location with Near-Field Extra-Large-Scale MIMO
Submitted to IEEE Transactions on Communications, Major revision. Codes will be open to all on https://gaozhen16.github.io/ soon
null
null
null
cs.IT eess.SP math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper proposes a grant-free massive access scheme based on the millimeter wave (mmWave) extra-large-scale multiple-input multiple-output (XL-MIMO) to support massive Internet-of-Things (IoT) devices with low latency, high data rate, and high localization accuracy in the upcoming sixth-generation (6G) networks. The XL-MIMO consists of multiple antenna subarrays that are widely spaced over the service area to ensure line-of-sight (LoS) transmissions. First, we establish the XL-MIMO-based massive access model considering the near-field spatial non-stationary (SNS) property. Then, by exploiting the block sparsity of subarrays and the SNS property, we propose a structured block orthogonal matching pursuit algorithm for efficient active user detection (AUD) and channel estimation (CE). Furthermore, different sensing matrices are applied in different pilot subcarriers for exploiting the diversity gains. Additionally, a multi-subarray collaborative localization algorithm is designed for localization. In particular, the angle of arrival (AoA) and time difference of arrival (TDoA) of the LoS links between active users and related subarrays are extracted from the estimated XL-MIMO channels, and then the coordinates of active users are acquired by jointly utilizing the AoAs and TDoAs. Simulation results show that the proposed algorithms outperform existing algorithms in terms of AUD and CE performance and can achieve centimeter-level localization accuracy.
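A stripped-down 2D flavour of the geometric step is sketched below: with LoS AoAs from two widely spaced subarrays, the user lies at the intersection of the two bearing lines (the paper additionally fuses TDoAs and works with estimated channels, so this toy with known angles is illustrative only):

```python
# Two-subarray AoA localization in 2D: solve p1 + t1*d1 = p2 + t2*d2.
import numpy as np

def locate_from_aoas(p1, theta1, p2, theta2):
    """p1, p2: subarray positions (2,); theta: AoA in radians."""
    d1 = np.array([np.cos(theta1), np.sin(theta1)])
    d2 = np.array([np.cos(theta2), np.sin(theta2)])
    # Linear 2x2 system in the unknown range scalars t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.solve(A, np.asarray(p2) - np.asarray(p1))
    return np.asarray(p1) + t[0] * d1

user = np.array([12.0, 7.0])
p1, p2 = np.array([0.0, 0.0]), np.array([30.0, 0.0])
theta1 = np.arctan2(*(user - p1)[::-1])  # true AoAs, for the demo
theta2 = np.arctan2(*(user - p2)[::-1])
print(locate_from_aoas(p1, theta1, p2, theta2))  # ~[12., 7.]
```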
[ { "version": "v1", "created": "Thu, 20 Jul 2023 12:57:15 GMT" } ]
2023-07-21T00:00:00
[ [ "Qiao", "Li", "" ], [ "Liao", "Anwen", "" ], [ "Li", "Zhuoran", "" ], [ "Wang", "Hua", "" ], [ "Gao", "Zhen", "" ], [ "Gao", "Xiang", "" ], [ "Su", "Yu", "" ], [ "Xiao", "Pei", "" ], [ "You", "Li", "" ], [ "Ng", "Derrick Wing Kwan", "" ] ]
new_dataset
0.993141
2307.10847
Jan Matyáš Křišťan
Jan Matyáš Křišťan, Jakub Svoboda
Shortest Dominating Set Reconfiguration under Token Sliding
To appear at FCT 2023 (Fundamentals of Computation Theory)
null
null
null
cs.DS cs.DM math.CO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we present novel algorithms that efficiently compute a shortest reconfiguration sequence between two given dominating sets in trees and interval graphs under the Token Sliding model. In this problem, a graph is provided along with its two dominating sets, which can be imagined as tokens placed on vertices. The objective is to find a shortest sequence of dominating sets that transforms one set into the other, with each set in the sequence resulting from sliding a single token in the previous set. While identifying any such sequence has been well studied, our work presents the first polynomial-time algorithms for this optimization variant in the context of dominating sets.
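For intuition, a brute-force breadth-first search solves the same optimization on small instances (exponential in general, which is exactly what the paper's polynomial-time algorithms for trees and interval graphs avoid):

```python
# Brute-force baseline for shortest dominating-set reconfiguration under
# Token Sliding. Graph is an adjacency dict mapping vertex -> set of
# neighbours; every intermediate set must itself be dominating.
from collections import deque

def dominates(graph, s):
    return all(v in s or s & graph[v] for v in graph)

def shortest_reconfiguration(graph, start, goal):
    start, goal = frozenset(start), frozenset(goal)
    dist, queue = {start: 0}, deque([start])
    while queue:
        cur = queue.popleft()
        if cur == goal:
            return dist[cur]
        for u in cur:                      # slide one token u -> w
            for w in graph[u] - cur:
                nxt = (cur - {u}) | {w}
                if nxt not in dist and dominates(graph, nxt):
                    dist[nxt] = dist[cur] + 1
                    queue.append(nxt)
    return None  # goal unreachable

# Path 0-1-2-3-4: one slide (3 -> 4) turns {1, 3} into {1, 4}.
graph = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(shortest_reconfiguration(graph, {1, 3}, {1, 4}))  # 1
```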
[ { "version": "v1", "created": "Thu, 20 Jul 2023 13:11:01 GMT" } ]
2023-07-21T00:00:00
[ [ "Křišťan", "Jan Matyáš", "" ], [ "Svoboda", "Jakub", "" ] ]
new_dataset
0.981621
2307.10934
Harshith Mohan Kumar
Aditya Nalgunda Ganesh and Dhruval Pobbathi Badrinath and Harshith Mohan Kumar and Priya SS and Surabhi Narayan
OCTraN: 3D Occupancy Convolutional Transformer Network in Unstructured Traffic Scenarios
This work was accepted as a spotlight presentation at the Transformers for Vision Workshop @CVPR 2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern approaches for vision-centric environment perception for autonomous navigation make extensive use of self-supervised monocular depth estimation algorithms that output disparity maps. However, when this disparity map is projected onto 3D space, the errors in disparity are magnified, resulting in a depth estimation error that increases quadratically as the distance from the camera increases. Though Light Detection and Ranging (LiDAR) can solve this issue, it is expensive and not feasible for many applications. To address the challenge of accurate ranging with low-cost sensors, we propose OCTraN, a transformer architecture that uses iterative attention to convert 2D image features into 3D occupancy features and makes use of convolution and transpose convolution to operate efficiently on spatial information. We also develop a self-supervised training pipeline to generalize the model to any scene by eliminating the need for LiDAR ground truth, substituting it with pseudo-ground-truth labels obtained from boosted monocular depth estimation.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 15:06:44 GMT" } ]
2023-07-21T00:00:00
[ [ "Ganesh", "Aditya Nalgunda", "" ], [ "Badrinath", "Dhruval Pobbathi", "" ], [ "Kumar", "Harshith Mohan", "" ], [ "SS", "Priya", "" ], [ "Narayan", "Surabhi", "" ] ]
new_dataset
0.976223
2307.10953
Xiangchen Yin
Xiangchen Yin, Zhenda Yu, Zetao Fei, Wenjun Lv, Xin Gao
PE-YOLO: Pyramid Enhancement Network for Dark Object Detection
Accepted at ICANN 2023
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Current object detection models achieve good results on many benchmark datasets, but detecting objects in dark conditions remains a major challenge. To address this issue, we propose a pyramid enhancement network (PENet) and combine it with YOLOv3 to build a dark object detection framework named PE-YOLO. First, PENet decomposes the image into four components of different resolutions using the Laplacian pyramid. Specifically, we propose a detail processing module (DPM) to enhance the detail of images, which consists of a context branch and an edge branch. In addition, we propose a low-frequency enhancement filter (LEF) to capture low-frequency semantics and suppress high-frequency noise. PE-YOLO adopts an end-to-end joint training approach and uses only the normal detection loss to simplify the training process. We conduct experiments on the low-light object detection dataset ExDark to demonstrate the effectiveness of our method. The results indicate that, compared with other dark detectors and low-light enhancement models, PE-YOLO achieves state-of-the-art results of 78.0% mAP at 53.6 FPS, and can adapt to object detection under different low-light conditions. The code is available at https://github.com/XiangchenYin/PE-YOLO.
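The decomposition step can be reproduced with a standard Laplacian pyramid, sketched below without the enhancement modules; the perfect-reconstruction check shows why the four components carry all of the image's information:

```python
# Laplacian-pyramid decomposition into four components (the enhancement
# modules of PENet are omitted; this shows only the standard pyramid).
import cv2
import numpy as np

def laplacian_pyramid(img: np.ndarray, levels: int = 4):
    pyramid, cur = [], img.astype(np.float32)
    for _ in range(levels - 1):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyramid.append(cur - up)   # band-pass detail at this scale
        cur = down
    pyramid.append(cur)            # low-frequency residual
    return pyramid

img = np.random.rand(256, 256, 3).astype(np.float32)
components = laplacian_pyramid(img)
print([c.shape for c in components])
# [(256, 256, 3), (128, 128, 3), (64, 64, 3), (32, 32, 3)]

# Perfect-reconstruction check: upsample-and-add from the coarsest level.
rec = components[-1]
for lap in reversed(components[:-1]):
    rec = cv2.pyrUp(rec, dstsize=(lap.shape[1], lap.shape[0])) + lap
assert np.allclose(rec, img, atol=1e-4)
```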
[ { "version": "v1", "created": "Thu, 20 Jul 2023 15:25:55 GMT" } ]
2023-07-21T00:00:00
[ [ "Yin", "Xiangchen", "" ], [ "Yu", "Zhenda", "" ], [ "Fei", "Zetao", "" ], [ "Lv", "Wenjun", "" ], [ "Gao", "Xin", "" ] ]
new_dataset
0.992343
2307.10954
Xi Fang
Xi Fang, Daeseung Kim, Xuanang Xu, Tianshu Kuang, Nathan Lampen, Jungwook Lee, Hannah H. Deng, Jaime Gateno, Michael A.K. Liebschner, James J. Xia, Pingkun Yan
Soft-tissue Driven Craniomaxillofacial Surgical Planning
Early accepted by MICCAI 2023
null
null
null
cs.RO cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In craniomaxillofacial (CMF) surgery, the planning of bony movement to achieve a desired facial outcome is a challenging task. Current bone-driven approaches focus on normalizing the bone with the expectation that the facial appearance will be corrected accordingly. However, due to the complex non-linear relationship between bony structure and facial soft-tissue, such bone-driven methods are insufficient to correct facial deformities. Despite efforts to simulate facial changes resulting from bony movement, surgical planning still relies on iterative revisions and educated guesses. To address these issues, we propose a soft-tissue driven framework that can automatically create and verify surgical plans. Our framework consists of a bony planner network that estimates the bony movements required to achieve the desired facial outcome and a facial simulator network that can simulate the possible facial changes resulting from the estimated bony movement plans. By combining these two models, we can verify and determine the final bony movement required for planning. The proposed framework was evaluated using a clinical dataset, and our experimental results demonstrate that the soft-tissue driven approach greatly improves the accuracy and efficacy of surgical planning when compared to the conventional bone-driven approach.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 15:26:01 GMT" } ]
2023-07-21T00:00:00
[ [ "Fang", "Xi", "" ], [ "Kim", "Daeseung", "" ], [ "Xu", "Xuanang", "" ], [ "Kuang", "Tianshu", "" ], [ "Lampen", "Nathan", "" ], [ "Lee", "Jungwook", "" ], [ "Deng", "Hannah H.", "" ], [ "Gateno", "Jaime", "" ], [ "Liebschner", "Michael A. K.", "" ], [ "Xia", "James J.", "" ], [ "Yan", "Pingkun", "" ] ]
new_dataset
0.992343
2307.10955
Shaowu Peng
Shaowu Peng, Pengcheng Zhao, Yongyu Ye, Junying Chen, Yunbing Chang, Xiaoqing Zheng
Spinal nerve segmentation method and dataset construction in endoscopic surgical scenarios
Accepted by MICCAI 2023
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Endoscopic surgery is currently an important treatment method in the field of spinal surgery and avoiding damage to the spinal nerves through video guidance is a key challenge. This paper presents the first real-time segmentation method for spinal nerves in endoscopic surgery, which provides crucial navigational information for surgeons. A finely annotated segmentation dataset of approximately 10,000 consecutive frames recorded during surgery is constructed for the first time for this field, addressing the problem of semantic segmentation. Based on this dataset, we propose FUnet (Frame-Unet), which achieves state-of-the-art performance by utilizing inter-frame information and self-attention mechanisms. We also conduct extended experiments on a similar polyp endoscopy video dataset and show that the model has good generalization ability with advantageous performance. The dataset and code of this work are presented at: https://github.com/zzzzzzpc/FUnet .
[ { "version": "v1", "created": "Thu, 20 Jul 2023 15:26:57 GMT" } ]
2023-07-21T00:00:00
[ [ "Peng", "Shaowu", "" ], [ "Zhao", "Pengcheng", "" ], [ "Ye", "Yongyu", "" ], [ "Chen", "Junying", "" ], [ "Chang", "Yunbing", "" ], [ "Zheng", "Xiaoqing", "" ] ]
new_dataset
0.999701
2307.11023
Saleh Kalantari
Tong Bill Xu and Saleh Kalantari
Visual Flow-based Programming Plugin for Brain Computer Interface in Computer-Aided Design
null
null
null
null
cs.HC cs.SE
http://creativecommons.org/licenses/by/4.0/
Over the last half century, the main applications of Brain-Computer Interfaces (BCIs) have been controlling wheelchairs and neural prostheses or generating text or commands for people with restricted mobility. There has been very limited attention in the field to applications for computer-aided design, despite the potential of BCIs to provide a new form of environmental interaction. In this paper we introduce the development and application of Neuron, a novel BCI tool that enables designers with little experience in neuroscience or computer programming to gain access to neurological data along with established metrics relevant to design, to create BCI interaction prototypes with both digital onscreen objects and physical devices, and to evaluate designs based on neurological information and record measurements for further analysis. After discussing the development of the BCI tool, the article presents its capabilities through two case studies, along with a brief evaluation of the tool's performance and a discussion of implications, limitations, and future improvements.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 16:50:39 GMT" } ]
2023-07-21T00:00:00
[ [ "Xu", "Tong Bill", "" ], [ "Kalantari", "Saleh", "" ] ]
new_dataset
0.995928
2307.11057
Lê Thành Dũng (Tito) Nguyên
Lê Thành Dũng Nguyên, Camille Noûs, Cécilia Pradic
Two-way automata and transducers with planar behaviours are aperiodic
18 pages, DMTCS submission
null
null
null
cs.FL
http://creativecommons.org/licenses/by/4.0/
We consider a notion of planarity for two-way finite automata and transducers, inspired by Temperley-Lieb monoids of planar diagrams. We show that this restriction captures star-free languages and first-order transductions.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 17:37:48 GMT" } ]
2023-07-21T00:00:00
[ [ "Nguyên", "Lê Thành Dũng", "" ], [ "Noûs", "Camille", "" ], [ "Pradic", "Cécilia", "" ] ]
new_dataset
0.986736
2307.11073
Oscar Michel
Oscar Michel, Anand Bhattad, Eli VanderBilt, Ranjay Krishna, Aniruddha Kembhavi, Tanmay Gupta
OBJECT 3DIT: Language-guided 3D-aware Image Editing
null
null
null
null
cs.CV cs.AI cs.GR
http://creativecommons.org/licenses/by/4.0/
Existing image editing tools, while powerful, typically disregard the underlying 3D geometry from which the image is projected. As a result, edits made using these tools may become detached from the geometry and lighting conditions that are at the foundation of the image formation process. In this work, we formulate the new task of language-guided 3D-aware editing, where objects in an image should be edited according to a language instruction in the context of the underlying 3D scene. To promote progress towards this goal, we release OBJECT: a dataset consisting of 400K editing examples created from procedurally generated 3D scenes. Each example consists of an input image, an editing instruction in language, and the edited image. We also introduce 3DIT: single and multi-task models for four editing tasks. Our models show impressive abilities to understand the 3D composition of entire scenes, factoring in surrounding objects, surfaces, lighting conditions, shadows, and physically-plausible object configurations. Surprisingly, despite being trained only on synthetic scenes from OBJECT, the editing capabilities of 3DIT generalize to real-world images.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 17:53:46 GMT" } ]
2023-07-21T00:00:00
[ [ "Michel", "Oscar", "" ], [ "Bhattad", "Anand", "" ], [ "VanderBilt", "Eli", "" ], [ "Krishna", "Ranjay", "" ], [ "Kembhavi", "Aniruddha", "" ], [ "Gupta", "Tanmay", "" ] ]
new_dataset
0.999832
2307.11086
Shichong Peng
Yanshu Zhang, Shichong Peng, Alireza Moazeni, Ke Li
PAPR: Proximity Attention Point Rendering
null
null
null
null
cs.CV cs.AI cs.GR cs.LG cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Learning accurate and parsimonious point cloud representations of scene surfaces from scratch remains a challenge in 3D representation learning. Existing point-based methods often suffer from the vanishing gradient problem or require a large number of points to accurately model scene geometry and texture. To address these limitations, we propose Proximity Attention Point Rendering (PAPR), a novel method that consists of a point-based scene representation and a differentiable renderer. Our scene representation uses a point cloud where each point is characterized by its spatial position, foreground score, and view-independent feature vector. The renderer selects the relevant points for each ray and produces accurate colours using their associated features. PAPR effectively learns point cloud positions to represent the correct scene geometry, even when the initialization drastically differs from the target geometry. Notably, our method captures fine texture details while using only a parsimonious set of points. We also demonstrate four practical applications of our method: geometry editing, object manipulation, texture transfer, and exposure control. More results and code are available on our project website at https://zvict.github.io/papr/.
[ { "version": "v1", "created": "Thu, 20 Jul 2023 17:59:33 GMT" } ]
2023-07-21T00:00:00
[ [ "Zhang", "Yanshu", "" ], [ "Peng", "Shichong", "" ], [ "Moazeni", "Alireza", "" ], [ "Li", "Ke", "" ] ]
new_dataset
0.9967
2201.03601
Arion Pons
Arion Pons and Fehmi Cirak
Multiaxis nose-pointing-and-shooting in a biomimetic morphing-wing aircraft
null
Journal of Guidance, Control, and Dynamics, 46(3), 2023
10.2514/1.G006381
null
cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern high-performance combat aircraft exceed conventional flight-envelope limits on maneuverability through the use of thrust vectoring, and so achieve supermaneuverability. With ongoing development of biomimetic unmanned aerial vehicles (UAVs), the potential for supermaneuverability through biomimetic mechanisms becomes apparent. So far, this potential has not been well studied: biomimetic UAVs have not yet been shown to be capable of any of the forms of classical supermaneuverability available to thrust-vectored aircraft. Here we show this capability, by demonstrating how biomimetic morphing-wing UAVs can perform sophisticated multiaxis nose-pointing-and-shooting (NPAS) maneuvers at low morphing complexity. Nonlinear flight-dynamic analysis is used to characterize the extent and stability of the multidimensional space of aircraft trim states that arises from biomimetic morphing. Navigating this trim space provides an effective model-based guidance strategy for generating open-loop NPAS maneuvers in simulation. Our results demonstrate the capability of biomimetic aircraft for air combat-relevant supermaneuverability, and provide strategies for the exploration, characterization, and guidance of further forms of classical and non-classical supermaneuverability in such aircraft.
[ { "version": "v1", "created": "Mon, 10 Jan 2022 19:11:07 GMT" } ]
2023-07-20T00:00:00
[ [ "Pons", "Arion", "" ], [ "Cirak", "Fehmi", "" ] ]
new_dataset
0.963196
2209.13780
Jingchao Peng
Jingchao Peng, Haitao Zhao, Kaijie Zhao, Zhongze Wang, Lujian Yao
CourtNet for Infrared Small-Target Detection
null
null
10.1016/j.eswa.2023.120996
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Infrared small-target detection (ISTD) is an important computer vision task. ISTD aims at separating small targets from complex background clutter. The infrared radiation decays over distance, making the targets highly dim and prone to confusion with the background clutter, which makes it challenging for a detector to balance the precision and recall rates. To deal with this difficulty, this paper proposes a neural-network-based ISTD method called CourtNet, which has three sub-networks: the prosecution network is designed for improving the recall rate; the defendant network is devoted to increasing the precision rate; the jury network weights their results to adaptively balance the precision and recall rates. Furthermore, the prosecution network utilizes a densely connected transformer structure, which can prevent small targets from disappearing in the network's forward propagation. In addition, a fine-grained attention module is adopted to accurately locate the small targets. Experimental results show that CourtNet achieves the best F1-score on the two ISTD datasets, MFIRST (0.62) and SIRST (0.73).
[ { "version": "v1", "created": "Wed, 28 Sep 2022 02:16:24 GMT" }, { "version": "v2", "created": "Sat, 15 Apr 2023 07:16:17 GMT" } ]
2023-07-20T00:00:00
[ [ "Peng", "Jingchao", "" ], [ "Zhao", "Haitao", "" ], [ "Zhao", "Kaijie", "" ], [ "Wang", "Zhongze", "" ], [ "Yao", "Lujian", "" ] ]
new_dataset
0.993257
2211.12955
Weijie Yuan
Weijie Yuan, Shuangyang Li, Zhiqiang Wei, Yuanhao Cui, Jiamo Jiang, Haijun Zhang, Pingzhi Fan
New Delay Doppler Communication Paradigm in 6G era: A Survey of Orthogonal Time Frequency Space (OTFS)
Survey paper on OTFS, accepted by China Communications; Cover paper of the 6th issue
China Communications. 2023, 20(6): 1-25
10.23919/JCC.fa.2022-0578.202306
null
cs.IT eess.SP math.IT
http://creativecommons.org/licenses/by/4.0/
In the 6G era, space-air-ground integrated networks (SAGIN) are anticipated to deliver global coverage, necessitating support for a diverse array of emerging applications in high-mobility, hostile environments. Under such conditions, conventional orthogonal frequency division multiplexing (OFDM) modulation, widely employed in cellular and Wi-Fi communication systems, experiences performance degradation due to significant Doppler shifts. To overcome this obstacle, a novel two-dimensional (2D) modulation approach, namely orthogonal time frequency space (OTFS), has emerged as a key enabler for future high-mobility use cases. Distinctively, OTFS modulates information within the delay-Doppler (DD) domain, as opposed to the time-frequency (TF) domain utilized by OFDM. This offers advantages such as Doppler and delay resilience, reduced signaling latency, a lower peak-to-average ratio (PAPR), and a reduced-complexity implementation. Recent studies further indicate that the direct interplay between information and the physical world in the DD domain positions OTFS as a promising waveform for achieving integrated sensing and communications (ISAC). In this article, we present an in-depth review of OTFS technology in the context of the 6G era, encompassing fundamentals, recent advancements, and future directions. Our objective is to provide a valuable resource for researchers engaged in the field of OTFS.
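On the grid level, the DD/TF relationship is just a pair of symplectic Fourier transforms; the sketch below fixes one self-consistent convention (axis order and normalization vary across papers) and verifies the round trip:

```python
# NumPy realization of the OTFS grid mapping: the ISFFT takes
# delay-Doppler symbols to the time-frequency domain, the SFFT maps back.
import numpy as np

def isfft(x_dd):
    """Delay-Doppler (N Doppler bins x M delay bins) -> time-frequency."""
    N, M = x_dd.shape
    return np.sqrt(N / M) * np.fft.fft(np.fft.ifft(x_dd, axis=0), axis=1)

def sfft(x_tf):
    """Time-frequency -> delay-Doppler (inverse of isfft above)."""
    N, M = x_tf.shape
    return np.sqrt(M / N) * np.fft.ifft(np.fft.fft(x_tf, axis=0), axis=1)

N, M = 16, 64                      # Doppler x delay grid
qpsk = (1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j)
x_dd = np.random.default_rng(0).choice(qpsk, size=(N, M)) / np.sqrt(2)
x_tf = isfft(x_dd)                 # what a TF modulator (e.g. OFDM) sends
assert np.allclose(sfft(x_tf), x_dd)  # round trip recovers the DD symbols
```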
[ { "version": "v1", "created": "Wed, 23 Nov 2022 13:55:47 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 02:27:20 GMT" } ]
2023-07-20T00:00:00
[ [ "Yuan", "Weijie", "" ], [ "Li", "Shuangyang", "" ], [ "Wei", "Zhiqiang", "" ], [ "Cui", "Yuanhao", "" ], [ "Jiang", "Jiamo", "" ], [ "Zhang", "Haijun", "" ], [ "Fan", "Pingzhi", "" ] ]
new_dataset
0.993422
2212.07253
Sean Moran
Sae Young Moon, Gregor Kerr, Fran Silavong, Sean Moran
API-Miner: an API-to-API Specification Recommendation Engine
null
null
null
null
cs.SE cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
When designing a new API for a large project, developers need to make smart design choices so that their code base can grow sustainably. To ensure that new API components are well designed, developers can learn from existing API components. However, the lack of standardized methods for comparing API designs makes this learning process time-consuming and difficult. To address this gap we developed API-Miner, to the best of our knowledge, one of the first API-to-API specification recommendation engines. API-Miner retrieves relevant specification components written in OpenAPI (a widely adopted language used to describe web APIs). API-Miner presents several significant contributions, including: (1) novel methods of processing and extracting key information from OpenAPI specifications, (2) innovative feature extraction techniques that are optimized for the highly technical API specification domain, and (3) a novel log-linear probabilistic model that combines multiple signals to retrieve relevant and high-quality OpenAPI specification components given a query specification. We evaluate API-Miner in both quantitative and qualitative tasks and achieve an overall 91.7% recall@1 and 56.2% F1, surpassing baseline performance by 15.4% in recall@1 and 3.2% in F1. Overall, API-Miner will allow developers to retrieve relevant OpenAPI specification components from a public or internal database in the early stages of the API development cycle, so that they can learn from existing established examples and potentially identify redundancies in their work. It provides the guidance developers need to accelerate the development process and contribute thoughtfully designed APIs that promote code maintainability and quality. Code is available on GitHub at https://github.com/jpmorganchase/api-miner.
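A generic log-linear scorer of the kind described can be sketched in a few lines; the feature functions and weights below are placeholders, not API-Miner's actual signals:

```python
# Generic log-linear retrieval scorer: weighted signals combined in the
# exponent, candidates ranked by score. All features here are toy stand-ins.
import math

WEIGHTS = {"token_overlap": 1.5, "endpoint_similarity": 2.0, "schema_depth": 0.3}

def features(query_spec: dict, candidate_spec: dict) -> dict:
    q = set(query_spec.get("tokens", []))
    c = set(candidate_spec.get("tokens", []))
    return {
        "token_overlap": len(q & c) / max(len(q | c), 1),  # Jaccard
        "endpoint_similarity": float(query_spec.get("method") == candidate_spec.get("method")),
        "schema_depth": -abs(query_spec.get("depth", 0) - candidate_spec.get("depth", 0)),
    }

def score(query_spec, candidate_spec) -> float:
    f = features(query_spec, candidate_spec)
    return math.exp(sum(WEIGHTS[k] * f[k] for k in WEIGHTS))

def rank(query_spec, corpus):
    return sorted(corpus, key=lambda c: score(query_spec, c), reverse=True)
```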
[ { "version": "v1", "created": "Wed, 14 Dec 2022 14:43:51 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 17:30:33 GMT" } ]
2023-07-20T00:00:00
[ [ "Moon", "Sae Young", "" ], [ "Kerr", "Gregor", "" ], [ "Silavong", "Fran", "" ], [ "Moran", "Sean", "" ] ]
new_dataset
0.980999
2212.10551
Fei Yuan
Fei Yuan, Yinquan Lu, WenHao Zhu, Lingpeng Kong, Lei Li, Yu Qiao, Jingjing Xu
Lego-MT: Learning Detachable Models for Massively Multilingual Machine Translation
ACL 2023 Findings
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Multilingual neural machine translation (MNMT) aims to build a unified model for many language directions. Existing monolithic models for MNMT encounter two challenges: parameter interference among languages and inefficient inference for large models. In this paper, we revisit the classic multi-way structures and develop a detachable model by assigning each language (or group of languages) to an individual branch that supports plug-and-play training and inference. To address the needs of learning representations for all languages in a unified space, we propose a novel efficient training recipe, upon which we build an effective detachable model, Lego-MT. For a fair comparison, we collect data from OPUS and build a translation benchmark covering 433 languages and 1.3B parallel data. Experiments show that Lego-MT with 1.2B parameters brings an average gain of 3.2 spBLEU. It even outperforms M2M-100 with 12B parameters. The proposed training recipe brings a 28.2$\times$ speedup over the conventional multi-way training method.\footnote{ \url{https://github.com/CONE-MT/Lego-MT}.}
[ { "version": "v1", "created": "Tue, 20 Dec 2022 18:54:08 GMT" }, { "version": "v2", "created": "Mon, 29 May 2023 03:39:44 GMT" }, { "version": "v3", "created": "Wed, 19 Jul 2023 05:52:32 GMT" } ]
2023-07-20T00:00:00
[ [ "Yuan", "Fei", "" ], [ "Lu", "Yinquan", "" ], [ "Zhu", "WenHao", "" ], [ "Kong", "Lingpeng", "" ], [ "Li", "Lei", "" ], [ "Qiao", "Yu", "" ], [ "Xu", "Jingjing", "" ] ]
new_dataset
0.997522
2301.02307
Kumar Ashutosh
Kumar Ashutosh, Rohit Girdhar, Lorenzo Torresani, Kristen Grauman
What You Say Is What You Show: Visual Narration Detection in Instructional Videos
Technical Report
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Narrated "how-to" videos have emerged as a promising data source for a wide range of learning problems, from learning visual representations to training robot policies. However, this data is extremely noisy, as the narrations do not always describe the actions demonstrated in the video. To address this problem we introduce the novel task of visual narration detection, which entails determining whether a narration is visually depicted by the actions in the video. We propose What You Say is What You Show (WYS^2), a method that leverages multi-modal cues and pseudo-labeling to learn to detect visual narrations with only weakly labeled data. Our model successfully detects visual narrations in in-the-wild videos, outperforming strong baselines, and we demonstrate its impact for state-of-the-art summarization and temporal alignment of instructional videos.
[ { "version": "v1", "created": "Thu, 5 Jan 2023 21:43:19 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 17:29:16 GMT" } ]
2023-07-20T00:00:00
[ [ "Ashutosh", "Kumar", "" ], [ "Girdhar", "Rohit", "" ], [ "Torresani", "Lorenzo", "" ], [ "Grauman", "Kristen", "" ] ]
new_dataset
0.969373
2303.01589
Xijun Wang
Xijun Wang, Ruiqi Xian, Tianrui Guan, Celso M. de Melo, Stephen M. Nogar, Aniket Bera, Dinesh Manocha
AZTR: Aerial Video Action Recognition with Auto Zoom and Temporal Reasoning
Accepted for publication at ICRA 2023
null
10.1109/ICRA48891.2023.10160564
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
We propose a novel approach for aerial video action recognition. Our method is designed for videos captured using UAVs and can run on edge or mobile devices. We present a learning-based approach that uses customized auto zoom to automatically identify the human target and scale it appropriately. This makes it easier to extract the key features and reduces the computational overhead. We also present an efficient temporal reasoning algorithm to capture the action information along the spatial and temporal domains within a controllable computational cost. Our approach has been implemented and evaluated both on the desktop with high-end GPUs and on the low power Robotics RB5 Platform for robots and drones. In practice, we achieve 6.1-7.4% improvement over SOTA in Top-1 accuracy on the RoCoG-v2 dataset, 8.3-10.4% improvement on the UAV-Human dataset and 3.2% improvement on the Drone Action dataset.
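The auto-zoom idea in miniature: crop around the detected human with some context padding, then rescale so the target dominates the input; detection itself is assumed given in this sketch:

```python
# Auto-zoom sketch: crop a padded human box and rescale it to the
# recognizer's input size. The padding factor and output size are
# illustrative choices, not the paper's exact parameters.
import cv2
import numpy as np

def auto_zoom(frame: np.ndarray, box, out_size=224, pad=0.2):
    """box = (x1, y1, x2, y2) in pixels; returns a zoomed crop."""
    h, w = frame.shape[:2]
    x1, y1, x2, y2 = box
    px, py = pad * (x2 - x1), pad * (y2 - y1)  # context margin
    x1, y1 = max(int(x1 - px), 0), max(int(y1 - py), 0)
    x2, y2 = min(int(x2 + px), w), min(int(y2 + py), h)
    crop = frame[y1:y2, x1:x2]
    return cv2.resize(crop, (out_size, out_size))

frame = np.zeros((480, 640, 3), dtype=np.uint8)
zoomed = auto_zoom(frame, (300, 120, 360, 300))
print(zoomed.shape)  # (224, 224, 3)
```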
[ { "version": "v1", "created": "Thu, 2 Mar 2023 21:24:19 GMT" } ]
2023-07-20T00:00:00
[ [ "Wang", "Xijun", "" ], [ "Xian", "Ruiqi", "" ], [ "Guan", "Tianrui", "" ], [ "de Melo", "Celso M.", "" ], [ "Nogar", "Stephen M.", "" ], [ "Bera", "Aniket", "" ], [ "Manocha", "Dinesh", "" ] ]
new_dataset
0.999561
2303.02775
Yuxiang Peng
Yuxiang Peng, Jacob Young, Pengyu Liu, Xiaodi Wu
SimuQ: A Domain-Specific Language For Quantum Simulation With Analog Compilation
26 pages, 12 figures, 6 tables. Code is available at https://github.com/PicksPeng/SimuQ. A website is available at https://pickspeng.github.io/SimuQ/
null
null
null
cs.PL quant-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Quantum Hamiltonian simulation, which simulates the evolution of quantum systems and probes quantum phenomena, is one of the most promising applications of quantum computing. Recent experimental results suggest that Hamiltonian-oriented analog quantum simulation would be advantageous over circuit-oriented digital quantum simulation in the Noisy Intermediate-Scale Quantum (NISQ) machine era. However, programming analog quantum simulators is much more challenging due to the lack of a unified interface between hardware and software. In this paper, we design and implement SimuQ, the first domain-specific language for quantum Hamiltonian simulation that supports pulse-level compilation to heterogeneous analog quantum simulators. Specifically, in SimuQ, front-end users specify the target quantum system with Hamiltonian Modeling Language, and the Hamiltonian-level programmability of analog quantum simulators is specified through a new abstraction called the abstract analog instruction set (AAIS) and programmed in AAIS Specification Language by hardware providers. Through a solver-based compilation, SimuQ generates executable pulse schedules for real devices to simulate the evolution of desired quantum systems, which is demonstrated on superconducting (IBM), neutral-atom (QuEra), and trapped-ion (IonQ) quantum devices. Moreover, we demonstrate the advantages of exposing the Hamiltonian-level programmability of devices with native operations or interaction-based gates and establish a small benchmark of quantum simulation to evaluate SimuQ's compiler with the above analog quantum simulators.
[ { "version": "v1", "created": "Sun, 5 Mar 2023 21:28:05 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 06:00:41 GMT" } ]
2023-07-20T00:00:00
[ [ "Peng", "Yuxiang", "" ], [ "Young", "Jacob", "" ], [ "Liu", "Pengyu", "" ], [ "Wu", "Xiaodi", "" ] ]
new_dataset
0.997552
2303.08096
Axel Levy
Axel Levy, Mark Matthews, Matan Sela, Gordon Wetzstein, Dmitry Lagun
MELON: NeRF with Unposed Images in SO(3)
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Neural radiance fields enable novel-view synthesis and scene reconstruction with photorealistic quality from a few images, but require known and accurate camera poses. Conventional pose estimation algorithms fail on smooth or self-similar scenes, while methods performing inverse rendering from unposed views require a rough initialization of the camera orientations. The main difficulty of pose estimation lies in real-life objects being almost invariant under certain transformations, making the photometric distance between rendered views non-convex with respect to the camera parameters. Using an equivalence relation that matches the distribution of local minima in camera space, we reduce this space to its quotient set, in which pose estimation becomes a more convex problem. Using a neural-network to regularize pose estimation, we demonstrate that our method - MELON - can reconstruct a neural radiance field from unposed images with state-of-the-art accuracy while requiring ten times fewer views than adversarial approaches.
[ { "version": "v1", "created": "Tue, 14 Mar 2023 17:33:39 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 08:19:58 GMT" } ]
2023-07-20T00:00:00
[ [ "Levy", "Axel", "" ], [ "Matthews", "Mark", "" ], [ "Sela", "Matan", "" ], [ "Wetzstein", "Gordon", "" ], [ "Lagun", "Dmitry", "" ] ]
new_dataset
0.998173
2303.11103
Jakob Hoydis
Jakob Hoydis, Fayçal Aït Aoudia, Sebastian Cammerer, Merlin Nimier-David, Nikolaus Binder, Guillermo Marcus, Alexander Keller
Sionna RT: Differentiable Ray Tracing for Radio Propagation Modeling
5 pages, 5 figures, update reflects new features of Sionna RT introduced in release v0.15
null
null
null
cs.IT cs.AI cs.LG cs.NI math.IT
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Sionna is a GPU-accelerated open-source library for link-level simulations based on TensorFlow. Since release v0.14 it integrates a differentiable ray tracer (RT) for the simulation of radio wave propagation. This unique feature allows for the computation of gradients of the channel impulse response and other related quantities with respect to many system and environment parameters, such as material properties, antenna patterns, array geometries, as well as transmitter and receiver orientations and positions. In this paper, we outline the key components of Sionna RT and showcase example applications such as learning radio materials and optimizing transmitter orientations by gradient descent. While classic ray tracing is a crucial tool for 6G research topics like reconfigurable intelligent surfaces, integrated sensing and communications, as well as user localization, differentiable ray tracing is a key enabler for many novel and exciting research directions, for example, digital twins.
[ { "version": "v1", "created": "Mon, 20 Mar 2023 13:40:11 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 14:42:10 GMT" } ]
2023-07-20T00:00:00
[ [ "Hoydis", "Jakob", "" ], [ "Aoudia", "Fayçal Aït", "" ], [ "Cammerer", "Sebastian", "" ], [ "Nimier-David", "Merlin", "" ], [ "Binder", "Nikolaus", "" ], [ "Marcus", "Guillermo", "" ], [ "Keller", "Alexander", "" ] ]
new_dataset
0.99933
2304.04578
Juan Ignacio Ibañez
Juan Ignacio Ibañez, Alexander Freier
Bitcoin's Carbon Footprint Revisited: Proof of Work Mining for Renewable Energy Expansion
A previous version of this paper was titled "Can Bitcoin Stop Climate Change? Proof of Work, Energy Consumption and Carbon Footprint (SoK)"
Challenges, EISSN 2078-1547, Published by MDPI
null
null
cs.DC cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite their potential in many respects, blockchain and distributed ledger technology (DLT) have been the target of criticism for the energy intensity of the proof-of-work (PoW) consensus algorithm in general and of Bitcoin mining in particular. However, mining is also believed to have the potential to drive net decarbonization and renewable penetration in the energy grid by providing ancillary and other services. In this paper, we systematize the state of the art in this regard. Although not completely absent from the literature, the extent to which flexible load response (FLR) through PoW mining may support grid decarbonization remains insufficiently studied and hence contested. We approach this research gap by systematizing both the strengths and the limitations of mining to provide FLR services for energy grids. We find that a net-decarbonizing effect led by renewable-based mining is indeed plausible.
[ { "version": "v1", "created": "Fri, 3 Feb 2023 19:53:55 GMT" }, { "version": "v2", "created": "Wed, 10 May 2023 20:44:11 GMT" }, { "version": "v3", "created": "Wed, 19 Jul 2023 13:19:09 GMT" } ]
2023-07-20T00:00:00
[ [ "Ibañez", "Juan Ignacio", "" ], [ "Freier", "Alexander", "" ] ]
new_dataset
0.975777
2304.05417
Fabio Poiesi
Luigi Riz, Andrea Caraffa, Matteo Bortolon, Mohamed Lamine Mekhalfi, Davide Boscaini, André Moura, José Antunes, André Dias, Hugo Silva, Andreas Leonidou, Christos Constantinides, Christos Keleshis, Dante Abate, Fabio Poiesi
The MONET dataset: Multimodal drone thermal dataset recorded in rural scenarios
Published in Computer Vision and Pattern Recognition (CVPR) Workshops 2023 - 6th Multimodal Learning and Applications Workshop
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
We present MONET, a new multimodal dataset captured using a thermal camera mounted on a drone that flew over rural areas, and recorded human and vehicle activities. We captured MONET to study the problem of object localisation and behaviour understanding of targets undergoing large-scale variations and being recorded from different and moving viewpoints. Target activities occur in two different land sites, each with unique scene structures and cluttered backgrounds. MONET consists of approximately 53K images featuring 162K manually annotated bounding boxes. Each image is timestamp-aligned with drone metadata that includes information about attitudes, speed, altitude, and GPS coordinates. MONET is different from previous thermal drone datasets because it features multimodal data, including rural scenes captured with thermal cameras containing both person and vehicle targets, along with trajectory information and metadata. We assessed the difficulty of the dataset in terms of transfer learning between the two sites and evaluated nine object detection algorithms to identify the open challenges associated with this type of data. Project page: https://github.com/fabiopoiesi/monet_dataset.
[ { "version": "v1", "created": "Tue, 11 Apr 2023 18:00:02 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 10:01:29 GMT" } ]
2023-07-20T00:00:00
[ [ "Riz", "Luigi", "" ], [ "Caraffa", "Andrea", "" ], [ "Bortolon", "Matteo", "" ], [ "Mekhalfi", "Mohamed Lamine", "" ], [ "Boscaini", "Davide", "" ], [ "Moura", "André", "" ], [ "Antunes", "José", "" ], [ "Dias", "André", "" ], [ "Silva", "Hugo", "" ], [ "Leonidou", "Andreas", "" ], [ "Constantinides", "Christos", "" ], [ "Keleshis", "Christos", "" ], [ "Abate", "Dante", "" ], [ "Poiesi", "Fabio", "" ] ]
new_dataset
0.999853
2304.10727
Seulki Park
Seulki Park, Daeho Um, Hajung Yoon, Sanghyuk Chun, Sangdoo Yun and Jin Young Choi
RoCOCO: Robustness Benchmark of MS-COCO to Stress-test Image-Text Matching Models
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a robustness benchmark for image-text matching models to assess their vulnerabilities. To this end, we insert adversarial texts and images into the search pool (i.e., gallery set) and evaluate models with the adversarial data. Specifically, we replace a word in the text to change the meaning of the text and mix images with different images to create perceptible changes in pixels. We assume that such explicit alterations would not deceive a robust model, as it should understand the holistic meaning of texts and images simultaneously. However, in our evaluations on the proposed benchmark, many state-of-the-art models show significant performance degradation, e.g., Recall@1: 81.9% $\rightarrow$ 64.5% in BLIP, 66.1% $\rightarrow$ 37.5% in VSE$\infty$, where the models favor adversarial texts/images over the original ones. This reveals that current vision-language models may not account for subtle changes or understand the overall context of texts and images. Our findings can provide insights for improving the robustness of vision-language models and devising more diverse stress-test methods for the cross-modal retrieval task. Source code and dataset will be available at https://github.com/pseulki/rococo.
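Both perturbations are simple to sketch: a one-word swap that changes the caption's meaning, and a pixel-level mix of two images; the antonym table below is a toy stand-in for the actual replacement procedure:

```python
# Toy versions of the two gallery perturbations: a meaning-flipping word
# swap and an image mix. The replacement table is illustrative only.
import numpy as np

ANTONYMS = {"man": "woman", "dog": "cat", "sitting": "standing"}  # toy table

def perturb_caption(caption: str) -> str:
    words = caption.split()
    for i, w in enumerate(words):
        if w.lower() in ANTONYMS:
            words[i] = ANTONYMS[w.lower()]
            break  # replace a single word, keeping the sentence fluent
    return " ".join(words)

def mix_images(img_a: np.ndarray, img_b: np.ndarray, alpha=0.5) -> np.ndarray:
    """Blend two same-shape uint8 images into a perceptibly altered one."""
    mixed = alpha * img_a.astype(np.float32) + (1 - alpha) * img_b.astype(np.float32)
    return mixed.clip(0, 255).astype(np.uint8)

print(perturb_caption("A man sitting with a dog"))  # "A woman sitting with a dog"
```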
[ { "version": "v1", "created": "Fri, 21 Apr 2023 03:45:59 GMT" }, { "version": "v2", "created": "Fri, 14 Jul 2023 04:34:57 GMT" } ]
2023-07-20T00:00:00
[ [ "Park", "Seulki", "" ], [ "Um", "Daeho", "" ], [ "Yoon", "Hajung", "" ], [ "Chun", "Sanghyuk", "" ], [ "Yun", "Sangdoo", "" ], [ "Choi", "Jin Young", "" ] ]
new_dataset
0.989909
2306.03308
Manuel Delgado
Manuel Delgado and Jaume Usó i Cubertorer
Kunz languages for numerical semigroups are context sensitive
11 pages
null
null
null
cs.FL math.AC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
There is a one-to-one and onto correspondence between the class of numerical semigroups of depth $n$, where $n$ is an integer, and a certain language over the alphabet $\{1,\ldots,n\}$ which we call a Kunz language of depth $n$. The Kunz language associated with the numerical semigroups of depth $2$ is the regular language $\{1,2\}^*2\{1,2\}^*$. We prove that Kunz languages associated with numerical semigroups of larger depth are context-sensitive but not regular.
[ { "version": "v1", "created": "Mon, 5 Jun 2023 23:30:30 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 17:36:31 GMT" } ]
2023-07-20T00:00:00
[ [ "Delgado", "Manuel", "" ], [ "Cubertorer", "Jaume Usó i", "" ] ]
new_dataset
0.994914
2306.07591
Raz Lapid
Raz Lapid, Moshe Sipper
I See Dead People: Gray-Box Adversarial Attack on Image-To-Text Models
null
Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD 2023)
null
null
cs.CV cs.NE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Modern image-to-text systems typically adopt the encoder-decoder framework, which comprises two main components: an image encoder, responsible for extracting image features, and a transformer-based decoder, used for generating captions. Taking inspiration from the analysis of neural networks' robustness against adversarial perturbations, we propose a novel gray-box algorithm for creating adversarial examples in image-to-text models. Unlike image classification tasks that have a finite set of class labels, finding visually similar adversarial examples in an image-to-text task poses greater challenges because the captioning system allows for a virtually infinite space of possible captions. In this paper, we present a gray-box adversarial attack on image-to-text, both untargeted and targeted. We formulate the process of discovering adversarial perturbations as an optimization problem that uses only the image-encoder component, meaning the proposed attack is language-model agnostic. Through experiments conducted on the ViT-GPT2 model, which is the most-used image-to-text model in Hugging Face, and the Flickr30k dataset, we demonstrate that our proposed attack successfully generates visually similar adversarial examples, both with untargeted and targeted captions. Notably, our attack operates in a gray-box manner, requiring no knowledge about the decoder module. We also show that our attacks fool the popular open-source platform Hugging Face.
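A minimal untargeted variant of such an encoder-only attack is sketched below: the perturbation maximizes the embedding drift under an L-infinity budget and never touches the decoder; the encoder, step sizes, and budget are illustrative, not the paper's exact settings:

```python
# Encoder-only (gray-box) adversarial perturbation: push the image
# embedding away from its clean value while staying within an
# L-infinity budget. `encoder` is any differentiable image encoder.
import torch

def grey_box_attack(encoder, image, eps=8 / 255, steps=40, lr=1e-2):
    encoder.eval()
    with torch.no_grad():
        clean_emb = encoder(image)           # embedding to move away from
    delta = torch.zeros_like(image, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        emb = encoder((image + delta).clamp(0, 1))
        loss = -torch.nn.functional.mse_loss(emb, clean_emb)  # maximize drift
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)          # keep the change imperceptible
    return (image + delta).clamp(0, 1).detach()
```

A targeted variant would instead minimize the distance to the embedding of a chosen target image, yielding a caption of the attacker's choice.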
[ { "version": "v1", "created": "Tue, 13 Jun 2023 07:35:28 GMT" }, { "version": "v2", "created": "Wed, 12 Jul 2023 09:45:54 GMT" }, { "version": "v3", "created": "Wed, 19 Jul 2023 12:04:59 GMT" } ]
2023-07-20T00:00:00
[ [ "Lapid", "Raz", "" ], [ "Sipper", "Moshe", "" ] ]
new_dataset
0.987685
2307.05588
Michèle Duguay
Michèle Duguay, Kate Mancey, Johanna Devaney
Collaborative Song Dataset (CoSoD): An annotated dataset of multi-artist collaborations in popular music
To be published in the Proceedings of the 24th International Society for Music Information Retrieval Conference (ISMIR)
null
null
null
cs.SD eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The Collaborative Song Dataset (CoSoD) is a corpus of 331 multi-artist collaborations from the 2010-2019 Billboard "Hot 100" year-end charts. The corpus is annotated with formal sections, aspects of vocal production (including reverberation, layering, panning, and gender of the performers), and relevant metadata. CoSoD complements other popular music datasets by focusing exclusively on musical collaborations between independent acts. In addition to facilitating the study of song form and vocal production, CoSoD allows for the in-depth study of gender as it relates to various timbral, pitch, and formal parameters in musical collaborations. In this paper, we detail the contents of the dataset and outline the annotation process. We also present an experiment using CoSoD that examines how the use of reverberation, layering, and panning are related to the gender of the artist. In this experiment, we find that men's voices are on average treated with less reverberation and occupy a more narrow position in the stereo mix than women's voices.
[ { "version": "v1", "created": "Mon, 10 Jul 2023 15:57:42 GMT" }, { "version": "v2", "created": "Thu, 13 Jul 2023 18:59:22 GMT" } ]
2023-07-20T00:00:00
[ [ "Duguay", "Michèle", "" ], [ "Mancey", "Kate", "" ], [ "Devaney", "Johanna", "" ] ]
new_dataset
0.999597
2307.05944
Fengshi Tian
Xiaomeng Wang, Fengshi Tian, Xizi Chen, Jiakun Zheng, Xuejiao Liu, Fengbin Tu, Jie Yang, Mohamad Sawan, Kwang-Ting Cheng, Chi-Ying Tsui
A 137.5 TOPS/W SRAM Compute-in-Memory Macro with 9-b Memory Cell-Embedded ADCs and Signal Margin Enhancement Techniques for AI Edge Applications
Submitted to IEEE ASSCC 2023
null
null
null
cs.AR cs.NE eess.SP
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this paper, we propose a high-precision SRAM-based CIM macro that can perform 4x4-bit MAC operations and yield 9-bit signed output. The inherent discharge branches of SRAM cells are utilized to apply time-modulated MAC and 9-bit ADC readout operations on two bit-line capacitors. The same principle is used for both MAC and A-to-D conversion, ensuring high linearity and thus supporting a large number of analog MAC accumulations. The memory cell-embedded ADC eliminates the use of separate ADCs and enhances energy and area efficiency. Additionally, two signal margin enhancement techniques, namely the MAC-folding and boosted-clipping schemes, are proposed to further improve the CIM computation accuracy.
[ { "version": "v1", "created": "Wed, 12 Jul 2023 06:20:19 GMT" }, { "version": "v2", "created": "Mon, 17 Jul 2023 03:08:13 GMT" }, { "version": "v3", "created": "Wed, 19 Jul 2023 08:58:58 GMT" } ]
2023-07-20T00:00:00
[ [ "Wang", "Xiaomeng", "" ], [ "Tian", "Fengshi", "" ], [ "Chen", "Xizi", "" ], [ "Zheng", "Jiakun", "" ], [ "Liu", "Xuejiao", "" ], [ "Tu", "Fengbin", "" ], [ "Yang", "Jie", "" ], [ "Sawan", "Mohamad", "" ], [ "Cheng", "Kwang-Ting", "" ], [ "Tsui", "Chi-Ying", "" ] ]
new_dataset
0.998686
2307.07813
Pietro Bonazzi
Pietro Bonazzi, Thomas Ruegg, Sizhen Bian, Yawei Li, Michele Magno
TinyTracker: Ultra-Fast and Ultra-Low-Power Edge Vision In-Sensor for Gaze Estimation
null
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by/4.0/
Intelligent edge vision tasks encounter the critical challenge of ensuring power and latency efficiency due to the typically heavy computational load they impose on edge platforms. This work leverages one of the first "AI in sensor" vision platforms, IMX500 by Sony, to achieve ultra-fast and ultra-low-power end-to-end edge vision applications. We evaluate the IMX500 and compare it to other edge platforms, such as the Google Coral Dev Micro and Sony Spresense, by exploring gaze estimation as a case study. We propose TinyTracker, a highly efficient, fully quantized model for 2D gaze estimation designed to maximize the performance of the edge vision systems considered in this study. TinyTracker achieves a 41x size reduction (600Kb) compared to iTracker [1] without significant loss in gaze estimation accuracy (maximum of 0.16 cm when fully quantized). TinyTracker's deployment on the Sony IMX500 vision sensor results in an end-to-end latency of around 19ms. The camera takes around 17.9ms to read, process and transmit the pixels to the accelerator. The inference time of the network is 0.86ms, with an additional 0.24ms for retrieving the results from the sensor. The overall energy consumption of the end-to-end system is 4.9mJ, including 0.06mJ for inference. The end-to-end study shows that the IMX500 is 1.7x faster than the Coral Micro (19ms vs 34.4ms) and 7x more power efficient (4.9mJ vs 34.2mJ).
[ { "version": "v1", "created": "Sat, 15 Jul 2023 14:34:25 GMT" }, { "version": "v2", "created": "Tue, 18 Jul 2023 16:35:36 GMT" }, { "version": "v3", "created": "Wed, 19 Jul 2023 08:06:34 GMT" } ]
2023-07-20T00:00:00
[ [ "Bonazzi", "Pietro", "" ], [ "Ruegg", "Thomas", "" ], [ "Bian", "Sizhen", "" ], [ "Li", "Yawei", "" ], [ "Magno", "Michele", "" ] ]
new_dataset
0.977342
2307.07859
Yao Huang
Xingxing Wei, Yao Huang, Yitong Sun, Jie Yu
Unified Adversarial Patch for Cross-modal Attacks in the Physical World
10 pages, 8 figures, accepted by ICCV2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, physical adversarial attacks have been presented to evade DNN-based object detectors. To ensure security, many scenarios deploy visible sensors and infrared sensors simultaneously, causing these single-modal physical attacks to fail. To show the potential risks under such scenes, we propose a unified adversarial patch to perform cross-modal physical attacks, i.e., fooling visible and infrared object detectors at the same time via a single patch. Considering the different imaging mechanisms of visible and infrared sensors, our work focuses on modeling the shapes of adversarial patches, whose changes can be captured in both modalities. To this end, we design a novel boundary-limited shape optimization to achieve compact and smooth shapes that can be easily implemented in the physical world. In addition, to balance the fooling degree between the visible detector and the infrared detector during the optimization process, we propose a score-aware iterative evaluation, which guides the adversarial patch to iteratively reduce the predicted scores of the multi-modal sensors. We finally test our method against the one-stage detector YOLOv3 and the two-stage detector Faster R-CNN. Results show that our unified patch achieves an Attack Success Rate (ASR) of 73.33% and 69.17%, respectively. More importantly, we verify that the attacks remain effective in the physical world when visible and infrared sensors shoot the objects under various settings such as different angles, distances, postures, and scenes.
[ { "version": "v1", "created": "Sat, 15 Jul 2023 17:45:17 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 03:04:50 GMT" } ]
2023-07-20T00:00:00
[ [ "Wei", "Xingxing", "" ], [ "Huang", "Yao", "" ], [ "Sun", "Yitong", "" ], [ "Yu", "Jie", "" ] ]
new_dataset
0.999749
2307.08222
Yunlong Wang
Guang Jiang, Jiahui Zhu, Yunsong Li, Pengcheng An, Yunlong Wang
NaMemo2: Facilitating Teacher-Student Interaction with Theory-Based Design and Student Autonomy Consideration
This paper has been accepted in July 2023 for publication in Education and Information Technologies
null
null
null
cs.HC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Teacher-student interaction (TSI) is essential for learning efficiency and harmonious teacher-student interpersonal relationships. However, studies on TSI support tools often focus on teacher needs while neglecting student needs and autonomy. To enhance both lecturer competence in delivering interpersonal interaction and student autonomy in TSI, we developed NaMemo2, a novel augmented-reality system that allows students to express their willingness to engage in TSI and displays student information to teachers during lectures. The design and evaluation process follows a new framework, STUDIER, which can facilitate the development of theory-based, ethics-aware TSI support tools in general. The quantitative results of our four-week field study with four classes in a university suggested that NaMemo2 can improve 1) TSI in the classroom from both teacher and student perspectives, 2) student attitudes and willingness to engage in TSI, and 3) student attitudes to the deployment of NaMemo2. The qualitative feedback from students and teachers indicated that improving TSI may be responsible for improved attention in students and a better classroom atmosphere during lectures.
[ { "version": "v1", "created": "Mon, 17 Jul 2023 03:52:28 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 03:33:29 GMT" } ]
2023-07-20T00:00:00
[ [ "Jiang", "Guang", "" ], [ "Zhu", "Jiahui", "" ], [ "Li", "Yunsong", "" ], [ "An", "Pengcheng", "" ], [ "Wang", "Yunlong", "" ] ]
new_dataset
0.997507
2307.09191
Federico Matteucci
Federico Matteucci, Vadim Arzamasov, Klemens Boehm
A benchmark of categorical encoders for binary classification
Submitted to the 37th Conference on Neural Information Processing Systems (NeurIPS 2023) Track on Datasets and Benchmarks
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Categorical encoders transform categorical features into numerical representations that are indispensable for a wide range of machine learning models. Existing encoder benchmark studies lack generalizability because of their limited choice of (1) encoders, (2) experimental factors, and (3) datasets. Additionally, inconsistencies arise from the adoption of varying aggregation strategies. This paper presents the most comprehensive benchmark of categorical encoders to date, including an extensive evaluation of 32 configurations of encoders from diverse families, with 36 combinations of experimental factors, and on 50 datasets. The study shows the profound influence of dataset selection, experimental factors, and aggregation strategies on the benchmark's conclusions -- aspects disregarded in previous encoder benchmarks.
[ { "version": "v1", "created": "Mon, 17 Jul 2023 13:17:26 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 16:24:31 GMT" } ]
2023-07-20T00:00:00
[ [ "Matteucci", "Federico", "" ], [ "Arzamasov", "Vadim", "" ], [ "Boehm", "Klemens", "" ] ]
new_dataset
0.991821
2307.09288
Thomas Scialom
Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom
Llama 2: Open Foundation and Fine-Tuned Chat Models
null
null
null
null
cs.CL cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In this work, we develop and release Llama 2, a collection of pretrained and fine-tuned large language models (LLMs) ranging in scale from 7 billion to 70 billion parameters. Our fine-tuned LLMs, called Llama 2-Chat, are optimized for dialogue use cases. Our models outperform open-source chat models on most benchmarks we tested, and based on our human evaluations for helpfulness and safety, may be a suitable substitute for closed-source models. We provide a detailed description of our approach to fine-tuning and safety improvements of Llama 2-Chat in order to enable the community to build on our work and contribute to the responsible development of LLMs.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 14:31:57 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 17:08:59 GMT" } ]
2023-07-20T00:00:00
[ [ "Touvron", "Hugo", "" ], [ "Martin", "Louis", "" ], [ "Stone", "Kevin", "" ], [ "Albert", "Peter", "" ], [ "Almahairi", "Amjad", "" ], [ "Babaei", "Yasmine", "" ], [ "Bashlykov", "Nikolay", "" ], [ "Batra", "Soumya", "" ], [ "Bhargava", "Prajjwal", "" ], [ "Bhosale", "Shruti", "" ], [ "Bikel", "Dan", "" ], [ "Blecher", "Lukas", "" ], [ "Ferrer", "Cristian Canton", "" ], [ "Chen", "Moya", "" ], [ "Cucurull", "Guillem", "" ], [ "Esiobu", "David", "" ], [ "Fernandes", "Jude", "" ], [ "Fu", "Jeremy", "" ], [ "Fu", "Wenyin", "" ], [ "Fuller", "Brian", "" ], [ "Gao", "Cynthia", "" ], [ "Goswami", "Vedanuj", "" ], [ "Goyal", "Naman", "" ], [ "Hartshorn", "Anthony", "" ], [ "Hosseini", "Saghar", "" ], [ "Hou", "Rui", "" ], [ "Inan", "Hakan", "" ], [ "Kardas", "Marcin", "" ], [ "Kerkez", "Viktor", "" ], [ "Khabsa", "Madian", "" ], [ "Kloumann", "Isabel", "" ], [ "Korenev", "Artem", "" ], [ "Koura", "Punit Singh", "" ], [ "Lachaux", "Marie-Anne", "" ], [ "Lavril", "Thibaut", "" ], [ "Lee", "Jenya", "" ], [ "Liskovich", "Diana", "" ], [ "Lu", "Yinghai", "" ], [ "Mao", "Yuning", "" ], [ "Martinet", "Xavier", "" ], [ "Mihaylov", "Todor", "" ], [ "Mishra", "Pushkar", "" ], [ "Molybog", "Igor", "" ], [ "Nie", "Yixin", "" ], [ "Poulton", "Andrew", "" ], [ "Reizenstein", "Jeremy", "" ], [ "Rungta", "Rashi", "" ], [ "Saladi", "Kalyan", "" ], [ "Schelten", "Alan", "" ], [ "Silva", "Ruan", "" ], [ "Smith", "Eric Michael", "" ], [ "Subramanian", "Ranjan", "" ], [ "Tan", "Xiaoqing Ellen", "" ], [ "Tang", "Binh", "" ], [ "Taylor", "Ross", "" ], [ "Williams", "Adina", "" ], [ "Kuan", "Jian Xiang", "" ], [ "Xu", "Puxin", "" ], [ "Yan", "Zheng", "" ], [ "Zarov", "Iliyan", "" ], [ "Zhang", "Yuchen", "" ], [ "Fan", "Angela", "" ], [ "Kambadur", "Melanie", "" ], [ "Narang", "Sharan", "" ], [ "Rodriguez", "Aurelien", "" ], [ "Stojnic", "Robert", "" ], [ "Edunov", "Sergey", "" ], [ "Scialom", "Thomas", "" ] ]
new_dataset
0.997362
2307.09362
Zhixiang Wei
Zhixiang Wei, Lin Chen, Tao Tu, Huaian Chen, Pengyang Ling, Yi Jin
Disentangle then Parse: Night-time Semantic Segmentation with Illumination Disentanglement
Accepted by ICCV2023
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Most prior semantic segmentation methods have been developed for day-time scenes, while typically underperforming in night-time scenes due to insufficient and complicated lighting conditions. In this work, we tackle this challenge by proposing a novel night-time semantic segmentation paradigm, i.e., disentangle then parse (DTP). DTP explicitly disentangles night-time images into light-invariant reflectance and light-specific illumination components and then recognizes semantics based on their adaptive fusion. Concretely, the proposed DTP comprises two key components: 1) Instead of processing lighting-entangled features as in prior works, our Semantic-Oriented Disentanglement (SOD) framework enables the extraction of the reflectance component without being impeded by lighting, allowing the network to consistently recognize semantics under varying and complicated lighting conditions. 2) Based on the observation that the illumination component can serve as a cue for some semantically confused regions, we further introduce an Illumination-Aware Parser (IAParser) to explicitly learn the correlation between semantics and lighting, and to aggregate illumination features to yield more precise predictions. Extensive experiments on the night-time segmentation task with various settings demonstrate that DTP significantly outperforms state-of-the-art methods. Furthermore, with negligible additional parameters, DTP can be directly used to benefit existing day-time methods for night-time segmentation.
[ { "version": "v1", "created": "Tue, 18 Jul 2023 15:46:21 GMT" }, { "version": "v2", "created": "Wed, 19 Jul 2023 13:21:30 GMT" } ]
2023-07-20T00:00:00
[ [ "Wei", "Zhixiang", "" ], [ "Chen", "Lin", "" ], [ "Tu", "Tao", "" ], [ "Chen", "Huaian", "" ], [ "Ling", "Pengyang", "" ], [ "Jin", "Yi", "" ] ]
new_dataset
0.987216