Dataset Card for Text-ADBench
This repository covers 8 text datasets including: 20Newsgroups, DBpedia14, IMDB, SMS_SPAM, SST2, WOS, Enron, Reuters21578. We provide the original textual data, preprocessed data, and multiple embeddings based on Llama-2, Llama-3, Mistral, and OpenAI embedding models (text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002).
- task_categories: text-classification, feature-extraction
- tags: anomaly-detection, benchmark, embeddings, llms
- language: en
Text-ADBench: Text Anomaly Detection Benchmark based on LLMs Embedding
This repository provides Text-ADBench, a comprehensive benchmark for text anomaly detection, leveraging embeddings from diverse pre-trained language models across a wide array of text datasets.
Paper: Text-ADBench: Text Anomaly Detection Benchmark based on LLMs Embedding
Code: Text-ADBench
Abstract
Text anomaly detection is a critical task in natural language processing (NLP), with applications spanning fraud detection, misinformation identification, spam detection and content moderation, etc. Despite significant advances in large language models (LLMs) and anomaly detection algorithms, the absence of standardized and comprehensive benchmarks for evaluating the existing anomaly detection methods on text data limits rigorous comparison and development of innovative approaches. This work performs a comprehensive empirical study and introduces a benchmark for text anomaly detection, leveraging embeddings from diverse pre-trained language models across a wide array of text datasets. Our work systematically evaluates the effectiveness of embedding-based text anomaly detection by incorporating (1) early language models (GloVe, BERT); (2) multiple LLMs (Llama-2, Llama-3, Mistral, OpenAI (small, ada, large)); (3) multi-domain text datasets (news, social media, scientific publications); (4) comprehensive evaluation metrics (AUROC, AUPRC). Our experiments reveal a critical empirical insight: embedding quality significantly governs anomaly detection efficacy, and deep learning-based approaches demonstrate no performance advantage over conventional shallow algorithms (e.g., KNN, Isolation Forest) when leveraging LLM-derived embeddings. In addition, we observe strongly low-rank characteristics in cross-model performance matrices, which enables an efficient strategy for rapid model evaluation (or embedding evaluation) and selection in practical applications. Furthermore, by open-sourcing our benchmark toolkit that includes all embeddings from different models and code at this https URL, this work provides a foundation for future research in robust and scalable text anomaly detection systems.
Dataset Details
This repository covers 8 text datasets including: 20Newsgroups, DBpedia14, IMDB, SMS_SPAM, SST2, WOS, Enron, Reuters21578. For each of these multi-domain datasets (news, social media, scientific publications), the repository provides:
- The original textual data.
- Preprocessed data.
- Multiple embeddings derived from various pre-trained language models, including:
- Early language models (GloVe, BERT)
- Multiple LLMs (LLaMa-2, LLaMa-3, Mistral)
- OpenAI embedding models (text-embedding-3-small, text-embedding-3-large, text-embedding-ada-002)
Dataset Description
Text-ADBench addresses the critical task of text anomaly detection by providing a standardized and comprehensive benchmark. It facilitates rigorous comparison and development of innovative approaches by systematically evaluating embedding-based text anomaly detection across diverse models and datasets. The benchmark highlights that embedding quality significantly influences anomaly detection performance and that traditional shallow algorithms can be as effective as deep learning approaches when utilizing LLM-derived embeddings.
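One of the benchmark's central findings is that shallow detectors work well on LLM-derived embeddings. As an illustrative sketch only (the synthetic Gaussian vectors below stand in for the embeddings the repository actually provides, and the hyperparameters are arbitrary), here is KNN-distance anomaly scoring evaluated with AUROC:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Synthetic stand-ins for text embeddings: inliers cluster near the origin,
# anomalies are shifted away. The real benchmark supplies embeddings per dataset.
inliers = rng.normal(0.0, 1.0, size=(500, 64))
anomalies = rng.normal(4.0, 1.0, size=(25, 64))
X = np.vstack([inliers, anomalies])
y = np.array([0] * 500 + [1] * 25)  # 1 = anomaly

# KNN anomaly score: distance to the k-th nearest neighbor.
k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
dists, _ = nn.kneighbors(X)
scores = dists[:, -1]  # column 0 is the point itself (distance 0), so take the last

auroc = roc_auc_score(y, scores)
print(f"KNN AUROC: {auroc:.3f}")
```

The same scores could be fed to `average_precision_score` to obtain AUPRC, the benchmark's second metric.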
- Curated by: Feng Xiao and Jicong Fan
- Language(s) (NLP): English
- License: MIT
Dataset Sources
- Paper: Text-ADBench
Uses
Direct Use
This dataset is intended for researchers and practitioners in natural language processing and artificial intelligence, specifically for:
- Benchmarking existing text anomaly detection methods.
- Developing and evaluating new anomaly detection algorithms on diverse text data.
- Studying the impact of various LLM embeddings on anomaly detection efficacy.
- Exploring efficient strategies for rapid model evaluation and selection in practical applications, leveraging observed low-rank characteristics in performance matrices.
Out-of-Scope Use
This dataset is not intended for:
- General text classification tasks unrelated to anomaly detection.
- Training large language models from scratch, as it primarily provides embeddings and benchmark data, not raw corpus data for pre-training.
- Applications where biases present in the original source datasets or embedding models could lead to unfair or discriminatory outcomes without proper mitigation.
Dataset Structure
The repository contains 8 distinct text datasets: 20Newsgroups, DBpedia14, IMDB, SMS_SPAM, SST2, WOS, Enron, and Reuters21578. For each dataset, the repository provides:
- Original textual data (e.g., in `text_data/`).
- Preprocessed versions of the text data.
- Multiple sets of embeddings, generated using a range of models including GloVe, BERT, Llama-2, Llama-3, Mistral, and OpenAI's text embedding models (e.g., in `text_embedding/`).
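The authoritative file layout is documented in the GitHub repository. As a hedged illustration only (the file name below is hypothetical, and we assume embeddings are stored as NumPy arrays of shape `(n_samples, embedding_dim)`), this round-trips a dummy array to show the loading pattern:

```python
import os
import tempfile
import numpy as np

tmpdir = tempfile.mkdtemp()
# Hypothetical file name; the actual names follow the repository's layout.
path = os.path.join(tmpdir, "imdb_llama3_embeddings.npy")

# Create and save a dummy embedding matrix, then load it back.
emb = np.random.default_rng(2).standard_normal((100, 4096)).astype(np.float32)
np.save(path, emb)

loaded = np.load(path)
print(loaded.shape, loaded.dtype)
```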
For a detailed file structure, please refer to the GitHub repository.
Dataset Creation
Curation Rationale
The dataset was created to address a critical gap in the field of text anomaly detection: the absence of standardized and comprehensive benchmarks. By providing a unified framework, Text-ADBench enables rigorous comparison and facilitates the development of innovative approaches to text anomaly detection, leveraging the advancements in large language models.
Source Data
Data Collection and Processing
The benchmark leverages a wide array of publicly accessible multi-domain text datasets. The original textual data was collected, followed by preprocessing steps. Subsequently, embeddings were generated using diverse pre-trained language models, encompassing both early models and modern LLMs. The benchmark toolkit also supports generating embeddings for new text data.
Who are the source data producers?
The source data producers include the original authors and maintainers of the 8 constituent text datasets (e.g., 20Newsgroups, IMDB, SMS_SPAM, etc.). The benchmark and its generated embeddings were curated by Feng Xiao and Jicong Fan, the authors of the Text-ADBench paper.