---
license: mit
task_categories:
- text-classification
language:
- id
tags:
- hate-speech-detection
- abusive-language
- text-classification
- indonesian
- social-media
- nlp
- content-moderation
- multi-label-classification
size_categories:
- 10K<n<100K
---
# Indonesian Hate Speech Detection Dataset

## Dataset Summary
This dataset contains 13,169 Indonesian tweets annotated for hate speech detection and abusive language classification. The dataset provides comprehensive multi-label annotations covering different types of hate speech, target categories, and intensity levels, making it valuable for building robust content moderation systems for Indonesian social media.
## Dataset Details
- Total Samples: 13,169 Indonesian tweets
- Language: Indonesian (Bahasa Indonesia)
- Annotation Type: Multi-label binary classification
- Labels: 12 different hate speech and abusive language categories
- Format: CSV file
- Text Length: 4-561 characters (average: 114 characters)
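A quick schema check after downloading (a minimal sketch; the `data.csv` filename follows the Quick Start below and may differ in your copy):

```python
import pandas as pd

# Load the CSV (filename assumed; adjust to the actual file in this repo)
df = pd.read_csv('data.csv')

print(df.shape)              # expected: 13,169 rows, 1 text column + 12 label columns
print(df.columns.tolist())   # 'Tweet' plus the 12 labels documented below
```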
## Label Categories

### Primary Classifications

| Label | Description | Positive Cases | Percentage |
|---|---|---|---|
| HS | Hate Speech - General hate speech detection | 5,561 | 42.2% |
| Abusive | Abusive Language - Offensive or abusive content | 5,043 | 38.3% |
### Target-Based Classifications

| Label | Description | Positive Cases | Percentage |
|---|---|---|---|
| HS_Individual | Hate speech targeting specific individuals | 3,575 | 27.1% |
| HS_Group | Hate speech targeting groups/communities | 1,986 | 15.1% |
| HS_Religion | Religious hate speech | 793 | 6.0% |
| HS_Race | Racial/ethnic hate speech | 566 | 4.3% |
| HS_Physical | Physical appearance-based hate speech | 323 | 2.5% |
| HS_Gender | Gender-based hate speech | 306 | 2.3% |
| HS_Other | Other types of hate speech | 3,740 | 28.4% |
### Intensity Classifications

| Label | Description | Positive Cases | Percentage |
|---|---|---|---|
| HS_Weak | Weak/mild hate speech | 3,383 | 25.7% |
| HS_Moderate | Moderate hate speech | 1,705 | 12.9% |
| HS_Strong | Strong/severe hate speech | 473 | 3.6% |
## Key Statistics
Text Characteristics:
- Average tweet length: 114 characters
- Shortest tweet: 4 characters
- Longest tweet: 561 characters
- Language: Indonesian (Bahasa Indonesia)
Label Distribution:
- Balanced primary labels: ~42% hate speech, ~38% abusive
- Imbalanced target categories: Physical (2.5%) to Individual (27.1%)
- Severity pyramid: Weak (25.7%) > Moderate (12.9%) > Strong (3.6%)
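The figures above can be recomputed from the CSV itself; a minimal sketch, assuming the `Tweet` text column and the label columns listed in the tables:

```python
import pandas as pd

df = pd.read_csv('data.csv')  # filename assumed, as in the Quick Start

# Text length statistics (min / mean / max characters)
print(df['Tweet'].str.len().agg(['min', 'mean', 'max']))

# Positive-case counts and percentages per label
label_cols = [c for c in df.columns if c != 'Tweet']
summary = pd.DataFrame({
    'positive': df[label_cols].sum(),
    'percentage': (df[label_cols].mean() * 100).round(1),
})
print(summary.sort_values('positive', ascending=False))
```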
## Use Cases
This dataset is ideal for:
- Multi-label Text Classification: Train models to detect multiple types of hate speech
- Indonesian NLP: Develop language-specific content moderation systems
- Social Media Monitoring: Build automated detection for Indonesian platforms
- Severity Assessment: Create models that classify hate speech intensity
- Target Analysis: Understand different targets of hate speech
- Content Moderation: Deploy real-time filtering systems
- Research: Study hate speech patterns in Indonesian social media
## Quick Start
```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report

# Load dataset
df = pd.read_csv('data.csv')

# Prepare features and targets
X = df['Tweet']
y = df[['HS', 'Abusive', 'HS_Individual', 'HS_Group', 'HS_Religion',
        'HS_Race', 'HS_Physical', 'HS_Gender', 'HS_Other']]

# Split data
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Vectorize text
vectorizer = TfidfVectorizer(max_features=10000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)

# Train multi-label classifier
classifier = MultiOutputClassifier(LogisticRegression(random_state=42))
classifier.fit(X_train_vec, y_train)

# Evaluate per label
y_pred = classifier.predict(X_test_vec)
print("Multi-label Classification Report:")
for i, label in enumerate(y.columns):
    print(f"\n{label}:")
    print(classification_report(y_test.iloc[:, i], y_pred[:, i]))
```
## Advanced Usage Examples

### Intensity-Based Classification
```python
# Focus on hate speech intensity levels
intensity_labels = ['HS_Weak', 'HS_Moderate', 'HS_Strong']
hate_speech_data = df[df['HS'] == 1]  # Only hate speech samples

# Intensity targets for multi-label classification
y_intensity = hate_speech_data[intensity_labels]
```
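One way to finish this example is to reuse the TF-IDF + logistic regression setup from the Quick Start on the filtered subset (a sketch, not a tuned model):

```python
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression

X_int = hate_speech_data['Tweet']
X_tr, X_te, y_tr, y_te = train_test_split(
    X_int, y_intensity, test_size=0.2, random_state=42
)

vec = TfidfVectorizer(max_features=10000, ngram_range=(1, 2))
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000, random_state=42))
clf.fit(vec.fit_transform(X_tr), y_tr)
print(clf.score(vec.transform(X_te), y_te))  # subset accuracy over the 3 intensity labels
```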
### Target-Specific Models
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Build specialized models for different targets
target_labels = ['HS_Individual', 'HS_Group', 'HS_Religion', 'HS_Race',
                 'HS_Physical', 'HS_Gender', 'HS_Other']
target_models = {}
for target in target_labels:
    # Binary classifier for each target type
    model = make_pipeline(TfidfVectorizer(max_features=10000),
                          LogisticRegression(max_iter=1000, class_weight='balanced'))
    model.fit(df['Tweet'], df[target])
    target_models[target] = model
```
### Indonesian Text Preprocessing
```python
import re

def preprocess_indonesian_text(text):
    # Convert to lowercase
    text = text.lower()
    # Remove URLs
    text = re.sub(r'http\S+|www\S+|https\S+', '', text, flags=re.MULTILINE)
    # Remove user mentions and the "rt" retweet marker (word boundary avoids clipping words)
    text = re.sub(r'@\w+|\brt\b\s*', '', text)
    # Collapse extra whitespace
    text = re.sub(r'\s+', ' ', text).strip()
    return text

# Apply preprocessing
df['Tweet_processed'] = df['Tweet'].apply(preprocess_indonesian_text)
```
## Model Architecture Suggestions

### Traditional ML
- TF-IDF + Logistic Regression: Baseline multi-label classifier
- TF-IDF + SVM: Better performance on imbalanced classes
- Ensemble Methods: Random Forest or Gradient Boosting
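The TF-IDF + SVM baseline from this list can be sketched as follows (parameters are illustrative; `X_train`/`y_train` come from the Quick Start split):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

# class_weight='balanced' helps with the rarer labels (e.g. HS_Gender, HS_Physical)
svm_baseline = make_pipeline(
    TfidfVectorizer(max_features=20000, ngram_range=(1, 2)),
    MultiOutputClassifier(LinearSVC(class_weight='balanced')),
)
svm_baseline.fit(X_train, y_train)
print(svm_baseline.score(X_test, y_test))  # subset accuracy
```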
### Deep Learning
- BERT-based Models: Use Indonesian BERT (IndoBERT) for better performance
- Multilingual Models: mBERT or XLM-R for cross-lingual transfer
- Custom Architecture: BiLSTM + Attention for sequence modeling
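A minimal multi-label fine-tuning setup with Hugging Face Transformers might look like this (the IndoBERT checkpoint name is an assumption; any Indonesian or multilingual encoder can be substituted):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "indobenchmark/indobert-base-p1"  # assumed checkpoint
label_cols = ['HS', 'Abusive', 'HS_Individual', 'HS_Group', 'HS_Religion', 'HS_Race',
              'HS_Physical', 'HS_Gender', 'HS_Other', 'HS_Weak', 'HS_Moderate', 'HS_Strong']

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=len(label_cols),
    problem_type="multi_label_classification",  # BCE-with-logits loss for multi-label targets
)

# A single forward pass on a small batch; labels must be floats for the BCE loss
batch = tokenizer(df['Tweet'].iloc[:8].tolist(), padding=True, truncation=True,
                  max_length=128, return_tensors="pt")
labels = torch.tensor(df[label_cols].iloc[:8].values, dtype=torch.float)
loss = model(**batch, labels=labels).loss  # fine-tune with Trainer or a plain PyTorch loop
```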
### Multi-task Learning

```python
# Hierarchical classification approach:
# 1. First classify: Normal vs Abusive vs Hate Speech
# 2. If Hate Speech: classify target and intensity
# 3. Multi-task loss combining all objectives
```
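The hierarchy can be prototyped with two classical classifiers before moving to a shared neural encoder with a true multi-task loss (a rough sketch using the Quick Start's `df`):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier
from sklearn.pipeline import make_pipeline

fine_cols = ['HS_Individual', 'HS_Group', 'HS_Religion', 'HS_Race', 'HS_Physical',
             'HS_Gender', 'HS_Other', 'HS_Weak', 'HS_Moderate', 'HS_Strong']

# Stage 1: is the tweet hate speech at all?
stage1 = make_pipeline(TfidfVectorizer(max_features=10000),
                       LogisticRegression(max_iter=1000))
stage1.fit(df['Tweet'], df['HS'])

# Stage 2: target and intensity, trained only on hate-speech tweets
hs = df[df['HS'] == 1]
stage2 = make_pipeline(TfidfVectorizer(max_features=10000),
                       MultiOutputClassifier(LogisticRegression(max_iter=1000)))
stage2.fit(hs['Tweet'], hs[fine_cols])

def classify(texts):
    # Only run the fine-grained stage for tweets flagged by stage 1
    flags = stage1.predict(texts)
    return [dict(zip(fine_cols, stage2.predict([t])[0])) if f else None
            for t, f in zip(texts, flags)]
```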
## Evaluation Metrics

Given the multi-label and imbalanced nature of the dataset:

### Primary Metrics
- F1-Score: Macro and micro averages
- AUC-ROC: For each label separately
- Hamming Loss: Multi-label specific metric
- Precision/Recall: Per-label analysis
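F1 and per-label AUC-ROC can be computed from the Quick Start objects (`y_test`, `y_pred`, `classifier`, `X_test_vec`); a sketch:

```python
from sklearn.metrics import f1_score, roc_auc_score

# Macro averages treat every label equally; micro pools decisions across labels
print("Macro F1:", f1_score(y_test, y_pred, average='macro'))
print("Micro F1:", f1_score(y_test, y_pred, average='micro'))

# Per-label ROC-AUC needs probability scores rather than hard predictions
probas = classifier.predict_proba(X_test_vec)  # one (n_samples, 2) array per label
for label, p in zip(y.columns, probas):
    print(label, roc_auc_score(y_test[label], p[:, 1]))
```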
### Specialized Metrics

```python
from sklearn.metrics import multilabel_confusion_matrix, jaccard_score, hamming_loss

# Multi-label specific metrics (y_test / y_pred from the Quick Start example)
jaccard = jaccard_score(y_test, y_pred, average='macro')
hamming = hamming_loss(y_test, y_pred)
per_label_cm = multilabel_confusion_matrix(y_test, y_pred)
```
## Data Quality & Considerations

### Strengths
- ✅ Comprehensive Labeling: Multiple dimensions of hate speech
- ✅ Large Scale: 13K+ samples for robust training
- ✅ Real-world Data: Actual Indonesian tweets
- ✅ Intensity Levels: Enables nuanced classification
- ✅ Multiple Targets: Covers various hate speech types
### Limitations
- ⚠️ Class Imbalance: Some categories <5% positive samples
- ⚠️ Language Specific: Limited to Indonesian context
- ⚠️ Temporal Bias: Tweet collection timeframe not specified
- ⚠️ Cultural Context: May not generalize across Indonesian regions
## Ethical Considerations

Content Warning: This dataset contains hate speech and abusive language examples.

### Responsible Use
- Research Purpose: Intended for academic and safety research
- Content Moderation: Building protective systems
- Bias Awareness: Monitor for demographic biases in predictions
- Privacy: Tweets should be handled according to platform policies
### Not Suitable For
- Training generative models that could amplify hate speech
- Creating offensive content detection without human oversight
- Commercial use without proper ethical review
## Related Work & Benchmarks

### Indonesian NLP Resources
- IndoBERT: Pre-trained Indonesian BERT model
- Indonesian Sentiment: Related sentiment analysis datasets
- Multilingual Models: Cross-lingual hate speech detection
### Benchmark Performance
Consider comparing against:
- Traditional ML baselines (TF-IDF + SVM)
- Pre-trained language models (mBERT, IndoBERT)
- Multi-task learning approaches
## Citation

```bibtex
@dataset{indonesian_hate_speech_2025,
  title={Indonesian Hate Speech Detection Dataset},
  year={2025},
  publisher={Dataset From Kaggle},
  url={https://huggingface.co/datasets/nahiar/indonesian-hate-speech},
  note={Multi-label hate speech and abusive language detection for Indonesian social media}
}
```
## Acknowledgments
This dataset contributes to safer Indonesian social media environments and supports research in:
- Multilingual content moderation
- Southeast Asian NLP
- Cross-cultural hate speech patterns
- Social media safety systems
Note: Handle this sensitive content responsibly and in accordance with ethical AI principles.