---
license: mit
task_categories:
- text-classification
language:
- id
tags:
- hate-speech-detection
- abusive-language
- text-classification
- indonesian
- social-media
- nlp
- content-moderation
- multi-label-classification
size_categories:
- 10K<n<100K
---
# Indonesian Hate Speech Detection Dataset
## Dataset Summary
This dataset contains **13,169 Indonesian tweets** annotated for hate speech detection and abusive language classification. The dataset provides comprehensive multi-label annotations covering different types of hate speech, target categories, and intensity levels, making it valuable for building robust content moderation systems for Indonesian social media.
## Dataset Details
- **Total Samples**: 13,169 Indonesian tweets
- **Language**: Indonesian (Bahasa Indonesia)
- **Annotation Type**: Multi-label binary classification
- **Labels**: 12 different hate speech and abusive language categories
- **Format**: Single CSV file (see the loading example below)
- **Text Length**: 4-561 characters (average: 114 characters)
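The examples in this card read the CSV directly with pandas; the dataset should also be loadable through the Hugging Face `datasets` library. A minimal sketch, assuming the repository id from the citation below and that the CSV is exposed as a single `train` split:
```python
from datasets import load_dataset

# Assumption: the CSV is auto-detected and exposed as one "train" split
ds = load_dataset("nahiar/indonesian-hate-speech")
print(ds)              # overview of splits and columns
print(ds["train"][0])  # inspect a single annotated tweet
```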
## Label Categories
### Primary Classifications
| Label | Description | Positive Cases | Percentage |
|-------|-------------|----------------|------------|
| `HS` | **Hate Speech** - General hate speech detection | 5,561 | 42.2% |
| `Abusive` | **Abusive Language** - Offensive or abusive content | 5,043 | 38.3% |
### Target-Based Classifications
| Label | Description | Positive Cases | Percentage |
|-------|-------------|----------------|------------|
| `HS_Individual` | Hate speech targeting specific individuals | 3,575 | 27.1% |
| `HS_Group` | Hate speech targeting groups/communities | 1,986 | 15.1% |
| `HS_Religion` | Religious hate speech | 793 | 6.0% |
| `HS_Race` | Racial/ethnic hate speech | 566 | 4.3% |
| `HS_Physical` | Physical appearance-based hate speech | 323 | 2.5% |
| `HS_Gender` | Gender-based hate speech | 306 | 2.3% |
| `HS_Other` | Other types of hate speech | 3,740 | 28.4% |
### Intensity Classifications
| Label | Description | Positive Cases | Percentage |
|-------|-------------|----------------|------------|
| `HS_Weak` | Weak/mild hate speech | 3,383 | 25.7% |
| `HS_Moderate` | Moderate hate speech | 1,705 | 12.9% |
| `HS_Strong` | Strong/severe hate speech | 473 | 3.6% |
## Key Statistics
**Text Characteristics:**
- **Average tweet length**: 114 characters
- **Shortest tweet**: 4 characters
- **Longest tweet**: 561 characters
- **Language**: Indonesian (Bahasa Indonesia)
**Label Distribution:**
- **Roughly balanced primary labels**: 42.2% hate speech, 38.3% abusive
- **Imbalanced target categories**: from Gender (2.3%) to Other (28.4%)
- **Severity pyramid**: Weak (25.7%) > Moderate (12.9%) > Strong (3.6%)
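The statistics above can be reproduced directly from the CSV. A minimal check with pandas, assuming the column names used throughout this card:
```python
import pandas as pd

df = pd.read_csv('data.csv')
label_cols = ['HS', 'Abusive', 'HS_Individual', 'HS_Group', 'HS_Religion',
              'HS_Race', 'HS_Physical', 'HS_Gender', 'HS_Other',
              'HS_Weak', 'HS_Moderate', 'HS_Strong']

# Positive counts and percentages per label (should match the tables above)
print(df[label_cols].sum())
print((df[label_cols].mean() * 100).round(1))

# Tweet length statistics: min / mean / max characters
print(df['Tweet'].str.len().describe())
```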
## Use Cases
This dataset is ideal for:
- **Multi-label Text Classification**: Train models to detect multiple types of hate speech
- **Indonesian NLP**: Develop language-specific content moderation systems
- **Social Media Monitoring**: Build automated detection for Indonesian platforms
- **Severity Assessment**: Create models that classify hate speech intensity
- **Target Analysis**: Understand different targets of hate speech
- **Content Moderation**: Deploy real-time filtering systems
- **Research**: Study hate speech patterns in Indonesian social media
## Quick Start
```python
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.multioutput import MultiOutputClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
# Load dataset
df = pd.read_csv('data.csv')
# Prepare features and targets
X = df['Tweet']
y = df[['HS', 'Abusive', 'HS_Individual', 'HS_Group', 'HS_Religion',
'HS_Race', 'HS_Physical', 'HS_Gender', 'HS_Other']]
# Split data
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.2, random_state=42
)
# Vectorize text
vectorizer = TfidfVectorizer(max_features=10000, ngram_range=(1, 2))
X_train_vec = vectorizer.fit_transform(X_train)
X_test_vec = vectorizer.transform(X_test)
# Train multi-label classifier
classifier = MultiOutputClassifier(LogisticRegression(random_state=42))
classifier.fit(X_train_vec, y_train)
# Evaluate
y_pred = classifier.predict(X_test_vec)
print("Multi-label Classification Report:")
for i, label in enumerate(y.columns):
    print(f"\n{label}:")
    print(classification_report(y_test.iloc[:, i], y_pred[:, i]))
```
## Advanced Usage Examples
### Intensity-Based Classification
```python
# Focus on hate speech intensity levels (hate-speech samples only)
intensity_labels = ['HS_Weak', 'HS_Moderate', 'HS_Strong']
hate_speech_data = df[df['HS'] == 1]  # keep only tweets labelled as hate speech
# Features and multi-label intensity targets for a dedicated classifier
X_intensity = hate_speech_data['Tweet']
y_intensity = hate_speech_data[intensity_labels]
# Reuse the TF-IDF + MultiOutputClassifier pipeline from the Quick Start
# on (X_intensity, y_intensity) to predict intensity levels
```
### Target-Specific Models
```python
# Build specialized binary models for different targets
target_labels = ['HS_Individual', 'HS_Group', 'HS_Religion', 'HS_Race',
                 'HS_Physical', 'HS_Gender', 'HS_Other']
# Train one binary classifier per target, reusing the TF-IDF features
# (X_train_vec, y_train) from the Quick Start example
target_models = {}
for target in target_labels:
    clf = LogisticRegression(max_iter=1000, random_state=42)
    clf.fit(X_train_vec, y_train[target])
    target_models[target] = clf
```
### Indonesian Text Preprocessing
```python
import re
def preprocess_indonesian_text(text):
    # Convert to lowercase
    text = text.lower()
    # Remove URLs
    text = re.sub(r'http\S+|www\S+', '', text)
    # Remove user mentions and the standalone retweet marker "rt"
    # (word boundaries avoid clipping words that merely contain "rt")
    text = re.sub(r'@\w+', '', text)
    text = re.sub(r'\brt\b', '', text)
    # Collapse extra whitespace
    text = re.sub(r'\s+', ' ', text).strip()
    return text
# Apply preprocessing
df['Tweet_processed'] = df['Tweet'].apply(preprocess_indonesian_text)
```
## Model Architecture Suggestions
### Traditional ML
- **TF-IDF + Logistic Regression**: Baseline multi-label classifier
- **TF-IDF + SVM**: Better performance on imbalanced classes
- **Ensemble Methods**: Random Forest or Gradient Boosting
### Deep Learning
- **BERT-based Models**: Use Indonesian BERT (IndoBERT) for better performance (a fine-tuning sketch follows this list)
- **Multilingual Models**: mBERT or XLM-R for cross-lingual transfer
- **Custom Architecture**: BiLSTM + Attention for sequence modeling
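As a sketch of the BERT-based route, the snippet below fine-tunes a pre-trained Indonesian encoder for multi-label classification with Hugging Face Transformers, reusing the `df` loaded in the Quick Start. The checkpoint name `indobenchmark/indobert-base-p1` and the training hyperparameters are illustrative assumptions, not requirements of this dataset:
```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          TrainingArguments, Trainer)

label_cols = ['HS', 'Abusive', 'HS_Individual', 'HS_Group', 'HS_Religion',
              'HS_Race', 'HS_Physical', 'HS_Gender', 'HS_Other',
              'HS_Weak', 'HS_Moderate', 'HS_Strong']

model_name = "indobenchmark/indobert-base-p1"  # assumed IndoBERT checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(
    model_name,
    num_labels=len(label_cols),
    problem_type="multi_label_classification",  # BCE loss over all 12 labels
)

def encode(batch):
    enc = tokenizer(batch['Tweet'], truncation=True,
                    padding='max_length', max_length=128)
    # Multi-label targets must be float vectors for the BCE loss
    enc['labels'] = [[float(batch[c][i]) for c in label_cols]
                     for i in range(len(batch['Tweet']))]
    return enc

train_ds = Dataset.from_pandas(df[['Tweet'] + label_cols]).map(encode, batched=True)

args = TrainingArguments(output_dir="indobert-hate-speech", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
Trainer(model=model, args=args, train_dataset=train_ds).train()
```
A complete setup would also hold out a validation split and tune a per-label decision threshold on the sigmoid outputs.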
### Multi-task Learning
```python
# Hierarchical classification approach
# 1. First classify: Normal vs Abusive vs Hate Speech
# 2. If Hate Speech: Classify target and intensity
# 3. Multi-task loss combining all objectives
```
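A minimal sketch of the hierarchical idea, simplified to a binary first stage (hate speech vs. not) and reusing scikit-learn components and the `df` from the Quick Start; a genuinely multi-task loss would need a neural framework and is not shown, and the train/test split is omitted for brevity:
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Stage 1: is a tweet hate speech at all?
vectorizer = TfidfVectorizer(max_features=10000, ngram_range=(1, 2))
X_all = vectorizer.fit_transform(df['Tweet'])
stage1 = LogisticRegression(max_iter=1000, random_state=42)
stage1.fit(X_all, df['HS'])

# Stage 2: for hate-speech tweets only, predict target and intensity labels
fine_labels = ['HS_Individual', 'HS_Group', 'HS_Religion', 'HS_Race',
               'HS_Physical', 'HS_Gender', 'HS_Other',
               'HS_Weak', 'HS_Moderate', 'HS_Strong']
hs_mask = (df['HS'] == 1).values
stage2 = MultiOutputClassifier(LogisticRegression(max_iter=1000, random_state=42))
stage2.fit(X_all[hs_mask], df.loc[hs_mask, fine_labels])

# At inference time, run stage 2 only on tweets that stage 1 flags as hate speech
```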
## Evaluation Metrics
Given the dataset's multi-label and imbalanced nature, the following metrics are recommended (a worked example follows the list):
### Primary Metrics
- **F1-Score**: Macro and micro averages
- **AUC-ROC**: For each label separately
- **Hamming Loss**: Multi-label specific metric
- **Precision/Recall**: Per-label analysis
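A minimal example of these primary metrics, reusing `y_test`, `y_pred`, `classifier`, and `X_test_vec` from the Quick Start:
```python
from sklearn.metrics import f1_score, roc_auc_score

# Macro and micro F1 across all labels
print("Macro F1:", f1_score(y_test, y_pred, average='macro'))
print("Micro F1:", f1_score(y_test, y_pred, average='micro'))

# Per-label ROC-AUC needs probability scores rather than hard predictions;
# MultiOutputClassifier returns one (n_samples, 2) array per label
y_scores = classifier.predict_proba(X_test_vec)
for i, label in enumerate(y_test.columns):
    print(label, roc_auc_score(y_test.iloc[:, i], y_scores[i][:, 1]))
```
Hamming loss and other multi-label specific metrics are shown in the next subsection.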
### Specialized Metrics
```python
from sklearn.metrics import multilabel_confusion_matrix, jaccard_score, hamming_loss
# Multi-label specific metrics on the Quick Start predictions (y_test, y_pred)
jaccard = jaccard_score(y_test, y_pred, average='macro')
hamming = hamming_loss(y_test, y_pred)
per_label_cm = multilabel_confusion_matrix(y_test, y_pred)  # one 2x2 matrix per label
```
## Data Quality & Considerations
### Strengths
- ✅ **Comprehensive Labeling**: Multiple dimensions of hate speech
- ✅ **Large Scale**: 13K+ samples for robust training
- ✅ **Real-world Data**: Actual Indonesian tweets
- ✅ **Intensity Levels**: Enables nuanced classification
- ✅ **Multiple Targets**: Covers various hate speech types
### Limitations
- ⚠️ **Class Imbalance**: Several categories have fewer than 5% positive samples
- ⚠️ **Language Specific**: Limited to Indonesian context
- ⚠️ **Temporal Bias**: Tweet collection timeframe not specified
- ⚠️ **Cultural Context**: May not generalize across Indonesian regions
## Ethical Considerations
**Content Warning**: This dataset contains hate speech and abusive language examples.
### Responsible Use
- **Research Purpose**: Intended for academic and safety research
- **Content Moderation**: Building protective systems
- **Bias Awareness**: Monitor for demographic biases in predictions
- **Privacy**: Tweets should be handled according to platform policies
### Not Suitable For
- Training generative models that could amplify hate speech
- Creating offensive content detection without human oversight
- Commercial use without proper ethical review
## Related Work & Benchmarks
### Indonesian NLP Resources
- **IndoBERT**: Pre-trained Indonesian BERT model
- **Indonesian Sentiment**: Related sentiment analysis datasets
- **Multilingual Models**: Cross-lingual hate speech detection
### Benchmark Performance
Consider comparing against:
- Traditional ML baselines (TF-IDF + SVM)
- Pre-trained language models (mBERT, IndoBERT)
- Multi-task learning approaches
## Citation
```bibtex
@dataset{indonesian_hate_speech_2025,
title={Indonesian Hate Speech Detection Dataset},
year={2025},
publisher={Dataset From Kaggle},
url={https://huggingface.co/datasets/nahiar/indonesian-hate-speech},
note={Multi-label hate speech and abusive language detection for Indonesian social media}
}
```
## Acknowledgments
This dataset contributes to safer Indonesian social media environments and supports research in:
- Multilingual content moderation
- Southeast Asian NLP
- Cross-cultural hate speech patterns
- Social media safety systems
**Note**: Handle this sensitive content responsibly and in accordance with ethical AI principles.