πŸ“Œ Spam Classifier (BERT Fine-Tuned)

Introduction

This is my first fine-tuned model on Hugging Face πŸš€. It is a spam vs. ham (not spam) classifier built by fine-tuning a BERT model on SMS spam data. The goal is to detect unwanted spam messages while leaving normal communications intact. I created and uploaded this model as part of my learning journey into NLP and Transformers.
The model was trained on a spam/ham dataset and achieves high accuracy and a strong F1 score (see Evaluation Results below).
It can be used for SMS filtering, email pre-screening, or any other application that requires spam detection.

πŸ“– Model Details

  • Architecture: BERT base (bert-base-cased)
  • Task: Binary text classification
  • Labels: 0 = ham, 1 = spam
  • Dataset: Custom spam/ham dataset (e.g., the SMS Spam Collection)
  • Fine-tuning epochs: 3
  • Model size: ~108M parameters (F32, safetensors)
  • Framework: Hugging Face Transformers
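The 0/1 label convention above can be written as an explicit mapping, which is handy when post-processing raw model outputs (the dictionary and function names here are illustrative, not part of the model's API):

```python
# Label convention used by this classifier: 0 = ham, 1 = spam.
id2label = {0: "ham", 1: "spam"}
label2id = {name: idx for idx, name in id2label.items()}

def decode_prediction(class_id: int) -> str:
    """Map a raw predicted class id to its human-readable label."""
    return id2label[class_id]

print(decode_prediction(1))  # spam
```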

πŸ§ͺ Evaluation Results

Metric      Score
---------   -----
Accuracy    99.3%
F1 score    97.5%
Precision   100%
Recall      95.1%
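As a quick sanity check, the reported F1 score is consistent with the precision and recall above, since F1 is the harmonic mean of the two:

```python
precision = 1.000  # 100%
recall = 0.951     # 95.1%

# F1 is the harmonic mean of precision and recall.
f1 = 2 * precision * recall / (precision + recall)
print(f"{f1:.3f}")  # 0.975, i.e. 97.5%
```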

πŸš€ How to Use

```python
from transformers import pipeline

# Load the fine-tuned classifier from the Hub.
classifier = pipeline("text-classification", model="Sathya77/spam-ham-classifier")

classifier("Congratulations! You won a free gift card!")
# β†’ [{'label': 'spam', 'score': 0.99}]
```
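In a real filter you would usually apply a confidence threshold before blocking a message. A minimal sketch of that idea, where the threshold value and the stubbed `fake_classify` function are purely illustrative (in practice you would pass in a call to the pipeline shown above):

```python
from typing import Callable

def is_spam(message: str,
            classify: Callable[[str], dict],
            threshold: float = 0.9) -> bool:
    """Flag a message as spam only when the model is confident enough."""
    result = classify(message)
    return result["label"] == "spam" and result["score"] >= threshold

# Stub standing in for the Hugging Face pipeline call above.
def fake_classify(message: str) -> dict:
    if "free" in message.lower():
        return {"label": "spam", "score": 0.99}
    return {"label": "ham", "score": 0.97}

print(is_spam("Congratulations! You won a free gift card!", fake_classify))  # True
print(is_spam("See you at lunch tomorrow?", fake_classify))                  # False
```

Messages the model is unsure about fall through as ham, which trades a little recall for fewer false positives.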

πŸš€ Limitations and Future Work

  • May not generalize perfectly to domains outside SMS/email.
  • Some borderline spam messages may still be misclassified.
  • Future improvements: larger training data, multilingual support.

Thank you for supporting me!
