## Model Description
This is a lightweight model based on Qwen3-0.6B, fine-tuned on the GoEmotions dataset for multi-label emotion classification.
## GoEmotions Dataset
GoEmotions is the largest manually annotated English-language fine-grained emotion dataset, consisting of 58,000 Reddit comments labeled with 27 emotion categories or Neutral. The dataset was designed with both psychological validity and data applicability in mind, offering a balanced representation of emotions with 12 positive, 11 negative, and 4 ambiguous emotion categories.
The 27 emotion categories include: admiration, amusement, anger, annoyance, approval, caring, confusion, curiosity, desire, disappointment, disapproval, disgust, embarrassment, excitement, fear, gratitude, grief, joy, love, nervousness, optimism, pride, realization, relief, remorse, sadness, and surprise.
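For reference, GoEmotions is available on the Hugging Face Hub and can be inspected with the `datasets` library (a minimal sketch; the `simplified` configuration carries the 27 categories plus neutral):

```python
from datasets import load_dataset

# "simplified" config: 27 emotion categories + neutral, multi-label ids
ds = load_dataset("go_emotions", "simplified")
label_names = ds["train"].features["labels"].feature.names

example = ds["train"][0]
print(example["text"])
print([label_names[i] for i in example["labels"]])  # one or more labels per comment
```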
## Model Architecture
Built on the Qwen3-0.6B architecture, this model provides an efficient solution for emotion detection tasks while maintaining strong performance. The lightweight design (0.6B parameters) makes it suitable for deployment in resource-constrained environments, including edge devices and real-time applications.
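For a rough sense of the footprint, some back-of-the-envelope arithmetic (an estimate that assumes parameter storage dominates and ignores activations and KV cache):

```python
# Approximate weight memory for a 0.6B-parameter model
params = 0.6e9
print(f"fp16 : ~{params * 2 / 1e9:.1f} GB")    # 2 bytes/param -> ~1.2 GB
print(f"4-bit: ~{params * 0.5 / 1e9:.2f} GB")  # 0.5 bytes/param -> ~0.30 GB
```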
## Multi-Label Classification
Each input sentence can be assigned multiple emotion labels simultaneously, reflecting the complexity of real emotional expressions where people often experience mixed or overlapping emotions. This multi-label capability is particularly valuable for understanding nuanced emotional content in conversational contexts.
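Because the model emits labels as generated text, a small post-processing step can turn a generation into a label set. A minimal sketch, assuming the fine-tune emits comma-separated label names (`parse_labels` is a hypothetical helper, not part of the model):

```python
# Hypothetical helper: map a generated string to a set of valid GoEmotions
# labels, assuming labels are emitted as comma-separated names.
GOEMOTIONS_LABELS = {
    "admiration", "amusement", "anger", "annoyance", "approval", "caring",
    "confusion", "curiosity", "desire", "disappointment", "disapproval",
    "disgust", "embarrassment", "excitement", "fear", "gratitude", "grief",
    "joy", "love", "nervousness", "optimism", "pride", "realization",
    "relief", "remorse", "sadness", "surprise", "neutral",
}

def parse_labels(generated: str) -> set:
    candidates = (part.strip().lower() for part in generated.split(","))
    return {c for c in candidates if c in GOEMOTIONS_LABELS}

print(parse_labels("gratitude, joy"))  # {'gratitude', 'joy'}
```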
## Usage

Load the model with 4-bit quantization and generate emotion labels:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

tokenizer = AutoTokenizer.from_pretrained(
    "FritzStack/Go-QWENemotions-0.6B",
    trust_remote_code=True,
)
model = AutoModelForCausalLM.from_pretrained(
    "FritzStack/Go-QWENemotions-0.6B",
    load_in_4bit=True,  # requires bitsandbytes; the base checkpoint is a 4-bit quant
    device_map="auto",
    trust_remote_code=True,
)

def predict_emotions(text, max_new_tokens=50):
    """Generate emotion labels for a given text."""
    inputs = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.9,
            top_k=10,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens, skipping the prompt and any
    # special tokens (EOS/pad) so the returned label string is clean.
    generated_text = tokenizer.decode(
        outputs[0][inputs.input_ids.shape[1]:],
        skip_special_tokens=True,
    ).strip()
    return generated_text
```
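A quick usage sketch; the exact output format depends on the label template used during fine-tuning, so the printed string below is illustrative:

```python
text = "Thanks so much, this made my day!"
print(predict_emotions(text))
# Possible output: "gratitude, joy"
```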
## Uploaded model
- Developed by: FritzStack
- License: apache-2.0
- Finetuned from model: unsloth/qwen3-0.6b-unsloth-bnb-4bit
This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.
