---
library_name: transformers.js
base_model:
- prithivMLmods/Speech-Emotion-Classification
---
# Speech-Emotion-Classification (ONNX)
This is an ONNX version of prithivMLmods/Speech-Emotion-Classification. It was automatically converted and uploaded using this space.
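Below is a minimal usage sketch with Transformers.js (`npm i @huggingface/transformers`), following the library's standard audio-classification pipeline API. The repository id and audio URL are placeholders: replace `<this-repo-id>` with this repository's id and point the classifier at your own audio.

```js
import { pipeline } from '@huggingface/transformers';

// Create an audio-classification pipeline.
// Replace '<this-repo-id>' with the id of this ONNX repository.
const classifier = await pipeline('audio-classification', '<this-repo-id>');

// Classify speech from a URL (a Float32Array of raw audio also works).
const url = 'https://huggingface.co/datasets/Xenova/transformers.js-docs/resolve/main/jfk.wav';
const output = await classifier(url);

// output is an array of { label, score } objects; the labels depend on
// the emotion label set of the fine-tuned model.
console.log(output);
```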
## Speech-Emotion-Classification
Speech-Emotion-Classification is a fine-tuned version of facebook/wav2vec2-base-960h for multi-class audio classification, specifically trained to detect emotions in speech. The model uses the `Wav2Vec2ForSequenceClassification` architecture to classify speaker emotions from audio signals.
## Intended Use
Speech-Emotion-Classification is designed for:
- Speech Emotion Analytics – Analyze speaker emotions in call centers, interviews, or therapeutic sessions.
- Conversational AI Personalization – Adjust voice assistant responses based on detected emotion.
- Mental Health Monitoring – Support emotion recognition in voice-based wellness or teletherapy apps.
- Voice Dataset Curation – Tag or filter speech datasets by emotion for research or model training (see the tagging sketch after this list).
- Media Annotation – Automatically annotate podcasts, audiobooks, or videos with speaker emotion metadata.
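As an illustration of the dataset-curation use case, the sketch below tags a small batch of clips with their top predicted emotion. The clip URLs and the repository id are placeholders, and the result handling assumes only that the pipeline returns an array of `{ label, score }` objects.

```js
import { pipeline } from '@huggingface/transformers';

// Placeholder clip URLs; replace with your own dataset entries.
const clips = [
  'https://example.com/clip_001.wav',
  'https://example.com/clip_002.wav',
];

// Replace '<this-repo-id>' with the id of this ONNX repository.
const classifier = await pipeline('audio-classification', '<this-repo-id>');

const tagged = [];
for (const url of clips) {
  const results = await classifier(url);
  // Pick the highest-scoring label without relying on result ordering.
  const top = results.reduce((a, b) => (a.score > b.score ? a : b));
  tagged.push({ url, emotion: top.label, score: top.score });
}

console.table(tagged);
```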