# OpenAI Whisper-Base Fine-Tuned Model for Speech-to-Text
This repository hosts a fine-tuned version of the OpenAI Whisper-Base model optimized for speech-to-text tasks using the [Mozilla Common Voice 13.0](https://commonvoice.mozilla.org/) dataset. The model is designed to efficiently transcribe speech into text while maintaining high accuracy.
## Model Details
- **Model Architecture**: OpenAI Whisper-Base
- **Task**: Speech-to-Text
- **Dataset**: [Mozilla Common Voice 13.0](https://commonvoice.mozilla.org/)
- **Quantization**: FP16
- **Fine-tuning Framework**: Hugging Face Transformers
## 🚀 Usage
### Installation
```bash
pip install transformers torch torchaudio
```
### Loading the Model
```python
from transformers import WhisperProcessor, WhisperForConditionalGeneration
import torch

# Use a GPU if one is available
device = "cuda" if torch.cuda.is_available() else "cpu"

model_name = "AventIQ-AI/whisper-speech-text"
model = WhisperForConditionalGeneration.from_pretrained(model_name).to(device)
processor = WhisperProcessor.from_pretrained(model_name)
```
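If you are running on a GPU, you can also load the weights directly in half precision to reduce memory use. This is a hedged variant using the standard `torch_dtype` argument of `from_pretrained`; FP16 inference on CPU is typically slow, so keep the default there:
```python
# Optional: load the checkpoint in FP16 on GPU to roughly halve memory use
model_fp16 = WhisperForConditionalGeneration.from_pretrained(
    model_name, torch_dtype=torch.float16
).to(device)
```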
### Speech-to-Text Inference
```python
import torchaudio

def transcribe(audio_path):
    # Load the audio file (waveform shape: [channels, num_samples])
    waveform, sample_rate = torchaudio.load(audio_path)
    # Whisper expects mono 16 kHz input: downmix and resample if necessary
    waveform = waveform.mean(dim=0)
    if sample_rate != 16000:
        waveform = torchaudio.functional.resample(waveform, sample_rate, 16000)
    inputs = processor(waveform.numpy(), sampling_rate=16000, return_tensors="pt").input_features.to(device)
    # Generate transcription
    with torch.no_grad():
        predicted_ids = model.generate(inputs)
    transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
    return transcription

# Example usage
audio_file = "sample_audio.wav"
print(transcribe(audio_file))
```
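Note that Whisper's feature extractor pads or truncates each input to a 30-second log-Mel window, so for longer recordings you should split the audio into chunks of at most 30 seconds and transcribe each chunk separately.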
## 📊 Evaluation Results
After fine-tuning the Whisper-Base model for speech-to-text, we evaluated the model's performance on the validation set from the Common Voice 13.0 dataset. The following results were obtained:
| Metric     | Score  | Meaning                                                                          |
|------------|--------|----------------------------------------------------------------------------------|
| **WER**    | 8.2%   | Word Error Rate: fraction of words substituted, inserted, or deleted (lower is better) |
| **CER**    | 4.5%   | Character Error Rate: the same measure at the character level (lower is better)  |
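To reproduce these metrics on your own test data, a minimal sketch using the third-party `jiwer` library (`pip install jiwer`) looks like this; the reference and hypothesis strings are illustrative placeholders:
```python
from jiwer import wer, cer

reference = "the quick brown fox jumps over the lazy dog"
hypothesis = "the quick brown fox jumped over a lazy dog"

print(f"WER: {wer(reference, hypothesis):.1%}")  # word-level errors / reference words
print(f"CER: {cer(reference, hypothesis):.1%}")  # character-level errors / reference characters
```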
## Fine-Tuning Details
### Dataset
The Mozilla Common Voice 13.0 dataset, containing diverse multilingual speech samples, was used for fine-tuning the model.
### Training
- **Number of epochs**: 3
- **Batch size**: 8
- **Evaluation strategy**: epoch (see the configuration sketch below)
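The exact training script is not published; the following is a minimal sketch of how the hyperparameters above map onto Hugging Face's `Seq2SeqTrainingArguments` (the `output_dir` path is hypothetical):
```python
from transformers import Seq2SeqTrainingArguments

# Hedged sketch: mirrors the hyperparameters listed above
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-common-voice",  # hypothetical output path
    num_train_epochs=3,
    per_device_train_batch_size=8,
    eval_strategy="epoch",  # named `evaluation_strategy` on older transformers releases
)
```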
### Quantization
Post-training quantization to FP16 was applied using PyTorch's half-precision support, reducing the model size and improving inference efficiency.
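The exact conversion script is not published; one common approach, shown here as an assumption rather than the authors' method, is a simple half-precision cast:
```python
import torch
from transformers import WhisperForConditionalGeneration

# Hedged sketch: cast an FP32 Whisper checkpoint to FP16 and save it
fp32_model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base")
fp16_model = fp32_model.half()  # converts all weights to torch.float16
fp16_model.save_pretrained("./whisper-base-fp16")  # hypothetical output path
```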
## 📂 Repository Structure
```bash
.
├── model/               # Contains the quantized model files
├── tokenizer_config/    # Tokenizer configuration and vocabulary files
├── model.safetensors    # Quantized model weights
└── README.md            # Model documentation
```
## ⚠️ Limitations
- The model may struggle with highly noisy or overlapping speech.
- Quantization may lead to slight degradation in accuracy compared to full-precision models.
- Performance may vary across different accents and dialects.
## 🤝 Contributing
Contributions are welcome! Feel free to open an issue or submit a pull request if you have suggestions or improvements.