---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- en
tags:
- speech
- audio
- dataset
- tts
- asr
- merged-dataset
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: "data.csv"
  default: true
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: speaker_id
    dtype: string
  - name: emotion
    dtype: string
  - name: language
    dtype: string
  splits:
  - name: train
    num_examples: 345
  config_name: default
---
# test3
This is a merged speech dataset containing 345 audio segments from 2 source datasets.
## Dataset Information
- **Total Segments**: 345
- **Speakers**: 7
- **Languages**: en
- **Emotions**: happy, neutral, angry, sad
- **Original Datasets**: 2
## Dataset Structure
Each example contains:
- `audio`: Audio file (WAV format, 16kHz sampling rate)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier (made unique across all merged datasets)
- `emotion`: Detected emotion (neutral, happy, sad, etc.)
- `language`: Language code (en, es, fr, etc.)
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("Codyfederer/test3")
# Access the training split
train_data = dataset["train"]
# Example: Get first sample
sample = train_data[0]
print(f"Text: {sample['text']}")
print(f"Speaker: {sample['speaker_id']}")
print(f"Language: {sample['language']}")
print(f"Emotion: {sample['emotion']}")
# Play audio (requires audio libraries)
# sample['audio']['array'] contains the audio data
# sample['audio']['sampling_rate'] contains the sampling rate
```
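To listen to or export a segment, the decoded array can be written back to a WAV file. The sketch below uses a synthetic sine wave in place of `sample['audio']['array']` so it runs without downloading the dataset; substitute the real array and sampling rate in practice.

```python
import numpy as np
import wave

# Stand-in for sample['audio']['array']: a 0.5 s, 440 Hz sine at 16 kHz.
sampling_rate = 16000
t = np.linspace(0, 0.5, int(sampling_rate * 0.5), endpoint=False)
audio_array = 0.5 * np.sin(2 * np.pi * 440 * t)

# Convert float audio in [-1, 1] to 16-bit PCM and write a mono WAV file.
pcm = (audio_array * 32767).astype(np.int16)
with wave.open("sample.wav", "wb") as f:
    f.setnchannels(1)           # mono
    f.setsampwidth(2)           # 16-bit samples
    f.setframerate(sampling_rate)
    f.writeframes(pcm.tobytes())

duration = len(audio_array) / sampling_rate
print(f"Wrote sample.wav ({duration:.2f} s)")
```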
### Alternative: Load from CSV
```python
import pandas as pd
from datasets import Dataset, Audio, Features, Value
# Load the CSV file
df = pd.read_csv("data.csv")
# Define features
features = Features({
    "audio": Audio(sampling_rate=16000),
    "text": Value("string"),
    "speaker_id": Value("string"),
    "emotion": Value("string"),
    "language": Value("string"),
})
# Create dataset
dataset = Dataset.from_pandas(df, features=features)
```
### Dataset Structure
The dataset includes:
- `data.csv` - Main dataset file with all columns
- `segments/` - Directory containing all audio files
- `load_dataset.txt` - Python script for loading the dataset (rename to .py to use)
CSV columns:
- `audio`: Path to the audio file (in segments/ directory)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier
- `emotion`: Detected emotion
- `language`: Language code
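Because `data.csv` carries all metadata, quick sanity checks are possible with pandas alone. The sketch below uses a small in-memory frame with the same columns in place of the real `data.csv`:

```python
import pandas as pd

# In-memory stand-in for data.csv (same columns as described above).
df = pd.DataFrame({
    "audio": ["segments/seg_000.wav", "segments/seg_001.wav", "segments/seg_002.wav"],
    "text": ["hello there", "good morning", "how are you"],
    "speaker_id": ["speaker_0", "speaker_0", "speaker_1"],
    "emotion": ["neutral", "happy", "neutral"],
    "language": ["en", "en", "en"],
})

# Count segments per speaker and per emotion -- a quick check after download.
per_speaker = df["speaker_id"].value_counts().to_dict()
per_emotion = df["emotion"].value_counts().to_dict()
print(per_speaker)
print(per_emotion)
```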
## Speaker ID Mapping
Speaker IDs have been made unique across all merged datasets to avoid conflicts.
For example:
- Original Dataset A: `speaker_0`, `speaker_1`
- Original Dataset B: `speaker_0`, `speaker_1`
- Merged Dataset: `speaker_0`, `speaker_1`, `speaker_2`, `speaker_3` (Dataset B's speakers are renumbered to `speaker_2` and `speaker_3`)
Original dataset information is preserved in the metadata for reference.
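The renumbering scheme above can be sketched as follows. This is an illustration of the idea, not the Vyvo tool's actual code: each source dataset's local speaker indices are shifted by a running offset so merged IDs never collide.

```python
def merge_speaker_ids(datasets):
    """datasets: list of lists of local speaker indices, e.g. [[0, 1], [0, 1]]."""
    merged = []
    offset = 0
    for local_ids in datasets:
        # Shift this dataset's local indices by the running offset.
        merged.extend(f"speaker_{offset + i}" for i in local_ids)
        offset += len(set(local_ids))
    return merged

print(merge_speaker_ids([[0, 1], [0, 1]]))
# -> ['speaker_0', 'speaker_1', 'speaker_2', 'speaker_3']
```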
## Data Quality
This dataset was created using the Vyvo Dataset Builder with:
- Automatic transcription and diarization
- Quality filtering for audio segments
- Music and noise filtering
- Emotion detection
- Language identification
## License
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
## Citation
```bibtex
@dataset{vyvo_merged_dataset,
  title={test3},
  author={Vyvo Dataset Builder},
  year={2025},
  url={https://huggingface.co/datasets/Codyfederer/test3}
}
```
This dataset was created using the Vyvo Dataset Builder tool.