---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- en
tags:
- speech
- audio
- dataset
- tts
- asr
- merged-dataset
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: "data.csv"
  default: true
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: 16000
  - name: text
    dtype: string
  - name: speaker_id
    dtype: string
  - name: emotion
    dtype: string
  - name: language
    dtype: string
  splits:
  - name: train
    num_examples: 345
  config_name: default
---
# test3
This is a merged speech dataset containing 345 audio segments from 2 source datasets.
## Dataset Information
- **Total Segments**: 345
- **Speakers**: 7
- **Languages**: en
- **Emotions**: happy, neutral, angry, sad
- **Original Datasets**: 2
## Dataset Structure
Each example contains:
- `audio`: Audio file (WAV format, 16kHz sampling rate)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier (made unique across all merged datasets)
- `emotion`: Detected emotion (neutral, happy, sad, etc.)
- `language`: Language code (en, es, fr, etc.)
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("Codyfederer/test3")
# Access the training split
train_data = dataset["train"]
# Example: Get first sample
sample = train_data[0]
print(f"Text: {sample['text']}")
print(f"Speaker: {sample['speaker_id']}")
print(f"Language: {sample['language']}")
print(f"Emotion: {sample['emotion']}")
# Play audio (requires audio libraries)
# sample['audio']['array'] contains the audio data
# sample['audio']['sampling_rate'] contains the sampling rate
```
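Even without playback libraries, the raw array plus sampling rate is enough for quick sanity checks such as computing a clip's duration. A minimal sketch (the `sample` dict below is a stand-in for a real decoded example, which has the same `{"array", "sampling_rate"}` layout):

```python
import numpy as np

# Stand-in for a decoded example from the dataset; a real sample
# exposes the same keys under "audio".
sample = {
    "audio": {"array": np.zeros(16000, dtype=np.float32), "sampling_rate": 16000},
    "text": "placeholder transcription",
}

# Duration in seconds = number of samples / samples per second
duration_s = len(sample["audio"]["array"]) / sample["audio"]["sampling_rate"]
print(duration_s)  # 1.0
```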
### Alternative: Load from CSV
```python
import pandas as pd
from datasets import Dataset, Audio, Features, Value
# Load the CSV file
df = pd.read_csv("data.csv")
# Define features
features = Features({
"audio": Audio(sampling_rate=16000),
"text": Value("string"),
"speaker_id": Value("string"),
"emotion": Value("string"),
"language": Value("string")
})
# Create dataset
dataset = Dataset.from_pandas(df, features=features)
```
### Dataset Structure
The dataset includes:
- `data.csv` - Main dataset file with all columns
- `segments/` - Directory containing all audio files
- `load_dataset.txt` - Python script for loading the dataset (rename to .py to use)
CSV columns:
- `audio`: Path to the audio file (in segments/ directory)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier
- `emotion`: Detected emotion
- `language`: Language code
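Because `data.csv` carries plain metadata columns, subsets can be selected with ordinary pandas filtering before building a `Dataset`. A small sketch using made-up rows that follow the schema above:

```python
import pandas as pd

# Made-up rows following the documented CSV schema
df = pd.DataFrame({
    "audio": ["segments/seg_0.wav", "segments/seg_1.wav", "segments/seg_2.wav"],
    "text": ["hello there", "how are you", "fine thanks"],
    "speaker_id": ["speaker_0", "speaker_1", "speaker_0"],
    "emotion": ["happy", "sad", "happy"],
    "language": ["en", "en", "en"],
})

# Keep only the happy segments from speaker_0
subset = df[(df["emotion"] == "happy") & (df["speaker_id"] == "speaker_0")]
print(len(subset))  # 2
```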
## Speaker ID Mapping
Speaker IDs have been made unique across all merged datasets to avoid conflicts.
For example:
- Original Dataset A: `speaker_0`, `speaker_1`
- Original Dataset B: `speaker_0`, `speaker_1`
- Merged Dataset: `speaker_0`, `speaker_1`, `speaker_2`, `speaker_3`
Original dataset information is preserved in the metadata for reference.
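The remapping can be pictured as assigning a fresh global ID to each (source dataset, local speaker) pair, in order. This sketch is illustrative only, not the builder's actual code:

```python
# Offset-based speaker remapping: each source dataset's local IDs
# get consecutive global IDs, continuing where the previous left off.
def remap_speakers(datasets):
    """datasets: list of lists of speaker_id strings like 'speaker_0'."""
    mapping, offset = {}, 0
    for i, ids in enumerate(datasets):
        local = sorted(set(ids), key=lambda s: int(s.rsplit("_", 1)[1]))
        for sid in local:
            mapping[(i, sid)] = f"speaker_{offset}"
            offset += 1
    return mapping

m = remap_speakers([["speaker_0", "speaker_1"], ["speaker_0", "speaker_1"]])
print(m[(1, "speaker_0")])  # speaker_2
```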
## Data Quality
This dataset was created using the Vyvo Dataset Builder with:
- Automatic transcription and diarization
- Quality filtering for audio segments
- Music and noise filtering
- Emotion detection
- Language identification
## License
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
## Citation
```bibtex
@dataset{vyvo_merged_dataset,
title={test3},
author={Vyvo Dataset Builder},
year={2025},
url={https://huggingface.co/datasets/Codyfederer/test3}
}
```
This dataset was created using the Vyvo Dataset Builder tool.