---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- tr
tags:
- speech
- audio
- dataset
- tts
- asr
- merged-dataset
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  - split: train
    path: "data.jsonl"
  default: true
dataset_info:
  features:
  - name: audio
    dtype:
      audio:
        sampling_rate: null
  - name: text
    dtype: string
  - name: speaker_id
    dtype: string
  - name: emotion
    dtype: string
  - name: language
    dtype: string
  splits:
  - name: train
    num_examples: 491
  config_name: default
---
# tetttttt
This is a merged Turkish speech dataset containing 491 audio segments drawn from 2 source datasets.
## Dataset Information
- **Total Segments**: 491
- **Speakers**: 2
- **Languages**: tr
- **Emotions**: angry, neutral, happy
- **Original Datasets**: 2
## Dataset Structure
Each example contains:
- `audio`: Audio file (WAV format, original sampling rate preserved)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier (made unique across all merged datasets)
- `emotion`: Detected emotion label (this dataset contains angry, neutral, and happy)
- `language`: Language code (tr for all segments in this dataset)
## Usage
### Loading the Dataset
```python
from datasets import load_dataset
# Load the dataset
dataset = load_dataset("Codyfederer/tetttttt")
# Access the training split
train_data = dataset["train"]
# Example: Get first sample
sample = train_data[0]
print(f"Text: {sample['text']}")
print(f"Speaker: {sample['speaker_id']}")
print(f"Language: {sample['language']}")
print(f"Emotion: {sample['emotion']}")
# Play audio (requires audio libraries)
# sample['audio']['array'] contains the audio data
# sample['audio']['sampling_rate'] contains the sampling rate
```
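The `emotion` and `language` columns make it easy to subset the data, either with `datasets`' built-in `filter` or with plain Python once rows are materialized. A minimal sketch over hypothetical rows (the values below are illustrative, not actual records from the dataset):

```python
# Hypothetical rows mirroring the dataset's non-audio columns (illustrative values only)
rows = [
    {"text": "merhaba", "speaker_id": "speaker_0", "emotion": "happy", "language": "tr"},
    {"text": "hayır", "speaker_id": "speaker_1", "emotion": "angry", "language": "tr"},
    {"text": "evet", "speaker_id": "speaker_0", "emotion": "neutral", "language": "tr"},
]

# Keep only segments carrying a given emotion label
happy = [r for r in rows if r["emotion"] == "happy"]
print(len(happy))  # 1
```

With the dataset loaded as above, the equivalent call is `train_data.filter(lambda r: r["emotion"] == "happy")`.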
### Alternative: Load from JSONL
```python
from datasets import Dataset, Audio, Features, Value
import json

# Load the JSONL file
rows = []
with open("data.jsonl", "r", encoding="utf-8") as f:
    for line in f:
        rows.append(json.loads(line))

features = Features({
    "audio": Audio(sampling_rate=None),
    "text": Value("string"),
    "speaker_id": Value("string"),
    "emotion": Value("string"),
    "language": Value("string"),
})

dataset = Dataset.from_list(rows, features=features)
```
### Repository Files
The repository includes:
- `data.jsonl` - Main dataset file with all columns (JSON Lines)
- `*.wav` - Audio files under `audio_XXX/` subdirectories
- `load_dataset.txt` - Python script for loading the dataset (rename to .py to use)
JSONL keys:
- `audio`: Relative audio path (e.g., `audio_000/segment_000000_speaker_0.wav`)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier
- `emotion`: Detected emotion
- `language`: Language code
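Putting the keys together, one line of `data.jsonl` looks roughly like the following; parsing it needs only the standard library. The record below is illustrative, not an actual entry from the dataset:

```python
import json

# Illustrative JSONL line following the keys above (not an actual record)
line = ('{"audio": "audio_000/segment_000000_speaker_0.wav", '
        '"text": "merhaba", "speaker_id": "speaker_0", '
        '"emotion": "neutral", "language": "tr"}')

record = json.loads(line)
print(record["audio"])     # relative path to the WAV file
print(record["language"])  # tr
```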
## Speaker ID Mapping
Speaker IDs have been made unique across all merged datasets to avoid conflicts.
For example:
- Original Dataset A: `speaker_0`, `speaker_1`
- Original Dataset B: `speaker_0`, `speaker_1`
- Merged Dataset: `speaker_0`, `speaker_1`, `speaker_2`, `speaker_3`
Original dataset information is preserved in the metadata for reference.
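The renumbering described above can be sketched as an offset-based remap. The function name and row shape below are hypothetical, not part of the Vyvo tooling:

```python
def remap_speakers(source_datasets):
    """Renumber speaker IDs so they stay unique across merged datasets.

    `source_datasets` is a list of row lists; each row carries a local
    "speaker_id" such as "speaker_0". Hypothetical sketch, not the actual
    Vyvo Dataset Builder implementation.
    """
    merged = []
    offset = 0
    for rows in source_datasets:
        local_ids = sorted({r["speaker_id"] for r in rows})
        mapping = {sid: f"speaker_{offset + i}" for i, sid in enumerate(local_ids)}
        for r in rows:
            merged.append({**r, "speaker_id": mapping[r["speaker_id"]]})
        offset += len(local_ids)
    return merged

# Datasets A and B both use speaker_0/speaker_1 locally
a = [{"speaker_id": "speaker_0"}, {"speaker_id": "speaker_1"}]
b = [{"speaker_id": "speaker_0"}, {"speaker_id": "speaker_1"}]
merged = remap_speakers([a, b])
print(sorted({r["speaker_id"] for r in merged}))
# ['speaker_0', 'speaker_1', 'speaker_2', 'speaker_3']
```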
## Data Quality
This dataset was created using the Vyvo Dataset Builder with:
- Automatic transcription and diarization
- Quality filtering for audio segments
- Music and noise filtering
- Emotion detection
- Language identification
## License
This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).
## Citation
```bibtex
@dataset{vyvo_merged_dataset,
  title={tetttttt},
  author={Vyvo Dataset Builder},
  year={2025},
  url={https://huggingface.co/datasets/Codyfederer/tetttttt}
}
```
This dataset was created using the Vyvo Dataset Builder tool.