---
license: cc-by-4.0
task_categories:
  - automatic-speech-recognition
  - text-to-speech
language:
  - en
tags:
  - speech
  - audio
  - dataset
  - tts
  - asr
  - merged-dataset
size_categories:
  - 1K<n<10K
configs:
  - config_name: default
    data_files:
      - split: train
        path: data.csv
    default: true
dataset_info:
  features:
    - name: audio
      dtype:
        audio:
          sampling_rate: 16000
    - name: text
      dtype: string
    - name: speaker_id
      dtype: string
    - name: emotion
      dtype: string
    - name: language
      dtype: string
  splits:
    - name: train
      num_examples: 1994
  config_name: default
---

# test6

This is a merged speech dataset containing 1994 audio segments from 2 source datasets.

## Dataset Information

- **Total Segments:** 1994
- **Speakers:** 3
- **Languages:** en
- **Emotions:** neutral, negative_surprise, positive_surprise, distress, relief, contentment, adoration, interest, confusion, happy, sadness, triumph, fear, disappointment, awe, realization, angry
- **Original Datasets:** 2

## Dataset Structure

Each example contains:

- `audio`: Audio file (WAV format, 16 kHz sampling rate)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier (made unique across all merged datasets)
- `emotion`: Detected emotion (neutral, happy, sadness, etc.)
- `language`: Language code (`en`, `es`, `fr`, ...; this dataset contains only `en`)

## Usage

### Loading the Dataset

```python
from datasets import load_dataset

# Load the dataset
dataset = load_dataset("Codyfederer/test6")

# Access the training split
train_data = dataset["train"]

# Example: get the first sample
sample = train_data[0]
print(f"Text: {sample['text']}")
print(f"Speaker: {sample['speaker_id']}")
print(f"Language: {sample['language']}")
print(f"Emotion: {sample['emotion']}")

# Play audio (requires audio libraries)
# sample['audio']['array'] contains the audio data
# sample['audio']['sampling_rate'] contains the sampling rate
```
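
To listen to or export a segment, any audio I/O library that accepts a NumPy array and a sampling rate will do. Below is a minimal sketch using soundfile, which is an extra dependency chosen here for illustration, not something the dataset itself requires:

```python
import soundfile as sf

# Assumes `train_data` from the snippet above
sample = train_data[0]
audio = sample["audio"]

# Write the decoded waveform back to a WAV file
sf.write("sample_0.wav", audio["array"], audio["sampling_rate"])

# Rough duration check (sampling rate is 16000 Hz for this dataset)
print(f"Duration: {len(audio['array']) / audio['sampling_rate']:.2f} s")
```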

### Alternative: Load from CSV

```python
import pandas as pd
from datasets import Dataset, Audio, Features, Value

# Load the CSV file
df = pd.read_csv("data.csv")

# Define features
features = Features({
    "audio": Audio(sampling_rate=16000),
    "text": Value("string"),
    "speaker_id": Value("string"),
    "emotion": Value("string"),
    "language": Value("string")
})

# Create dataset
dataset = Dataset.from_pandas(df, features=features)
```
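
However the dataset is loaded, the string columns can be used directly for slicing and simple statistics. A small sketch ("happy" is one of the emotion labels listed above; `filter` visits every example, so this favors simplicity over speed):

```python
from collections import Counter

# Count segments per speaker without decoding any audio
per_speaker = Counter(dataset["speaker_id"])
print(per_speaker)

# Keep only segments labelled "happy"
happy = dataset.filter(lambda example: example["emotion"] == "happy")
print(f"happy segments: {len(happy)}")
```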

## Repository Contents

The dataset includes:

- `data.csv` - Main dataset file with all columns
- `*.wav` - Audio files in the root directory
- `load_dataset.txt` - Python script for loading the dataset (rename to `.py` to use)

CSV columns:

- `audio`: Audio filename (in root directory)
- `text`: Transcription of the audio
- `speaker_id`: Unique speaker identifier
- `emotion`: Detected emotion
- `language`: Language code
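
For a quick sanity check of the raw CSV without decoding any audio, pandas alone is enough; this assumes the repository has been downloaded so that `data.csv` is in the working directory:

```python
import pandas as pd

df = pd.read_csv("data.csv")

# Compare against the numbers reported above
print(len(df))                       # expected: 1994 segments
print(df["speaker_id"].nunique())    # expected: 3 speakers
print(df["emotion"].value_counts())  # distribution over the emotion labels
```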

## Speaker ID Mapping

Speaker IDs have been made unique across all merged datasets to avoid conflicts. For example:

- Original Dataset A: `speaker_0`, `speaker_1`
- Original Dataset B: `speaker_0`, `speaker_1`
- Merged Dataset: `speaker_0`, `speaker_1` (from A), `speaker_2`, `speaker_3` (from B)

Original dataset information is preserved in the metadata for reference.
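
The remapping code itself is not part of this repository; the sketch below only illustrates the idea, and the function name and the sequential `speaker_N` scheme are assumptions rather than the builder's actual implementation:

```python
def remap_speaker_ids(datasets_by_name):
    """Illustrative only: assign globally unique speaker IDs when merging.

    `datasets_by_name` maps a source-dataset name to a list of row dicts,
    each containing at least a "speaker_id" key. This is NOT the Vyvo
    Dataset Builder's implementation, just the general idea.
    """
    merged = []
    next_id = 0
    for name, rows in datasets_by_name.items():
        local_to_global = {}
        for row in rows:
            original = row["speaker_id"]
            if original not in local_to_global:
                local_to_global[original] = f"speaker_{next_id}"
                next_id += 1
            merged.append({**row, "speaker_id": local_to_global[original]})
    return merged
```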

## Data Quality

This dataset was created using the Vyvo Dataset Builder with:

- Automatic transcription and diarization
- Quality filtering for audio segments
- Music and noise filtering
- Emotion detection
- Language identification

## License

This dataset is released under the Creative Commons Attribution 4.0 International License (CC BY 4.0).

## Citation

```bibtex
@dataset{vyvo_merged_dataset,
  title={test6},
  author={Vyvo Dataset Builder},
  year={2025},
  url={https://huggingface.co/datasets/Codyfederer/test6}
}
```

This dataset was created using the Vyvo Dataset Builder tool.