---
dataset_info:
  - config_name: role_playing
    features:
      - name: ID
        dtype: int64
      - name: text_0
        dtype: string
      - name: text_1
        dtype: string
      - name: audio_0
        dtype:
          audio:
            sampling_rate: 16000
      - name: audio_1
        dtype:
          audio:
            sampling_rate: 16000
      - name: source
        dtype: string
      - name: speaker1
        dtype: string
      - name: speaker2
        dtype: string
    splits:
      - name: test
        num_bytes: 182310504
        num_examples: 20
    download_size: 148908359
    dataset_size: 182310504
  - config_name: voice_instruction_following
    features:
      - name: ID
        dtype: int64
      - name: text_1
        dtype: string
      - name: text_2
        dtype: string
      - name: audio_1
        dtype:
          audio:
            sampling_rate: 16000
      - name: audio_2
        dtype:
          audio:
            sampling_rate: 16000
    splits:
      - name: test
        num_bytes: 36665909
        num_examples: 20
    download_size: 35109899
    dataset_size: 36665909
configs:
  - config_name: role_playing
    data_files:
      - split: test
        path: role_playing/test-*
  - config_name: voice_instruction_following
    data_files:
      - split: test
        path: voice_instruction_following/test-*
---

# StyleSet

**WARNING:** This dataset contains some profanity.

A spoken-language benchmark for evaluating speaking-style-related speech generation, released in our paper *Audio-Aware Large Language Models as Judges for Speaking Styles*.

This dataset is released by NTU Speech Lab under the MIT license.

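Both configurations can be loaded with the Hugging Face `datasets` library. A minimal sketch is below; the config and split names come from the metadata above, but the repository id is an assumption and should be replaced with the actual dataset id:

```python
# Minimal loading sketch for StyleSet using the Hugging Face `datasets` library.
# NOTE: the repo id below is an assumption; replace it with the actual dataset id.
STYLESET_REPO = "dcml0714/StyleSet"

# Config names and the split, taken from the dataset metadata above.
CONFIGS = ("role_playing", "voice_instruction_following")
SPLIT = "test"


def load_styleset(config: str):
    """Load one StyleSet config; audio columns decode at 16 kHz per the metadata."""
    if config not in CONFIGS:
        raise ValueError(f"unknown config: {config!r}")
    # Imported lazily so the constants above are usable without `datasets` installed.
    from datasets import load_dataset

    return load_dataset(STYLESET_REPO, config, split=SPLIT)


# Usage (downloads the data):
#   ds = load_styleset("role_playing")
#   print(ds[0]["text_0"], ds[0]["speaker1"])
```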


## Tasks

  1. Voice Style Instruction Following

    • Reproduce a given sentence verbatim.
    • Match specified prosodic styles (emotion, volume, pace, emphasis, pitch, non-verbal cues).
  2. Role Playing

    • Continue a two-turn dialogue prompt in character.
    • Generate the next utterance with appropriate prosody and style.
    • The data are adapted from IEMOCAP with the consent of its authors. Please refer to IEMOCAP for details and for the original recordings; we do not redistribute IEMOCAP data here.
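For the role-playing task, each record pairs two given dialogue turns (`text_0`/`audio_0` and `text_1`/`audio_1`, with `speaker1` and `speaker2`). A minimal sketch of assembling the textual context a speech generator would continue is below; the field names come from the `role_playing` config, but the speaker-to-turn mapping and transcript layout are our assumptions, not the official pipeline:

```python
def role_playing_context(example: dict) -> str:
    """Format the two given turns as a dialogue transcript.

    The model under test should then generate the next utterance in
    character. Field names follow the role_playing config; the mapping of
    speaker1/speaker2 to turns and the layout are illustrative assumptions.
    """
    return (
        f"{example['speaker1']}: {example['text_0']}\n"
        f"{example['speaker2']}: {example['text_1']}\n"
        f"{example['speaker1']}:"
    )


# Dummy record for illustration only (not real dataset content).
example = {
    "speaker1": "SPEAKER_A",
    "speaker2": "SPEAKER_B",
    "text_0": "Did you hear the news?",
    "text_1": "No, tell me everything.",
}
print(role_playing_context(example))
```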

## Evaluation

We use an audio-aware LLM (ALLM) as a judge for evaluation. We found that gemini-2.5-pro-0506 reaches the best agreement with human evaluators. The complete evaluation prompts and the evaluation pipeline can be found in Tables 3 to 5 of our paper.
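The overall shape of the judging setup can be sketched as a rating prompt sent to an ALLM alongside the generated audio. The prompt text below is a placeholder to show the structure (task instruction, judged aspect, rating request), not the exact prompt from Tables 3 to 5 of the paper:

```python
def build_judge_prompt(instruction: str, aspect: str = "speaking style") -> str:
    """Assemble a placeholder ALLM-as-a-judge prompt for one generated clip.

    The real prompts are given in Tables 3 to 5 of the paper; this only
    illustrates the general shape of a style-rating request.
    """
    return (
        "You will hear a generated speech clip.\n"
        f"Task instruction given to the speaker: {instruction}\n"
        f"Rate how well the clip follows the instruction in terms of {aspect}, "
        "on a scale from 1 (poor) to 5 (excellent). Reply with the number only."
    )


prompt = build_judge_prompt("Say the sentence angrily and loudly.")
print(prompt)
```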

## Citation

If you use StyleSet or find ALLM-as-a-judge useful, please cite our paper:

```bibtex
@misc{chiang2025audioawarelargelanguagemodels,
      title={Audio-Aware Large Language Models as Judges for Speaking Styles},
      author={Cheng-Han Chiang and Xiaofei Wang and Chung-Ching Lin and Kevin Lin and Linjie Li and Radu Kopetz and Yao Qian and Zhendong Wang and Zhengyuan Yang and Hung-yi Lee and Lijuan Wang},
      year={2025},
      eprint={2506.05984},
      archivePrefix={arXiv},
      primaryClass={eess.AS},
      url={https://arxiv.org/abs/2506.05984},
}
```