---
license: apache-2.0
---

# README

## Introduction

This repository hosts Ming-Freeform-Audio-Edit, the benchmark test set for evaluating the downstream editing tasks of the Ming-UniAudio model.

This test set covers 8 distinct editing tasks, categorized as follows:

+ Semantic Editing (3 tasks):
  + Free-form Deletion
  + Free-form Insertion
  + Free-form Substitution
+ Acoustic Editing (5 tasks):
  + Time-stretching
  + Pitch Shifting
  + Dialect Conversion
  + Emotion Conversion
  + Volume Conversion

The audio samples are sourced from well-known open-source datasets, including Seed-TTS eval, LibriTTS, and GigaSpeech.

## Dataset statistics

### Semantic Editing

| Task type \ Language (# samples) | Zh deletion | Zh insertion | Zh substitution | En deletion | En insertion | En substitution |
| -------------------------------- | ----------: | -----------: | --------------: | ----------: | -----------: | --------------: |
| Index-based                      |         186 |          180 |              36 |         138 |          100 |              67 |
| Content-based                    |          95 |          110 |             289 |          62 |           99 |             189 |
| Total                            |         281 |          290 |             325 |         200 |          199 |             256 |

*Index-based* instruction: specifies an operation on the content at positions $i$ to $j$ (e.g. delete the characters or words from index 3 to 12).

*Content-based* instruction: targets specific characters or words for editing (e.g. insert 'hello' before 'world').

### Acoustic Editing

| Task type \ Language (# samples) |  Zh |  En |
| -------------------------------- | --: | --: |
| Time-stretching                  |  50 |  50 |
| Pitch Shifting                   |  50 |  50 |
| Dialect Conversion               | 250 | --- |
| Emotion Conversion               |  84 |  72 |
| Volume Conversion                |  50 |  50 |

## Evaluation Metrics

### Semantic Editing

For the deletion, insertion, and substitution tasks, we evaluate performance using four key metrics:

+ Word Error Rate (WER) of the edited region (`wer`)
+ Word Error Rate (WER) of the non-edited region (`wer.noedit`)
+ Edit operation accuracy (`acc`)
+ Speaker similarity (`sim`)

These metrics can be calculated by running the following command:

```bash
# run `pip install -r requirements.txt` first
bash eval_scripts/semantic/run_eval.sh /path/contains/edited/audios
```
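For reference, WER is the token-level Levenshtein (edit) distance between the recognized transcript and the reference, normalized by the reference length. Below is a minimal sketch of that computation; it is a hypothetical helper, not the benchmark's own implementation, and the official scripts may tokenize differently (e.g. per character for Chinese).

```python
def wer(ref_tokens, hyp_tokens):
    """Levenshtein distance over tokens, divided by the reference length."""
    m, n = len(ref_tokens), len(hyp_tokens)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i  # delete all remaining reference tokens
    for j in range(n + 1):
        d[0][j] = j  # insert all hypothesis tokens
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if ref_tokens[i - 1] == hyp_tokens[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / max(m, 1)

print(wer("the cat sat".split(), "the cat sat".split()))  # 0.0
print(wer(list("abcde"), list("abcdf")))                  # 0.2 (one substitution over 5)
```

For `wer.noedit`, the same computation is restricted to the tokens outside the edited region.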

NOTE: the directory passed to the script above should have the following structure:

```
.
├── del
│   └── edit_del_basic
│       ├── eval_result
│       ├── hyp.txt
│       ├── input_wavs
│       ├── origin_wavs
│       ├── ref.txt
│       ├── test.jsonl
│       ├── test_parse.jsonl  # This file is needed to run the evaluation script
│       ├── test.yaml
│       └── tts/              # This directory contains the edited wavs
```
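A quick pre-flight check of this layout can save a failed run. A minimal sketch, assuming the `del/edit_del_basic` subtree shown above; the helper name `check_layout` is hypothetical and not part of the benchmark's scripts.

```python
from pathlib import Path

def check_layout(root):
    """Return the names of required entries missing under del/edit_del_basic."""
    base = Path(root) / "del" / "edit_del_basic"
    required = ["test_parse.jsonl", "tts"]  # minimum needed by the eval script
    return [name for name in required if not (base / name).exists()]

# An empty list means the directory is ready for run_eval.sh.
print(check_layout("/path/contains/edited/audios"))
```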

Examples of `test_parse.jsonl` entries:

```json
{"uid": "00107947-00000092", "input_wav_path": "wavs/00107947-00000092.wav", "output_wav_path": "edited_wavs/00107947-00000092.wav", "instruction": "Please recognize the language of this speech and transcribe it. And delete '随着经济的发'.\n", "asr_label": "随着经济的发展食物浪费也随之增长", "asr_text": "随着经济的发展食物浪费也随之增长", "edited_text_label": "展食物浪费也随之增长", "edited_text": "<edit></edit>展食物浪费也随之增长", "origin_speech_url": null}
{"uid": "00010823-00000019", "input_wav_path": "wavs/00010823-00000019.wav", "output_wav_path": "edited_wavs/00010823-00000019.wav", "instruction": "Please recognize the language of this speech and transcribe it. And delete the characters or words from index 4 to index 10.\n", "asr_label": "我们将为全球城市的可持续发展贡献力量", "asr_text": "我们将为全球城市的可持续发展贡献力量", "edited_text_label": "我们将持续发展贡献力量", "edited_text": "我们将<edit></edit>持续发展贡献力量", "origin_speech_url": null}
```
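As the examples show, `edited_text` marks the edited span with `<edit></edit>` tags, which lets the edited and non-edited regions be scored separately (`wer` vs `wer.noedit`). A minimal sketch of reading these entries and splitting on the markers; the helper names are hypothetical.

```python
import json

def load_entries(path):
    """Read one JSON object per line, as in test_parse.jsonl."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]

def split_regions(edited_text):
    """Split around the <edit></edit> span: (prefix, edited, suffix)."""
    prefix, _, rest = edited_text.partition("<edit>")
    edited, _, suffix = rest.partition("</edit>")
    return prefix, edited, suffix

# For a deletion, the span between the markers is empty.
print(split_regions("我们将<edit></edit>持续发展贡献力量"))
```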

### Acoustic Editing

For the acoustic editing tasks, we use WER and speaker similarity (SPK-SIM) as the primary evaluation metrics. These two metrics can be calculated by running:

```bash
bash eval_scripts/acoustic/cal_wer_sim.sh /path/contains/edited/audios
```
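SPK-SIM is conventionally the cosine similarity between speaker embeddings of the source and edited audio; the embedding extractor (a speaker-verification model) is assumed here and not shown. A minimal sketch of the similarity itself:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two speaker-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

print(cosine_similarity([1.0, 0.0], [1.0, 0.0]))  # 1.0 (identical speakers)
```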

Additionally, for the dialect and emotion conversion tasks, we assess the conversion accuracy by querying a large language model (LLM) through API calls:

```bash
# dialect conversion accuracy
python eval_scripts/acoustic/pyscripts/dialect_api.py --output_dir <root directory for the evaluation results> --generated_audio_dir <directory containing the generated audio files>

# emotion conversion accuracy
# first, run: bash eval_scripts/acoustic/cal_wer_sim.sh /path/contains/edited/audios
python pyscripts/emo_acc.py
```
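Conversion accuracy then reduces to the fraction of samples the LLM judges as matching the target dialect or emotion. A minimal sketch of the aggregation; the boolean-judgment format is an assumption, not the scripts' actual output schema.

```python
def conversion_accuracy(judgments):
    """judgments: booleans, True if the LLM judged the conversion successful."""
    return sum(judgments) / len(judgments) if judgments else 0.0

print(conversion_accuracy([True, True, False, True]))  # 0.75
```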