---
license: apache-2.0
---
<!-- ![WenetSpeech-Yue](https://huggingface.co/datasets/ASLP-lab/WenetSpeech-Yue/resolve/main/wenetspeech_pipe.svg) -->
## 👉🏻 WenetSpeech-Yue 👈🏻
**WenetSpeech-Yue**: [Demos](https://aslp-lab.github.io/WenetSpeech-Yue/); [Paper](https://arxiv.org/abs/2509.03959); [Github](https://github.com/ASLP-lab/WenetSpeech-Yue); [HuggingFace](https://huggingface.co/datasets/ASLP-lab/WenetSpeech-Yue)
## Highlight🔥
**WenetSpeech-Yue TTS Models** have been released!
This repository provides two Cantonese TTS models:
1. **ASLP-lab/Cosyvoice2-Yue**: The base model for Cantonese TTS.
2. **ASLP-lab/Cosyvoice2-Yue-ZoengJyutGaai**: A fine-tuned, higher-quality version for more natural speech generation.
## Install
**Clone and install**
- Clone the repo
``` sh
git clone https://github.com/ASLP-lab/WenetSpeech-Yue.git
cd WenetSpeech-Yue/CosyVoice2-Yue
```
- Create Conda env:
``` sh
conda create -n cosyvoice python=3.10
conda activate cosyvoice
# pynini is required by WeTextProcessing; install it with conda so it works on all platforms.
conda install -y -c conda-forge pynini==2.1.5
pip install -r requirements.txt -i https://mirrors.aliyun.com/pypi/simple/ --trusted-host=mirrors.aliyun.com
```
**Model download**
``` python
from huggingface_hub import snapshot_download
snapshot_download('ASLP-lab/Cosyvoice2-Yue', local_dir='pretrained_models/Cosyvoice2-Yue')
snapshot_download('ASLP-lab/Cosyvoice2-Yue-ZoengJyutGaai', local_dir='pretrained_models/Cosyvoice2-Yue-ZoengJyutGaai')
```
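If you prefer the command line, the same checkpoints can be fetched with the `huggingface-cli` tool that ships with `huggingface_hub` (a sketch of the equivalent download; the target directories mirror the Python example above):
``` sh
# Equivalent CLI download (assumes huggingface_hub and its CLI are available in the environment)
huggingface-cli download ASLP-lab/Cosyvoice2-Yue --local-dir pretrained_models/Cosyvoice2-Yue
huggingface-cli download ASLP-lab/Cosyvoice2-Yue-ZoengJyutGaai --local-dir pretrained_models/Cosyvoice2-Yue-ZoengJyutGaai
```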
**Usage**
``` python
import sys
sys.path.append('third_party/Matcha-TTS')
from cosyvoice.cli.cosyvoice import CosyVoice, CosyVoice2
from cosyvoice.utils.file_utils import load_wav
import torchaudio
import opencc
# Convert Simplified Chinese input to Traditional Chinese (s2t),
# which matches the Cantonese text frontend used by the models.
converter = opencc.OpenCC('s2t.json')
cosyvoice_base = CosyVoice2(
    'pretrained_models/Cosyvoice2-Yue',
    load_jit=False, load_trt=False, load_vllm=False, fp16=False
)
cosyvoice_zjg = CosyVoice2(
    'pretrained_models/Cosyvoice2-Yue-ZoengJyutGaai',
    load_jit=False, load_trt=False, load_vllm=False, fp16=False
)
# Reference speech used as the timbre prompt (16 kHz).
prompt_speech_16k = load_wav('asset/sg_017_090.wav', 16000)
# Cantonese example text: "Receiving a birthday gift sent from afar by a friend, the unexpected
# surprise and heartfelt wishes filled my heart with sweet joy, and my smile bloomed like a flower."
text = '收到朋友从远方寄嚟嘅生日礼物,嗰份意外嘅惊喜同埋深深嘅祝福令我心入面充满咗甜蜜嘅快乐,笑容好似花咁绽放。'
text = converter.convert(text)
# The instruct prompt '用粤语说这句话' means "Say this sentence in Cantonese".
for i, j in enumerate(cosyvoice_base.inference_instruct2(text, '用粤语说这句话', prompt_speech_16k, stream=False)):
    torchaudio.save('base_{}.wav'.format(i), j['tts_speech'], cosyvoice_base.sample_rate)
for i, j in enumerate(cosyvoice_zjg.inference_instruct2(text, '用粤语说这句话', prompt_speech_16k, stream=False)):
    torchaudio.save('zjg_{}.wav'.format(i), j['tts_speech'], cosyvoice_zjg.sample_rate)
```
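Beyond the instruct-style call above, the `CosyVoice2` class also exposes `inference_zero_shot`, which clones the prompt speaker from a reference clip and its transcript. The snippet below is a minimal sketch, not part of the official example: it assumes the Yue checkpoints support this standard CosyVoice2 API, continues from the variables defined above, and uses a hypothetical `prompt_text` placeholder that you should replace with the real transcript of the prompt audio.
``` python
# Minimal zero-shot cloning sketch (assumes the standard CosyVoice2 inference_zero_shot API).
# 'prompt_text' is a hypothetical transcript of asset/sg_017_090.wav -- replace it with the real one.
prompt_text = '<transcript of the prompt audio>'
for i, j in enumerate(cosyvoice_zjg.inference_zero_shot(text, prompt_text, prompt_speech_16k, stream=False)):
    torchaudio.save('zero_shot_{}.wav'.format(i), j['tts_speech'], cosyvoice_zjg.sample_rate)
```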
## Contact
If you would like to get in touch with our research team, feel free to email lhli@mail.nwpu.edu.cn or gzhao@mail.nwpu.edu.cn.