Dataset schema (column — type and observed range):

- pipeline_tag: string, 48 classes
- library_name: string, 198 classes
- text: string, 1–900k chars
- metadata: string, 2–438k chars
- id: string, 5–122 chars
- last_modified: null
- tags: list, 1–1.84k items
- sha: null
- created_at: string, 25 chars
- arxiv: list, 0–201 items
- languages: list, 0–1.83k items
- tags_str: string, 17–9.34k chars
- text_str: string, 0–389k chars
- text_lists: list, 0–722 items
- processed_texts: list, 1–723 items
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/reverb_asr_train_asr_transformer2_raw_en_char_rir_scpdatareverb_rir_singlewav.scp_noise_db_range12_17_noise_scpdatareverb_noise_singlewav.scp_speech_volume_normalize1.0_num_workers2_rir_apply_prob0.999_noise_apply_prob1._sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4441309/
This model was trained by kamo-naoyuki using the reverb/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
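Until the official demo is published, inference can be sketched with ESPnet's pretrained-model loader (an assumption on my part: this requires `espnet` and `espnet_model_zoo` to be installed, downloads the model from the Hugging Face Hub on first use, and `sample.wav` is a placeholder for a 16 kHz mono recording):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and caches the model weights on the first call (hypothetical usage).
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki_reverb_asr_train_asr_transformer2_raw_en_char_rir_scpdata-truncated-0e9753"
)

speech, rate = soundfile.read("sample.wav")  # placeholder audio file
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```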
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["reverb"]}
|
espnet/kamo-naoyuki_reverb_asr_train_asr_transformer2_raw_en_char_rir_scpdata-truncated-0e9753
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:reverb",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/reverb_asr_train_asr_transformer4_raw_char_batch_bins16000000_accum_grad1_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4278363/
This model was trained by kamo-naoyuki using the reverb/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
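In the meantime, a minimal inference sketch using ESPnet's pretrained-model loader (an assumption: requires `espnet` and `espnet_model_zoo` installed; the model is fetched from the Hub on first use, and `sample.wav` stands in for a 16 kHz mono recording):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and caches the model weights on the first call (hypothetical usage).
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki_reverb_asr_train_asr_transformer4_raw_char_batch_bins1600-truncated-1b72bb"
)

speech, rate = soundfile.read("sample.wav")  # placeholder audio file
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```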
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["reverb"]}
|
espnet/kamo-naoyuki_reverb_asr_train_asr_transformer4_raw_char_batch_bins1600-truncated-1b72bb
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:reverb",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/timit_asr_train_asr_raw_word_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4284058/
This model was trained by kamo-naoyuki using the timit/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
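Until the demo lands, usage can be sketched with the standard ESPnet loader (assumptions: `espnet` and `espnet_model_zoo` installed, model downloaded from the Hub on first use, `sample.wav` a placeholder 16 kHz mono recording):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and caches the model weights on the first call (hypothetical usage).
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki_timit_asr_train_asr_raw_word_valid.acc.ave"
)

speech, rate = soundfile.read("sample.wav")  # placeholder audio file
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```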
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["timit"]}
|
espnet/kamo-naoyuki_timit_asr_train_asr_raw_word_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:timit",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/wsj`
♻️ Imported from https://zenodo.org/record/4003381/
This model was trained by kamo-naoyuki using the wsj/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
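As a stopgap, a minimal inference sketch via ESPnet's pretrained-model loader (assumptions: `espnet` and `espnet_model_zoo` installed; the model is downloaded from the Hub on first use; `sample.wav` is a placeholder for a 16 kHz mono recording):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and caches the model weights on the first call (hypothetical usage).
speech2text = Speech2Text.from_pretrained("espnet/kamo-naoyuki_wsj")

speech, rate = soundfile.read("sample.wav")  # placeholder audio file
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```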
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["wsj"]}
|
espnet/kamo-naoyuki_wsj
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:wsj",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/wsj_transformer2`
♻️ Imported from https://zenodo.org/record/4243201/
This model was trained by kamo-naoyuki using the wsj/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
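Pending the official demo, a minimal inference sketch with ESPnet's pretrained-model loader (assumptions: `espnet` and `espnet_model_zoo` installed; the model is downloaded from the Hub on first use; `sample.wav` is a placeholder 16 kHz mono recording):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and caches the model weights on the first call (hypothetical usage).
speech2text = Speech2Text.from_pretrained("espnet/kamo-naoyuki_wsj_transformer2")

speech, rate = soundfile.read("sample.wav")  # placeholder audio file
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```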
### Citing ESPnet
```bibtex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["wsj"]}
|
espnet/kamo-naoyuki_wsj_transformer2
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:wsj",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/kan-bayashi_csj_asr_train_asr_conformer`
This model was trained by Nelson Yalta using the csj recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 0d8cd47dd3572248b502bc831cd305e648170233
pip install -e .
cd egs2/csj/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/kan-bayashi_csj_asr_train_asr_conformer
```
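The recipe command above runs decoding inside an espnet checkout; for standalone inference, the same model can also be loaded directly (an assumption on my part: this requires `espnet` and `espnet_model_zoo` installed, downloads the model from the Hub on first use, and `sample.wav` is a placeholder for a 16 kHz mono recording):

```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Downloads and caches the model weights on the first call (hypothetical usage).
speech2text = Speech2Text.from_pretrained("espnet/kan-bayashi_csj_asr_train_asr_conformer")

speech, rate = soundfile.read("sample.wav")  # placeholder audio file
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```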
## ASR config
<details><summary>expand</summary>
```yaml
config: conf/tuning/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_char_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 47308
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 6
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
pretrain_path: []
pretrain_key: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 15000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_sp/train/speech_shape
- exp/asr_stats_raw_sp/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_sp/valid/speech_shape
- exp/asr_stats_raw_sp/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodup_sp/wav.scp
- speech
- sound
- - dump/raw/train_nodup_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/train_dev/wav.scp
- speech
- sound
- - dump/raw/train_dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- "\u306E"
- "\u3044"
- "\u3067"
- "\u3068"
- "\u30FC"
- "\u3066"
- "\u3046"
- "\u307E"
- "\u3059"
- "\u3057"
- "\u306B"
- "\u3063"
- "\u306A"
- "\u3048"
- "\u305F"
- "\u3053"
- "\u304C"
- "\u304B"
- "\u306F"
- "\u308B"
- "\u3042"
- "\u3093"
- "\u308C"
- "\u3082"
- "\u3092"
- "\u305D"
- "\u308A"
- "\u3089"
- "\u3051"
- "\u304F"
- "\u3069"
- "\u3088"
- "\u304D"
- "\u3060"
- "\u304A"
- "\u30F3"
- "\u306D"
- "\u4E00"
- "\u3055"
- "\u30B9"
- "\u8A00"
- "\u3061"
- "\u3064"
- "\u5206"
- "\u30C8"
- "\u3084"
- "\u4EBA"
- "\u30EB"
- "\u601D"
- "\u308F"
- "\u6642"
- "\u65B9"
- "\u3058"
- "\u30A4"
- "\u884C"
- "\u4F55"
- "\u307F"
- "\u5341"
- "\u30E9"
- "\u4E8C"
- "\u672C"
- "\u8A9E"
- "\u5927"
- "\u7684"
- "\u30AF"
- "\u30BF"
- "\u308D"
- "\u3070"
- "\u3087"
- "\u3083"
- "\u97F3"
- "\u51FA"
- "\u305B"
- "\u30C3"
- "\u5408"
- "\u65E5"
- "\u4E2D"
- "\u751F"
- "\u4ECA"
- "\u898B"
- "\u30EA"
- "\u9593"
- "\u8A71"
- "\u3081"
- "\u30A2"
- "\u5F8C"
- "\u81EA"
- "\u305A"
- "\u79C1"
- "\u30C6"
- "\u4E0A"
- "\u5E74"
- "\u5B66"
- "\u4E09"
- "\u30B7"
- "\u5834"
- "\u30C7"
- "\u5B9F"
- "\u5B50"
- "\u4F53"
- "\u8003"
- "\u5BFE"
- "\u7528"
- "\u6587"
- "\u30D1"
- "\u5F53"
- "\u7D50"
- "\u5EA6"
- "\u5165"
- "\u8A33"
- "\u30D5"
- "\u98A8"
- "\u30E0"
- "\u30D7"
- "\u6700"
- "\u30C9"
- "\u30EC"
- "\u30ED"
- "\u4F5C"
- "\u6570"
- "\u76EE"
- "\u30B8"
- "\u95A2"
- "\u30B0"
- "\u767A"
- "\u8005"
- "\u5B9A"
- "\u3005"
- "\u3050"
- "\u30B3"
- "\u4E8B"
- "\u624B"
- "\u5168"
- "\u5909"
- "\u30DE"
- "\u6027"
- "\u8868"
- "\u4F8B"
- "\u52D5"
- "\u8981"
- "\u5148"
- "\u524D"
- "\u610F"
- "\u90E8"
- "\u4F1A"
- "\u6301"
- "\u30E1"
- "\u5316"
- "\u9054"
- "\u4ED8"
- "\u5F62"
- "\u73FE"
- "\u4E94"
- "\u30AB"
- "\u3079"
- "\u53D6"
- "\u56DE"
- "\u5E38"
- "\u4F7F"
- "\u611F"
- "\u66F8"
- "\u6C17"
- "\u6CD5"
- "\u7A0B"
- "\u3071"
- "\u56DB"
- "\u591A"
- "\u8272"
- "\u30BB"
- "\u7406"
- "\u975E"
- "\u30D0"
- "\u58F0"
- "\u5358"
- "\u756A"
- "\uFF21"
- "\u6210"
- "\u540C"
- "\u901A"
- "\u30A3"
- "\u679C"
- "\u30AD"
- "\u554F"
- "\u984C"
- "\u69CB"
- "\u56FD"
- "\u6765"
- "\u9AD8"
- "\u6B21"
- "\u9A13"
- "\u3052"
- "\u30C1"
- "\u4EE5"
- "\u3054"
- "\u4EE3"
- "\u30E2"
- "\u30AA"
- "\u51C4"
- "\u7279"
- "\u77E5"
- "\u30E5"
- "\u7269"
- "\u660E"
- "\u70B9"
- "\u5473"
- "\u767E"
- "\u89E3"
- "\u8FD1"
- "\u8B58"
- "\u5730"
- "\u540D"
- "\u805E"
- "\u4E0B"
- "\u5C0F"
- "\u6559"
- "\u30B5"
- "\u70BA"
- "\u4E5D"
- "\u30D6"
- "\u5BB6"
- "\u30CB"
- "\u521D"
- "\u30D9"
- "\u30E7"
- "\u5C11"
- "\u8A8D"
- "\u8AD6"
- "\u529B"
- "\u516D"
- "\u30D3"
- "\u60C5"
- "\u7FD2"
- "\u30A6"
- "\u7ACB"
- "\u5FC3"
- "\u8ABF"
- "\u5831"
- "\u30A8"
- "\uFF24"
- "\uFF2E"
- "\u793A"
- "\u793E"
- "\u9055"
- "\u969B"
- "\u3056"
- "\u8AAC"
- "\u5FDC"
- "\u98DF"
- "\u72B6"
- "\u9577"
- "\u7814"
- "\u6821"
- "\u5185"
- "\u639B"
- "\u30DF"
- "\u5916"
- "\u5411"
- "\u80FD"
- "\u516B"
- "\u9762"
- "\u7A76"
- "\u7136"
- "\u3073"
- "\u30D4"
- "\u4E3B"
- "\u4FC2"
- "\u5024"
- "\u91CD"
- "\u8A5E"
- "\u4F9B"
- "\u5F97"
- "\u5FC5"
- "\u5973"
- "\u78BA"
- "\u7D42"
- "\u30BA"
- "\u6BCD"
- "\u696D"
- "\u7387"
- "\u65B0"
- "\u6D3B"
- "\u697D"
- "\u8449"
- "\u8A08"
- "\u30CA"
- "\u3080"
- "\u6240"
- "\u4E16"
- "\u6B63"
- "\u30E3"
- "\u8A18"
- "\u671F"
- "\u5207"
- "\u3078"
- "\u6A5F"
- "\u30DA"
- "\u5343"
- "\u985E"
- "\u5143"
- "\u614B"
- "\u826F"
- "\u5728"
- "\u6709"
- "\u30C0"
- "\u4E03"
- "\uFF23"
- "\u5225"
- "\u30EF"
- "\u691C"
- "\u7D9A"
- "\u9078"
- "\u57FA"
- "\u76F8"
- "\u6708"
- "\u4FA1"
- "\u7D20"
- "\u4ED6"
- "\u6BD4"
- "\u9023"
- "\u96C6"
- "\u30A7"
- "\u307B"
- "\u4F4D"
- "\u597D"
- "\uFF2D"
- "\u5F37"
- "\u4E0D"
- "\u5FA1"
- "\u6790"
- "\u30DD"
- "\u7121"
- "\u89AA"
- "\u53D7"
- "\u3086"
- "\u7F6E"
- "\u8C61"
- "\u4ED5"
- "\u5F0F"
- "\u30CD"
- "\u6307"
- "\u8AAD"
- "\u6C7A"
- "\u8ECA"
- "\u96FB"
- "\u904E"
- "\u30B1"
- "\u8A55"
- "\u5229"
- "\u6B8B"
- "\u8D77"
- "\u30CE"
- "\u7D4C"
- "\u56F3"
- "\u4F1D"
- "\u500B"
- "\u30C4"
- "\u7BC0"
- "\u9053"
- "\u5E73"
- "\u91D1"
- "\u899A"
- "\uFF34"
- "\u4F4F"
- "\u59CB"
- "\u63D0"
- "\u5B58"
- "\u5171"
- "\u30DB"
- "\u7B2C"
- "\u7D44"
- "\u89B3"
- "\u80B2"
- "\u6771"
- "\u305E"
- "\u958B"
- "\u52A0"
- "\u5F15"
- "\uFF33"
- "\u53E3"
- "\u6C34"
- "\u5BB9"
- "\u5468"
- "\u5B87"
- "\u7D04"
- "\u5B57"
- "\u3076"
- "\u9803"
- "\u3072"
- "\u5B99"
- "\u6BB5"
- "\u30BD"
- "\u97FF"
- "\u30DC"
- "\u53CB"
- "\u91CF"
- "\u6599"
- "\u3085"
- "\u5CF6"
- "\u8EAB"
- "\u76F4"
- "\u753B"
- "\u7DDA"
- "\u54C1"
- "\u5DEE"
- "\u4EF6"
- "\u9069"
- "\u5F35"
- "\u8FBA"
- "\u8FBC"
- "\u91CE"
- "\u69D8"
- "\u578B"
- "\u4E88"
- "\u7A2E"
- "\u5074"
- "\u8FF0"
- "\u5C71"
- "\u5C4B"
- "\u5E30"
- "\u30CF"
- "\u4E57"
- "\u539F"
- "\u683C"
- "\u8CEA"
- "\u666E"
- "\uFF30"
- "\u9020"
- "\u753A"
- "\u30B4"
- "\u82F1"
- "\u63A5"
- "\u304E"
- "\u6E2C"
- "\u3075"
- "\u7FA9"
- "\u4EAC"
- "\u5272"
- "\u5236"
- "\u7B54"
- "\u5404"
- "\u4FE1"
- "\u754C"
- "\u6211"
- "\u7A7A"
- "\uFF0E"
- "\u7740"
- "\u53EF"
- "\u66F4"
- "\u6D77"
- "\u4E0E"
- "\u9032"
- "\u52B9"
- "\u5F7C"
- "\u771F"
- "\u7530"
- "\u5FB4"
- "\u6D41"
- "\u5177"
- "\uFF32"
- "\u5E02"
- "\u67FB"
- "\u5B89"
- "\uFF22"
- "\u5E83"
- "\u50D5"
- "\u6CE2"
- "\u5C40"
- "\u8A2D"
- "\u7537"
- "\u767D"
- "\u30B6"
- "\u53CD"
- "\u6226"
- "\u533A"
- "\u6C42"
- "\u96D1"
- "\uFF29"
- "\u6B69"
- "\u8CB7"
- "\u982D"
- "\u7B97"
- "\u534A"
- "\u4FDD"
- "\u5E03"
- "\u96E3"
- "\uFF2C"
- "\u5224"
- "\u843D"
- "\u8DB3"
- "\u5E97"
- "\u7533"
- "\u8FD4"
- "\u30AE"
- "\u4E07"
- "\u6728"
- "\u6614"
- "\u8F03"
- "\u7D22"
- "\uFF26"
- "\u30B2"
- "\u6B86"
- "\u60AA"
- "\u5883"
- "\u548C"
- "\u907A"
- "\u57DF"
- "\u968E"
- "\u542B"
- "\u305C"
- "\u30BC"
- "\u65AD"
- "\u9650"
- "\u63A8"
- "\u4F4E"
- "\u5F71"
- "\u898F"
- "\u6319"
- "\u90FD"
- "\u307C"
- "\u6848"
- "\u4EEE"
- "\u88AB"
- "\u547C"
- "\u30A1"
- "\u96E2"
- "\u7CFB"
- "\u79FB"
- "\u30AC"
- "\u5DDD"
- "\u6E96"
- "\u904B"
- "\u6761"
- "\u5FF5"
- "\u6C11"
- "\uFF27"
- "\u7236"
- "\u75C5"
- "\u79D1"
- "\u4E21"
- "\u7531"
- "\u8A66"
- "\u56E0"
- "\u547D"
- "\u795E"
- "\uFF28"
- "\u7570"
- "\u7C21"
- "\u53E4"
- "\u6F14"
- "\u5897"
- "\u51E6"
- "\u8B70"
- "\u7DD2"
- "\u7CBE"
- "\u6613"
- "\u53F7"
- "\u65CF"
- "\u52FF"
- "\u60F3"
- "\u5217"
- "\u5C0E"
- "\u8EE2"
- "\u54E1"
- "\u30E6"
- "\u6BCE"
- "\u8996"
- "\u4E26"
- "\u98DB"
- "\u4F3C"
- "\u6620"
- "\u7D71"
- "\u4EA4"
- "\u30D2"
- "\u6B4C"
- "\u5F85"
- "\u8CC7"
- "\u8907"
- "\u8AA4"
- "\u63DB"
- "\u6A19"
- "\u6CC1"
- "\u914D"
- "\u62BD"
- "\u822C"
- "\u7403"
- "\u9006"
- "\u65C5"
- "\u6628"
- "\u9662"
- "\u99C5"
- "\u74B0"
- "\u5BDF"
- "\u516C"
- "\u6B73"
- "\u5C5E"
- "\u8F9E"
- "\u5947"
- "\u6CBB"
- "\u5E7E"
- "\u82E5"
- "\u58F2"
- "\u632F"
- "\u7686"
- "\u6CE8"
- "\u6B74"
- "\u9805"
- "\u5F93"
- "\u5747"
- "\u5F79"
- "\u9806"
- "\u53BB"
- "\u56E3"
- "\u8853"
- "\u7DF4"
- "\u6FC0"
- "\u6982"
- "\u66FF"
- "\u7B49"
- "\u98F2"
- "\u53F2"
- "\u88DC"
- "\u901F"
- "\u53C2"
- "\u65E9"
- "\u53CE"
- "\u9332"
- "\u671D"
- "\u5186"
- "\u5370"
- "\u5668"
- "\u63A2"
- "\u7D00"
- "\u9001"
- "\u6E1B"
- "\u571F"
- "\u5929"
- "\uFF2F"
- "\u50BE"
- "\u72AC"
- "\u9060"
- "\u5E2F"
- "\u52A9"
- "\u6A2A"
- "\u591C"
- "\u7523"
- "\u8AB2"
- "\u5BA2"
- "\u629E"
- "\u5712"
- "\u4E38"
- "\u50CF"
- "\u50CD"
- "\u6750"
- "\u5DE5"
- "\u904A"
- "\u544A"
- "\u523A"
- "\u6539"
- "\u8D64"
- "\u8074"
- "\u4ECB"
- "\u8077"
- "\u53F0"
- "\u77ED"
- "\u8AB0"
- "\u7D30"
- "\u672A"
- "\u770C"
- "\u9928"
- "\u6B62"
- "\u53F3"
- "\u306C"
- "\u3065"
- "\u56F2"
- "\u8A0E"
- "\u6B7B"
- "\u5EFA"
- "\u592B"
- "\u7AE0"
- "\u964D"
- "\u666F"
- "\u706B"
- "\u30A9"
- "\u9E97"
- "\u8B1B"
- "\u72EC"
- "\u5DE6"
- "\u5C64"
- "\uFF25"
- "\u5C55"
- "\u653F"
- "\u5099"
- "\u4F59"
- "\u7D76"
- "\u5065"
- "\u518D"
- "\u9580"
- "\u5546"
- "\u52DD"
- "\u52C9"
- "\u82B1"
- "\u30E4"
- "\u8EF8"
- "\u97FB"
- "\u66F2"
- "\u6574"
- "\u652F"
- "\u6271"
- "\u53E5"
- "\u6280"
- "\u5317"
- "\u30D8"
- "\u897F"
- "\u5247"
- "\u4FEE"
- "\u6388"
- "\u9031"
- "\u5BA4"
- "\u52D9"
- "\u9664"
- "\u533B"
- "\u6563"
- "\u56FA"
- "\u7AEF"
- "\u653E"
- "\u99AC"
- "\u7A4D"
- "\u8208"
- "\u592A"
- "\u5ACC"
- "\u9F62"
- "\u672B"
- "\u7D05"
- "\u6E90"
- "\u6E80"
- "\u5931"
- "\u5BDD"
- "\u6D88"
- "\u6E08"
- "\u4FBF"
- "\u983C"
- "\u4F01"
- "\u5B8C"
- "\u4F11"
- "\u9752"
- "\u7591"
- "\u8D70"
- "\u6975"
- "\u767B"
- "\u8AC7"
- "\u6839"
- "\u6025"
- "\u512A"
- "\u7D75"
- "\u623B"
- "\u5E2B"
- "\u5F59"
- "\u6DF7"
- "\u8DEF"
- "\u7E70"
- "\uFF2B"
- "\u8A3C"
- "\u713C"
- "\u6562"
- "\u5BB3"
- "\u96F6"
- "\u6253"
- "\u82E6"
- "\u7701"
- "\u7D19"
- "\u5C02"
- "\u8DDD"
- "\u9854"
- "\u8D8A"
- "\u4E89"
- "\u56F0"
- "\u5BC4"
- "\u5199"
- "\u4E92"
- "\u6DF1"
- "\u5A5A"
- "\u7DCF"
- "\u89A7"
- "\u80CC"
- "\u7BC9"
- "\u6E29"
- "\u8336"
- "\u62EC"
- "\u8CA0"
- "\u590F"
- "\u89E6"
- "\u7D14"
- "\u9045"
- "\u58EB"
- "\u96A3"
- "\u6050"
- "\u91C8"
- "\u967A"
- "\u5150"
- "\u5BBF"
- "\u6A21"
- "\u77F3"
- "\u983B"
- "\u5B09"
- "\u5EA7"
- "\u7642"
- "\u7E4B"
- "\uFF38"
- "\u5C06"
- "\u8FFD"
- "\u5EAD"
- "\u6238"
- "\u5371"
- "\u5BC6"
- "\u5DF1"
- "\u9014"
- "\u7BC4"
- "\u99C4"
- "\u7D39"
- "\u4EFB"
- "\u968F"
- "\u5357"
- "\uFF11"
- "\u5EB7"
- "\u9818"
- "\u5FD8"
- "\u3045"
- "\u59FF"
- "\u7F8E"
- "\u55B6"
- "\u6349"
- "\u65E2"
- "\u7167"
- "\uFF2A"
- "\u4EF2"
- "\u9152"
- "\u52E2"
- "\u9ED2"
- "\u5149"
- "\u6E21"
- "\u75DB"
- "\u62C5"
- "\u5F31"
- "\u307D"
- "\uFF36"
- "\u7D0D"
- "\u629C"
- "\u5E45"
- "\u6D17"
- "\u7A81"
- "\u671B"
- "\u5373"
- "\u9858"
- "\u7565"
- "\uFF12"
- "\u9811"
- "\u5FD7"
- "\u5B85"
- "\u7247"
- "\u656C"
- "\u6751"
- "\u60B2"
- "\u81A8"
- "\u89D2"
- "\u30E8"
- "\u4F9D"
- "\u8A73"
- "\u5F8B"
- "\u9B5A"
- "\u52B4"
- "\u5A66"
- "\u6163"
- "\u732B"
- "\u5019"
- "\u8001"
- "\u558B"
- "\u79F0"
- "\u796D"
- "\u7FA4"
- "\u7E2E"
- "\u6C38"
- "\u616E"
- "\u5EF6"
- "\u7A3F"
- "\u611B"
- "\u8089"
- "\u9589"
- "\u8CBB"
- "\u6295"
- "\u6D3E"
- "\u81F4"
- "\u7BA1"
- "\u7C73"
- "\u5E95"
- "\u7D99"
- "\u6C0F"
- "\u690D"
- "\u501F"
- "\u5727"
- "\u52E4"
- "\u6F22"
- "\u66AE"
- "\u5F27"
- "\u88C5"
- "\u57CE"
- "\u5287"
- "\u76DB"
- "\u63F4"
- "\u9244"
- "\u8C37"
- "\u5E72"
- "\u7E26"
- "\u8A31"
- "\u6016"
- "\u9A5A"
- "\u8A8C"
- "\uFF35"
- "\u8B77"
- "\u5B88"
- "\u8033"
- "\u6B32"
- "\u8239"
- "\uFF10"
- "\u5178"
- "\u67D3"
- "\u7D1A"
- "\u98FE"
- "\u5144"
- "\u71B1"
- "\u8F09"
- "\u88FD"
- "\u5BFA"
- "\u662D"
- "\u7FFB"
- "\u5426"
- "\u5584"
- "\u62BC"
- "\u53CA"
- "\u6A29"
- "\u559C"
- "\u670D"
- "\u8CB0"
- "\u8EFD"
- "\u677F"
- "\u61B6"
- "\u98FC"
- "\u5C3E"
- "\u5FA9"
- "\u5E78"
- "\u7389"
- "\u5354"
- "\u679A"
- "\u90CE"
- "\u8840"
- "\u524A"
- "\u5922"
- "\u63A1"
- "\u6674"
- "\u6B20"
- "\u602A"
- "\u65BD"
- "\u7DE8"
- "\u98EF"
- "\u7B56"
- "\u9000"
- "\uFF39"
- "\u8349"
- "\u61F8"
- "\u6458"
- "\u58CA"
- "\u4F38"
- "\u85AC"
- "\u9996"
- "\u5BFF"
- "\u53B3"
- "\u606F"
- "\u5C45"
- "\u643A"
- "\u9F3B"
- "\u9280"
- "\u4EA1"
- "\u6CCA"
- "\u8857"
- "\u9759"
- "\u9CE5"
- "\u677E"
- "\u5F92"
- "\u969C"
- "\u7B4B"
- "\u7559"
- "\u51B7"
- "\u5C24"
- "\u68EE"
- "\u5438"
- "\u5012"
- "\u68B0"
- "\u6D0B"
- "\u821E"
- "\u6A4B"
- "\u500D"
- "\u6255"
- "\u5352"
- "\u7E04"
- "\u6C5A"
- "\u53F8"
- "\u6625"
- "\u793C"
- "\u66DC"
- "\u6545"
- "\u526F"
- "\u5F01"
- "\u5439"
- "\u85E4"
- "\u8DE1"
- "\u962A"
- "\u4E86"
- "\u91E3"
- "\u9632"
- "\u7834"
- "\u6012"
- "\u662F"
- "\u30A5"
- "\u7AF6"
- "\u8179"
- "\u4E95"
- "\u4E08"
- "\u64AE"
- "\u72ED"
- "\u5BD2"
- "\u7B46"
- "\u5965"
- "\u8C4A"
- "\u732E"
- "\u5C31"
- "\u5A18"
- "\u79D2"
- "\u6C5F"
- "\u8E0F"
- "\u8A13"
- "\u7372"
- "\u96E8"
- "\u6BBA"
- "\u57CB"
- "\u64CD"
- "\u9AA8"
- "\u8D85"
- "\u6D5C"
- "\u8B66"
- "\u7DD1"
- "\u7D61"
- "\u8133"
- "\u7B11"
- "\u6D6E"
- "\u7D66"
- "\u7126"
- "\u8A70"
- "\u878D"
- "\u738B"
- "\u5C3A"
- "\u5E7C"
- "\u820C"
- "\u663C"
- "\u88CF"
- "\u6CE3"
- "\u67C4"
- "\u9396"
- "\u62E1"
- "\u8A3A"
- "\u7DE0"
- "\u5B98"
- "\u6697"
- "\u820E"
- "\u6298"
- "\u5264"
- "\u4E73"
- "\u6B6F"
- "\u7248"
- "\u5C04"
- "\u8108"
- "\u9707"
- "\u7802"
- "\u4F34"
- "\u72AF"
- "\u4F50"
- "\u5DDE"
- "\u8FB2"
- "\u8DA3"
- "\u990A"
- "\u675F"
- "\u6E2F"
- "\u8FEB"
- "\u5F3E"
- "\u798F"
- "\u51AC"
- "\u541B"
- "\u6B66"
- "\u77AC"
- "\u67A0"
- "\u6CA2"
- "\u661F"
- "\u5BCC"
- "\u6557"
- "\u5D0E"
- "\u6355"
- "\u8377"
- "\u5F1F"
- "\u95BE"
- "\u7E54"
- "\u7C89"
- "\u725B"
- "\u8DF5"
- "\u9999"
- "\u6797"
- "\u83DC"
- "\u62CD"
- "\u63CF"
- "\u888B"
- "\u6607"
- "\u91DD"
- "\u8FCE"
- "\u585A"
- "\u5A46"
- "\uFF49"
- "\u8ECD"
- "\uFF13"
- "\uFF37"
- "\u5BC2"
- "\u8F29"
- "\u3074"
- "\u5DFB"
- "\u4E01"
- "\u504F"
- "\u79CB"
- "\u5E9C"
- "\u6CC9"
- "\u81F3"
- "\u6368"
- "\u7956"
- "\u8584"
- "\u5B97"
- "\u5FB9"
- "\u93E1"
- "\u75C7"
- "\u6CB9"
- "\u8131"
- "\u9CF4"
- "\u7AE5"
- "\u6BDB"
- "\u9077"
- "\u84CB"
- "\u58C1"
- "\u5915"
- "\u5589"
- "\u907F"
- "\u984D"
- "\u6EA2"
- "\u96F0"
- "\u4EE4"
- "\u59C9"
- "\u63E1"
- "\u3077"
- "\u523B"
- "\u62E0"
- "\u8CA1"
- "\u8FF7"
- "\u9063"
- "\u82B8"
- "\u5E8F"
- "\u76E3"
- "\u8457"
- "\u5869"
- "\u5009"
- "\u7F6A"
- "\u6F5C"
- "\u7D5E"
- "\u764C"
- "\u5BAE"
- "\u5E2D"
- "\u8F2A"
- "\u594F"
- "\u846C"
- "\u6C60"
- "\u6CBF"
- "\u5FAE"
- "\u5305"
- "\u76CA"
- "\u76AE"
- "\u4FC3"
- "\u6297"
- "\u5FEB"
- "\u66AB"
- "\u52E7"
- "\u8CA9"
- "\u8C46"
- "\u5B63"
- "\u529F"
- "\u9A12"
- "\uFF54"
- "\u97D3"
- "\u6ED1"
- "\u75B2"
- "\u9003"
- "\u9061"
- "\u5E79"
- "\u60A9"
- "\u83D3"
- "\u672D"
- "\u6804"
- "\u9177"
- "\u8B1D"
- "\u6C96"
- "\u96EA"
- "\u5360"
- "\u60D1"
- "\u63FA"
- "\u866B"
- "\u62B1"
- "\uFF4B"
- "\u5CA1"
- "\u6E9C"
- "\u8535"
- "\u7763"
- "\u6838"
- "\u4E71"
- "\u4E45"
- "\u9EC4"
- "\u9670"
- "\u7720"
- "\u7B26"
- "\u6B8A"
- "\u628A"
- "\u6291"
- "\u5E0C"
- "\u63C3"
- "\u6483"
- "\u5EAB"
- "\u5409"
- "\u6E6F"
- "\u65CB"
- "\u640D"
- "\u52AA"
- "\u64E6"
- "\u9769"
- "\u6E0B"
- "\u773C"
- "\u592E"
- "\u8CDE"
- "\u5374"
- "\u5948"
- "\u539A"
- "\u59D4"
- "\u83EF"
- "\u96A0"
- "\uFF4E"
- "\u30CC"
- "\u9BAE"
- "\u515A"
- "\u5C65"
- "\u8A98"
- "\u6469"
- "\u6162"
- "\u5442"
- "\u7206"
- "\u7BB1"
- "\u6075"
- "\u9678"
- "\u7DCA"
- "\u7E3E"
- "\u5742"
- "\u7B52"
- "\u7532"
- "\u5348"
- "\u5230"
- "\u8CAC"
- "\u5C0A"
- "\u6CF3"
- "\u6279"
- "\u7518"
- "\u5B6B"
- "\u7159"
- "\u8A2A"
- "\u50B7"
- "\u6E05"
- "\u716E"
- "\u88C1"
- "\u9694"
- "\u8ED2"
- "\uFF31"
- "\u7FBD"
- "\u5D29"
- "\u7A74"
- "\u7CD6"
- "\u707D"
- "\u5275"
- "\u6F70"
- "\u6691"
- "\u87BA"
- "\u653B"
- "\u6577"
- "\u6575"
- "\u76E4"
- "\u9732"
- "\u7A93"
- "\u63B2"
- "\u81E8"
- "\u53E9"
- "\u5145"
- "\u4FFA"
- "\u8F38"
- "\u967D"
- "\u6B27"
- "\u6687"
- "\u6B6A"
- "\u6DFB"
- "\u60A3"
- "\u5FD9"
- "\u70AD"
- "\u829D"
- "\u8EDF"
- "\u88D5"
- "\u7E01"
- "\u6F2B"
- "\u7A1A"
- "\u7968"
- "\u8A69"
- "\u5CB8"
- "\u7687"
- "\uFF4A"
- "\u6627"
- "\u5100"
- "\u5857"
- "\u8E0A"
- "\u8AF8"
- "\u6D74"
- "\u904D"
- "\u66D6"
- "\u5BE7"
- "\u99B4"
- "\u5339"
- "\u03B1"
- "\u627F"
- "\u30BE"
- "\u6383"
- "\u5375"
- "\u5999"
- "\u3043"
- "\u66B4"
- "\u62B5"
- "\u604B"
- "\u8863"
- "\u6EB6"
- "\u7DAD"
- "\u514D"
- "\u6392"
- "\u685C"
- "\u7573"
- "\u7B87"
- "\u6398"
- "\u535A"
- "\u6FC3"
- "\u7FCC"
- "\u8056"
- "\u7DB2"
- "\u885B"
- "\u64EC"
- "\u5E8A"
- "\u9178"
- "\u6669"
- "\u4E7E"
- "\u90AA"
- "\u7551"
- "\u6EDE"
- "\u5802"
- "\u7E41"
- "\u4ECF"
- "\u5FB3"
- "\u7DE9"
- "\u6A39"
- "\u6551"
- "\u633F"
- "\u68D2"
- "\u906D"
- "\u676F"
- "\u6065"
- "\u6E56"
- "\u6E09"
- "\u81D3"
- "\u8CB4"
- "\u723A"
- "\u7981"
- "\u4F75"
- "\u5263"
- "\u786C"
- "\u58C7"
- "\u80A9"
- "\u6D78"
- "\u4F0A"
- "\u5B9D"
- "\u6094"
- "\u8E8D"
- "\u6DB2"
- "\u99C6"
- "\u6D25"
- "\u307A"
- "\u6D45"
- "\u8B72"
- "\u5CA9"
- "\u9B45"
- "\u587E"
- "\u03B8"
- "\u6696"
- "\u6CB3"
- "\u8A95"
- "\u7F36"
- "\u5507"
- "\u80A2"
- "\u6328"
- "\u62F6"
- "\u7A0E"
- "\u50AC"
- "\u8A34"
- "\uFF58"
- "\u968A"
- "\u659C"
- "\u770B"
- "\uFF50"
- "\u6D66"
- "\u8352"
- "\uFF41"
- "\u71C3"
- "\u52A3"
- "\u5BA3"
- "\u8FBF"
- "\u790E"
- "\u62FE"
- "\u5C4A"
- "\u6905"
- "\u5EC3"
- "\u6749"
- "\u9AEA"
- "\u77E2"
- "\u67D4"
- "\u55AB"
- "\u73CD"
- "\u57FC"
- "\u88C2"
- "\u63B4"
- "\u59BB"
- "\u8CA7"
- "\u934B"
- "\u59A5"
- "\u59B9"
- "\u5175"
- "\uFF14"
- "\u623F"
- "\u5951"
- "\u65E8"
- "\uFF44"
- "\u0394"
- "\u5DE1"
- "\u8A02"
- "\u5F90"
- "\u8CC0"
- "\u7BED"
- "\u9810"
- "\u84C4"
- "\u8846"
- "\u5DE8"
- "\u5506"
- "\u65E6"
- "\u5531"
- "\u9047"
- "\u6E67"
- "\u8010"
- "\u96C4"
- "\u6D99"
- "\u8CB8"
- "\u822A"
- "\u5104"
- "\u5618"
- "\u6C37"
- "\u78C1"
- "\u679D"
- "\u8CAB"
- "\u61D0"
- "\u52DF"
- "\u8155"
- "\u65E7"
- "\u7AF9"
- "\u99D0"
- "\u8A72"
- "\uFF52"
- "\u5893"
- "\u518A"
- "\u80F8"
- "\u758E"
- "\u773A"
- "\uFF45"
- "\u9855"
- "\u631F"
- "\u55A7"
- "\u520A"
- "\u68C4"
- "\u990C"
- "\u67F1"
- "\u5800"
- "\u8ACB"
- "\u79D8"
- "\u6717"
- "\u96F2"
- "\u8170"
- "\u7A32"
- "\u828B"
- "\u8C9D"
- "\u5C48"
- "\u91CC"
- "\u508D"
- "\u8102"
- "\u6FC1"
- "\u54B2"
- "\u6BD2"
- "\u6EC5"
- "\u5629"
- "\u6442"
- "\u6E7E"
- "\u83CC"
- "\u8150"
- "\u5211"
- "\u5F25"
- "\u5AC1"
- "\u61A7"
- "\u4E18"
- "\u5C90"
- "\u52B1"
- "\u8CA2"
- "\u6C41"
- "\u96C7"
- "\u5076"
- "\u9774"
- "\u72D9"
- "\u719F"
- "\u900F"
- "\uFF59"
- "\u8CFC"
- "\u5319"
- "\uFF46"
- "\uFF15"
- "\u92AD"
- "\u6D12"
- "\u8A17"
- "\u809D"
- "\u963F"
- "\u80C3"
- "\uFF53"
- "\u885D"
- "\u621A"
- "\uFF4D"
- "\u84B8"
- "\u4FF3"
- "\u8972"
- "\u5265"
- "\u5BE9"
- "\u6817"
- "\u8A87"
- "\u5237"
- "\u7CF8"
- "\u90F7"
- "\u5049"
- "\u6C57"
- "\u53CC"
- "\u98FD"
- "\u77DB"
- "\u984E"
- "\u552F"
- "\u6590"
- "\u7DB4"
- "\u5B64"
- "\u90F5"
- "\u76D7"
- "\u9E7F"
- "\u8CC3"
- "\u76FE"
- "\u682A"
- "\u9ED9"
- "\u7C8B"
- "\u63DA"
- "\u9808"
- "\u7092"
- "\u9285"
- "\u5E81"
- "\u9B54"
- "\u75E9"
- "\u9802"
- "\u76BF"
- "\u970A"
- "\u5E55"
- "\u570F"
- "\u574A"
- "\u72C2"
- "\u8912"
- "\u9451"
- "\u50B5"
- "\u77AD"
- "\u565B"
- "\u5E33"
- "\u5782"
- "\u8870"
- "\u4ED9"
- "\u9EA6"
- "\u8CA8"
- "\u7AAA"
- "\u6F6E"
- "\u6FEF"
- "\u5238"
- "\u7D1B"
- "\u7384"
- "\u7C4D"
- "\uFF43"
- "\u74F6"
- "\u5DE3"
- "\u5192"
- "\u6CBC"
- "\u99D2"
- "\u5C3D"
- "\u517C"
- "\u7C97"
- "\u63BB"
- "\u80BA"
- "\u9154"
- "\uFF4C"
- "\u702C"
- "\u505C"
- "\u6F20"
- "\u673A"
- "\u916C"
- "\u4FD7"
- "\u8986"
- "\u5C3B"
- "\u9375"
- "\u5805"
- "\u6F2C"
- "\u2212"
- "\u79C0"
- "\u6885"
- "\u9042"
- "\u57F9"
- "\u871C"
- "\uFF42"
- "\u30FB"
- "\u52C7"
- "\u8ECC"
- "\u7F85"
- "\uFF3A"
- "\u5BB4"
- "\u8C5A"
- "\u7A3C"
- "\u62AB"
- "\u8CAF"
- "\u9EBB"
- "\u6C4E"
- "\u51DD"
- "\u5FE0"
- "\uFF55"
- "\u5F80"
- "\u8AE6"
- "\u8B19"
- "\u6F0F"
- "\u5410"
- "\u3047"
- "\u7652"
- "\u9663"
- "\u6D6A"
- "\u52D8"
- "\u53D9"
- "\u5200"
- "\u67B6"
- "\u57F7"
- "\u5674"
- "\u5197"
- "\u4E4F"
- "\u837B"
- "\u81ED"
- "\u708A"
- "\u598A"
- "\u808C"
- "\u8CDB"
- "\u5C0B"
- "\u9175"
- "\u757F"
- "\u5270"
- "\u706F"
- "\u8C6A"
- "\u9685"
- "\u9905"
- "\u7949"
- "\u80AF"
- "\u62DB"
- "\u7A3D"
- "\u5F6B"
- "\u5F69"
- "\u03B2"
- "\u6B04"
- "\u718A"
- "\u68CB"
- "\u6CB8"
- "\u6C88"
- "\u8339"
- "\u7ABA"
- "\u5B9C"
- "\u8217"
- "\u7CA7"
- "\u683D"
- "\u80AA"
- "\u9665"
- "\u6CE1"
- "\u95D8"
- "\u8F3F"
- "\u5353"
- "\u7070"
- "\u8F9B"
- "\u6F01"
- "\u9F13"
- "\u585E"
- "\u8CD1"
- "\u76C6"
- "\u68FA"
- "\u6311"
- "\u54F2"
- "\u9867"
- "\u8B21"
- "\u8302"
- "\u90A3"
- "\u80DE"
- "\u4F3A"
- "\u5A92"
- "\u708E"
- "\u67D0"
- "\u564C"
- "\u5203"
- "\u6F5F"
- "\u7656"
- "\u4E80"
- "\u63EE"
- "\u511F"
- "\u4E39"
- "\u7DEF"
- "\u9DB4"
- "\u4E4B"
- "\u6BB4"
- "\u4EF0"
- "\u5949"
- "\u7E2B"
- "\u75F4"
- "\u8650"
- "\u61B2"
- "\u71E5"
- "\u6DC0"
- "\uFF57"
- "\u88F8"
- "\u82BD"
- "\u63A7"
- "\u95A3"
- "\u7587"
- "\u925B"
- "\u8178"
- "\u5642"
- "\u935B"
- "\u654F"
- "\u9162"
- "\u938C"
- "\u81E3"
- "\u8E74"
- "\u5A01"
- "\u6D44"
- "\u7965"
- "\u795D"
- "\u86C7"
- "\u811A"
- "\u4F0F"
- "\u6F54"
- "\u5510"
- "\u6955"
- "\u57A3"
- "\u932F"
- "\u514B"
- "\u614C"
- "\u6BBF"
- "\u819C"
- "\u61A9"
- "\u9065"
- "\u82DB"
- "\u9676"
- "\u8997"
- "\u78E8"
- "\u624D"
- "\u5E1D"
- "\u642C"
- "\u722A"
- "\u90CA"
- "\u80A5"
- "\u819D"
- "\u62D2"
- "\u868A"
- "\u5208"
- "\u5132"
- "\uFF48"
- "\u596E"
- "\u7761"
- "\u5BEE"
- "\uFF17"
- "\u4FB5"
- "\u9B31"
- "\u635C"
- "\u6DBC"
- "\u5A20"
- "\u7363"
- "\u7C92"
- "\u963B"
- "\u6CE5"
- "\u7ADC"
- "\u91A4"
- "\u92ED"
- "\u6606"
- "\u9234"
- "\u7DBF"
- "\u830E"
- "\u8107"
- "\u7948"
- "\u8A60"
- "\u6B53"
- "\u7F70"
- "\u68DA"
- "\u83CA"
- "\u6069"
- "\u7267"
- "\u540A"
- "\u8DF3"
- "\u6DE1"
- "\u7F72"
- "\u596A"
- "\u9038"
- "\u6170"
- "\u5EB6"
- "\u9262"
- "\u8B5C"
- "\u5ECA"
- "\u5606"
- "\u62ED"
- "\u8CED"
- "\u99C1"
- "\u7F8A"
- "\u5384"
- "\u7D10"
- "\u9673"
- "\u816B"
- "\u6841"
- "\u9298"
- "\u96CC"
- "\u636E"
- "\u62DD"
- "\u60E8"
- "\u96DB"
- "\u845B"
- "\u7FA8"
- "\u609F"
- "\u76DF"
- "\u7E4A"
- "\u9192"
- "\u65EC"
- "\u6DAF"
- "\u8CC4"
- "\u6E7F"
- "\u6F02"
- "\u7D2B"
- "\u30F4"
- "\u4E9C"
- "\u8AA0"
- "\u5854"
- "\u5E4C"
- "\u80C6"
- "\u64A5"
- "\u865A"
- "\u6F64"
- "\u9699"
- "\u5F84"
- "\u6C72"
- "\u8CE2"
- "\u5BF8"
- "\u8888"
- "\u88DF"
- "\u8266"
- "\uFF19"
- "\u62D8"
- "\uFF47"
- "\u5841"
- "\u5BDB"
- "\u51A0"
- "\u614E"
- "\u971E"
- "\u731B"
- "\u67CF"
- "\u733F"
- "\u9084"
- "\u50E7"
- "\u53EB"
- "\u53F1"
- "\u72E9"
- "\u63C9"
- "\u7D2F"
- "\u5982"
- "\u7897"
- "\u6BBB"
- "\u906E"
- "\u5FCD"
- "\u6EF4"
- "\u6B96"
- "\u8D08"
- "\u74A7"
- "\u6F38"
- "\u6589"
- "\u03BC"
- "\u9686"
- "\u6176"
- "\u72A0"
- "\u7272"
- "\u5146"
- "\u576A"
- "\u6284"
- "\u65D7"
- "\u50DA"
- "\u5C3F"
- "\u51CD"
- "\u902E"
- "\u7B39"
- "\u8F1D"
- "\u5C1A"
- "\u8015"
- "\u51CC"
- "\u632B"
- "\u4F10"
- "\u7BB8"
- "\u4E91"
- "\u5968"
- "\u819A"
- "\u9010"
- "\u03B3"
- "\u5F26"
- "\u9700"
- "\u5C01"
- "\u5E3D"
- "\u6F31"
- "\u9283"
- "\u507D"
- "\u5875"
- "\u7E1B"
- "\u58A8"
- "\u6020"
- "\u96F7"
- "\u5766"
- "\u68A8"
- "\u90ED"
- "\u7A4F"
- "\u67FF"
- "\u7AFF"
- "\u5E61"
- "\u5F81"
- "\u99B3"
- "\u9EBA"
- "\u03C4"
- "\u8154"
- "\u7C98"
- "\u7409"
- "\u731F"
- "\u4EC1"
- "\u8358"
- "\u6492"
- "\u7C3F"
- "\u90E1"
- "\u7B4C"
- "\u5D8B"
- "\u6FE1"
- "\u618E"
- "\u5446"
- "\u6F15"
- "\u5A29"
- "\u68DF"
- "\u6052"
- "\uFF18"
- "\u5553"
- "\u5B5D"
- "\u67F3"
- "\u64A4"
- "\u85CD"
- "\u95C7"
- "\u5B22"
- "\u67F4"
- "\u6734"
- "\u6D1E"
- "\u5CB3"
- "\u9B3C"
- "\u8DE8"
- "\u3049"
- "\u70C8"
- "\u559A"
- "\u6F84"
- "\u6FEB"
- "\u82A6"
- "\u62D3"
- "\u51FD"
- "\u6843"
- "\u76F2"
- "\u6CA1"
- "\u7A6B"
- "\u6212"
- "\u99FF"
- "\u8D05"
- "\u67AF"
- "\u6C70"
- "\u53F6"
- "\u90A6"
- "\u66C7"
- "\u9A30"
- "\u711A"
- "\u51F6"
- "\u5CF0"
- "\u69FD"
- "\u67DA"
- "\u5320"
- "\u9A19"
- "\u502B"
- "\u84EE"
- "\u634C"
- "\u61F2"
- "\u8B0E"
- "\u91B8"
- "\u56DA"
- "\u7344"
- "\u6EDD"
- "\u6795"
- "\u60DC"
- "\u7DB1"
- "\u8B33"
- "\u7089"
- "\u5DFE"
- "\u91DC"
- "\u9BAB"
- "\u6E58"
- "\u92F3"
- "\u5351"
- "\uFF51"
- "\u7DBB"
- "\u5EF7"
- "\u85A6"
- "\u667A"
- "\u6C99"
- "\u8CBF"
- "\u8098"
- "\uFF16"
- "\u5F0A"
- "\u66F0"
- "\u7881"
- "\u9DFA"
- "\u6676"
- "\u8D74"
- "\u8513"
- "\u75D2"
- "\u79E9"
- "\u5DE7"
- "\u9418"
- "\u7B1B"
- "\u638C"
- "\u53EC"
- "\u5347"
- "\u6249"
- "\u5A2F"
- "\u8A1F"
- "\u8247"
- "\u64B2"
- "\uFF56"
- "\u6182"
- "\u90B8"
- "\u5098"
- "\u7CDE"
- "\u03BB"
- "\u5C16"
- "\u723D"
- "\u7832"
- "\u55A9"
- "\u80CE"
- "\u84B2"
- "\u9DF9"
- "\u755C"
- "\u6897"
- "\uFF4F"
- "\u5023"
- "\u6247"
- "\u7DFB"
- "\u6756"
- "\u622F"
- "\u5D50"
- "\u6A3D"
- "\u6F06"
- "\u9CE9"
- "\u039B"
- "\u5FAA"
- "\u8896"
- "\u9784"
- "\u6851"
- "\u5D16"
- "\u59A8"
- "\u66A6"
- "\u59D3"
- "\u7A00"
- "\u3041"
- "\u920D"
- "\u9727"
- "\u9837"
- "\u8105"
- "\u7B20"
- "\u86CD"
- "\u8328"
- "\u69CD"
- "\u3062"
- "\u59EB"
- "\u6ABB"
- "\u8463"
- "\u6C7D"
- "\u541F"
- "\u807E"
- "\u73E0"
- "\u62B9"
- "\u9D28"
- "\u64AB"
- "\u8607"
- "\u7AC3"
- "\u864E"
- "\u78EF"
- "\u77E9"
- "\u7CCA"
- "\u55AA"
- "\u8A6E"
- "\u82D1"
- "\u98F4"
- "\u6089"
- "\u674F"
- "\u9B42"
- "\u914C"
- "\u9BC9"
- "\u8A50"
- "\u03A3"
- "\u7815"
- "\u55DC"
- "\u7FFC"
- "\u4F0E"
- "\u751A"
- "\u5F66"
- "\u961C"
- "\u8706"
- "\u6109"
- "\u80F4"
- "\u8776"
- "\u8B00"
- "\u9271"
- "\u75E2"
- "\u73ED"
- "\u9438"
- "\u92F8"
- "\u62D9"
- "\u6068"
- "\u4EAD"
- "\u4EAB"
- "\u75AB"
- "\u5F13"
- "\u74E6"
- "\u7D46"
- "\u814E"
- "\u62F3"
- "\u9A0E"
- "\u58B3"
- "\u83F1"
- "\u6813"
- "\u5256"
- "\u6D2A"
- "\u5484"
- "\u9591"
- "\u58EE"
- "\u9945"
- "\u65ED"
- "\u8987"
- "\u80A1"
- "\u86D9"
- "\u724C"
- "\u965B"
- "\u714E"
- "\u63AC"
- "\u9AED"
- "\u9019"
- "\u5E7B"
- "\u54B3"
- "\u6E26"
- "\u55C5"
- "\u7A42"
- "\u7434"
- "\u5FCC"
- "\u70CF"
- "\u5448"
- "\u91D8"
- "\u611A"
- "\u6C3E"
- "\u8AFE"
- "\u6E9D"
- "\u7336"
- "\u7AAF"
- "\u8ACF"
- "\u8CC2"
- "\u57C3"
- "\u51F8"
- "\u7D0B"
- "\u6ADB"
- "\u525B"
- "\u98E2"
- "\u4FCA"
- "\u54C0"
- "\u5BB0"
- "\u93AE"
- "\u7435"
- "\u7436"
- "\u96C5"
- "\u8494"
- "\u85AA"
- "\u8A93"
- "\u59EA"
- "\u62D7"
- "\u8778"
- "\u7169"
- "\u7B51"
- "\u690E"
- "\u4FB6"
- "\u553E"
- "\u7BAA"
- "\u5075"
- "\u8861"
- "\u03C3"
- "\u88FE"
- "\u95B2"
- "\u805A"
- "\u4E3C"
- "\u633D"
- "\u7E4D"
- "\u82D7"
- "\u9E93"
- "\u03C6"
- "\u03B4"
- "\u4E32"
- "\u51E1"
- "\u5F18"
- "\u85FB"
- "\u61C7"
- "\u817F"
- "\u7A9F"
- "\u6803"
- "\u6652"
- "\u5E84"
- "\u7891"
- "\u7B4F"
- "\u7B25"
- "\u5E06"
- "\u96B7"
- "\u8FB0"
- "\u75BE"
- "\u8FE6"
- "\u8A6B"
- "\u5617"
- "\u582A"
- "\u6842"
- "\u5B9B"
- "\u58F7"
- "\u8AED"
- "\u97AD"
- "\u9310"
- "\u6DF5"
- "\u79E4"
- "\u7525"
- "\u4F8D"
- "\u66FD"
- "\u6572"
- "\u63AA"
- "\u6168"
- "\u83E9"
- "\u5CE0"
- "\u901D"
- "\u5F70"
- "\u67F5"
- "\u82AF"
- "\u7C50"
- "\u57A2"
- "\u03BE"
- "\u77EF"
- "\u8C8C"
- "\u8F44"
- "\u8A89"
- "\u9813"
- "\u7D79"
- "\u9E78"
- "\u5E7D"
- "\u6881"
- "\u642D"
- "\u54BD"
- "\u82B3"
- "\u7729"
- "\u0393"
- "\u61A4"
- "\u7985"
- "\u6063"
- "\u5840"
- "\u7149"
- "\u75FA"
- "\uFF06"
- "\u7A40"
- "\u545F"
- "\u918D"
- "\u9190"
- "\u7901"
- "\u51F9"
- "\u86EE"
- "\u5974"
- "\u64AD"
- "\u7E79"
- "\u8499"
- "\u8A63"
- "\u4E5F"
- "\u5420"
- "\u4E59"
- "\u8E8A"
- "\u8E87"
- "\u9D2C"
- "\u7A92"
- "\u59E5"
- "\u9326"
- "\u694A"
- "\u8017"
- "\u6F09"
- "\u60E7"
- "\u4FE3"
- "\u6876"
- "\u5CFB"
- "\u905C"
- "\u65FA"
- "\u75D5"
- "\u03A6"
- "\u6234"
- "\u658E"
- "\u8CD3"
- "\u7BC7"
- "\u8429"
- "\u85E9"
- "\u7950"
- "\u8B83"
- "\u83AB"
- "\u9C39"
- "\u85A9"
- "\u5378"
- "\u4E9B"
- "\u75B9"
- "\u8E44"
- "\u4E56"
- "\uFF5A"
- "\u92FC"
- "\u6A3A"
- "\u5B8F"
- "\u7BE4"
- "\u8258"
- "\u81B3"
- "\u7A83"
- "\u7E82"
- "\u5598"
- "\u786B"
- "\u99D5"
- "\u7261"
- "\u732A"
- "\u62D0"
- "\u60DA"
- "\u60A0"
- "\u7CE7"
- "\u95A5"
- "\u03C0"
- "\u853D"
- "\u6850"
- "\u981A"
- "\u9214"
- "\u697C"
- "\u8C9E"
- "\u602F"
- "\u817A"
- "\u8305"
- "\u6CF0"
- "\u9913"
- "\u5C51"
- "\u9BDB"
- "\u929B"
- "\u9AB8"
- "\u9C57"
- "\u5824"
- "\u9675"
- "\u6DD8"
- "\u64C1"
- "\u81FC"
- "\u6D32"
- "\u8FBB"
- "\u8A23"
- "\u5C4F"
- "\u9BE8"
- "\u895F"
- "\u5CE1"
- "\u660C"
- "\u982C"
- "\u5806"
- "\u865C"
- "\u840E"
- "\u9EB9"
- "\u7CE0"
- "\u68B1"
- "\u8AFA"
- "\u5403"
- "\u66A2"
- "\u5B54"
- "\u5EB8"
- "\u5DF3"
- "\u589C"
- "\u85AE"
- "\u6101"
- "\u664B"
- "\u8236"
- "\u8FC5"
- "\u6B3A"
- "\u9640"
- "\u7709"
- "\u6CC4"
- "\u59FB"
- "\u9688"
- "\u58CC"
- "\u69D9"
- "\u5E87"
- "\u52D2"
- "\u6E07"
- "\u91E7"
- "\u4E43"
- "\u82D4"
- "\u9306"
- "\u58D5"
- "\u78D0"
- "\u6962"
- "\u65A7"
- "\u5E63"
- "\u03B7"
- "\u7E55"
- "\u83C5"
- "\u7109"
- "\u5112"
- "\u5D07"
- "\u8276"
- "\u5449"
- "\u7984"
- "\u54C9"
- "\u68AF"
- "\u5937"
- "\u546A"
- "\u56C3"
- "\u84BC"
- "\u9A28"
- "\u9D3B"
- "\u862D"
- "\u7CA5"
- "\u7D3A"
- "\u7D17"
- "\u7164"
- "\u03C9"
- "\u52FE"
- "\u97A0"
- "\u4F3D"
- "\u7AAE"
- "\u6E15"
- "\u0392"
- "\u8D66"
- "\u6597"
- "\u66F9"
- "\u8CE0"
- "\u5CAC"
- "\u847A"
- "\u7D33"
- "\u5B8D"
- "\u6191"
- "\u6357"
- "\u7C9B"
- "\u8CCA"
- "\u9F8D"
- "\u81C6"
- "\u6C8C"
- "\u52C5"
- "\u8096"
- "\u559D"
- "\u8CAA"
- "\u82AD"
- "\u8549"
- "\u919C"
- "\u64B9"
- "\u5740"
- "\u7BE0"
- "\u7D2C"
- "\u75B1"
- "\u52F2"
- "\u86FE"
- "\u88B4"
- "\u8749"
- "\u685F"
- "\u4FF5"
- "\u818F"
- "\u5DF7"
- "\u5072"
- "\u6148"
- "\u754F"
- "\u96BB"
- "\u606D"
- "\u64B0"
- "\u9D0E"
- "\u52AB"
- "\u63C6"
- "\u914E"
- "\u8106"
- "\u6241"
- "\u9761"
- "\u8511"
- "\u95CA"
- "\u96BC"
- "\u6CCC"
- "\u5996"
- "\u65A1"
- "\u52C3"
- "\u637B"
- "\u6E13"
- "\u937E"
- "\u5954"
- "\u6155"
- "\u5984"
- "\u6A0B"
- "\u936C"
- "\u502D"
- "\u8679"
- "\u03BD"
- "\u60A6"
- "\u8151"
- "\u62EE"
- "\u51E0"
- "\u80E1"
- "\u8FC2"
- "\u8EAF"
- "\u50ED"
- "\u6ECB"
- "\u7B8B"
- "\u75F0"
- "\u65AC"
- "\u85AB"
- "\u673D"
- "\u82A5"
- "\u9756"
- "\u907C"
- "\u6591"
- "\u7953"
- "\u5B95"
- "\u976D"
- "\u72D7"
- "\u81BF"
- "\u59AC"
- "\u5A7F"
- "\u7554"
- "\u7AEA"
- "\u9D5C"
- "\u8CE6"
- "\u7E1E"
- "\u6731"
- "\u7C95"
- "\u69FB"
- "\u6D69"
- "\u511A"
- "\u8CDC"
- "\u8B39"
- "\u68B5"
- "\u5A9B"
- "\u7947"
- "\u5516"
- "\u03C8"
- "\u03C1"
- "\u5A9A"
- "\u540E"
- "\u6FB1"
- "\u7DBE"
- "\u6372"
- "\u67E9"
- "\u6DF3"
- "\u74DC"
- "\u5631"
- "\u51B4"
- "\u6115"
- "\u9211"
- "\u51B6"
- "\u67A2"
- "\u03A9"
- "\u77B0"
- "\u6775"
- "\u5EB5"
- "\u4F2F"
- "\u840C"
- "\u5609"
- "\u4FC4"
- "\u7D06"
- "\u81A0"
- "\u7252"
- "\u8EB0"
- "\u543E"
- "\u50FB"
- "\u704C"
- "\u646F"
- "\u5091"
- "\u929A"
- "\u8B90"
- "\u8910"
- "\u8FB1"
- "\u7345"
- "\u7B94"
- "\u73A9"
- "\u4F43"
- "\u583A"
- "\u5504"
- "\u515C"
- "\u62CC"
- "\u5751"
- "\u75D8"
- "\u69CC"
- "\u77B3"
- "\u79BF"
- "\u66D9"
- "\u5DF2"
- "\u7FC1"
- "\u5C3C"
- "\u60BC"
- "\u7F77"
- "\u699C"
- "\u5451"
- "\u79E6"
- "\u533F"
- "\u03BA"
- "\u7259"
- "\u4F46"
- "\u572D"
- "\u548E"
- "\u745E"
- "\u7A1C"
- "\u785D"
- "\u6BC5"
- "\u7015"
- "\u8702"
- "\u978D"
- "\u6A2B"
- "\u7566"
- "\u660F"
- "\u755D"
- "\u4FAE"
- "\u548B"
- "\u6367"
- "\u7F9E"
- "\u803D"
- "\u60B8"
- "\u51E7"
- "\u4EAE"
- "\u9AC4"
- "\u54FA"
- "\u4FEF"
- "\u567A"
- "\u8058"
- "\u8654"
- "\u5B8B"
- "\u93A7"
- "\u968B"
- "\u51B3"
- "\u59D1"
- "\u7078"
- "\u927E"
- "\u8F5F"
- "\u60F0"
- "\u03C7"
- "\u643E"
- "\u6854"
- "\u7F6B"
- "\u8E4A"
- "\u68B6"
- "\u6893"
- "\u7F75"
- "\u65A5"
- "\u6276"
- "\u6147"
- "\u61C3"
- "\u9949"
- "\u6E25"
- "\u6AD3"
- "\u80E4"
- "\u56A2"
- "\u9CF3"
- "\u6A84"
- "\u8C79"
- "\u50B2"
- "\u50D1"
- "\u7586"
- "\u6134"
- "\u53A8"
- "\u6FB9"
- "\u9320"
- "\u64E2"
- "\u6EBA"
- "\u7624"
- "\u73CA"
- "\u5BC5"
- "\u6977"
- "\u9583"
- "\u9CF6"
- "\u7119"
- "\u6912"
- "\u9B4F"
- "\u9798"
- "\u68A2"
- "\u6900"
- "\u8ACC"
- "\u696B"
- "\u5F14"
- "\u65D2"
- "\u5957"
- "\u9F5F"
- "\u9F6C"
- "\u7D18"
- "\u810A"
- "\u536F"
- "\u727D"
- "\u6BD8"
- "\u6714"
- "\u514E"
- "\u721B"
- "\u6D9C"
- "\u5851"
- "\u5F04"
- "\u676D"
- "\u63A0"
- "\u80B4"
- "\u626E"
- "\u51F1"
- "\u798D"
- "\u8036"
- "\u808B"
- "\u7235"
- "\u61AB"
- "\u57D3"
- "\u5983"
- "\u9910"
- "\u7C7E"
- "\u7262"
- "\u6816"
- "\u9017"
- "\u7058"
- "\u5E5F"
- "\u68F2"
- "\u5687"
- "\u7827"
- "\u6E1A"
- "\u7C9F"
- "\u7A7F"
- "\u7F60"
- "\u68F9"
- "\u8594"
- "\u8587"
- "\u526A"
- "\u7B48"
- "\u936E"
- "\u892A"
- "\u7AA9"
- "\u58F1"
- "\u30F2"
- "\u7460"
- "\u7483"
- "\u61BE"
- "\u5E16"
- "\u6960"
- "\u03B5"
- "\u5480"
- "\u56BC"
- "\u56A5"
- "\u6D29"
- "\u6A58"
- "\u6867"
- "\u6A9C"
- "\u63F6"
- "\u63C4"
- "\u88E1"
- "\u6A80"
- "\u900D"
- "\u9081"
- "\u6028"
- "\u73B2"
- "\u90C1"
- "\u5815"
- "\u8AB9"
- "\u8B17"
- "\u8956"
- "\u51F0"
- "\u9B41"
- "\u5B75"
- "\u7766"
- "\u71FB"
- "\u5243"
- "\u53A9"
- "\u71D7"
- "\u84D1"
- "\u5EFB"
- "\u75D4"
- "\u837C"
- "\u6190"
- "\u6070"
- "\u8F9F"
- "\u5F98"
- "\u5F8A"
- "\u4FA0"
- "\u5830"
- "\u971C"
- "\u809B"
- "\u76E7"
- "\u5835"
- "\u72DB"
- "\u9D8F"
- "\u9119"
- "\u4F73"
- "\u916A"
- "\u8AE7"
- "\u6973"
- "\u7826"
- "\u5AC9"
- "\u5DEB"
- "\u53E1"
- "\u9716"
- "\u6E23"
- "\u5544"
- "\u798E"
- "\u6CAB"
- "\u821F"
- "\u6C5D"
- "\u5302"
- "\u99F1"
- "\u6C08"
- "\u308E"
- "\u714C"
- "\u7DAC"
- "\u5F1B"
- "\u586B"
- "\u84C1"
- "\u5039"
- "\u7CFE"
- "\u51A5"
- "\u674E"
- "\u966A"
- "\u8877"
- "\u59E6"
- "\u5962"
- "\u75BC"
- "\u8A54"
- "\u8599"
- "\u8B5A"
- "\u5CEF"
- "\u684E"
- "\u688F"
- "\u9B92"
- "\u8A1B"
- "\u55B0"
- "\u7960"
- "\u67A1"
- "\u6681"
- "\u4E5E"
- "\u91C7"
- "\u9739"
- "\u9742"
- "\u687F"
- "\u929C"
- "\u4F51"
- "\u79BE"
- "\u5944"
- "\u6930"
- "\u87F9"
- "\u8061"
- "\u98AF"
- "\u30C2"
- "\u8E81"
- "\u8E42"
- "\u8E99"
- "\u8695"
- "\u693F"
- "\u62F7"
- "\u9257"
- "\u8882"
- "\u78CB"
- "\u7422"
- "\u6B3D"
- "\u60B6"
- "\u53C9"
- "\u7E37"
- "\u8A36"
- "\u50C5"
- "\u5C6F"
- "\u5EEC"
- "\u5C41"
- "\u99A8"
- "\u6E20"
- "\u8568"
- "\u699B"
- "\u675C"
- "\u7791"
- "\u6A8E"
- "\u8ECB"
- "\u8F62"
- "\u8700"
- "\u8235"
- "\u82B9"
- "\u6B3E"
- "\u639F"
- "\u8E2A"
- "\u745A"
- "\u71E6"
- "\u7D21"
- "\u584A"
- "\u8171"
- "\u6753"
- "\u65A4"
- "\u786F"
- "\u55AC"
- "\u8B04"
- "\u79DF"
- "\u8180"
- "\u80F1"
- "\u6EC4"
- "\u9C10"
- "\u8475"
- "\u8471"
- "\u8461"
- "\u5A49"
- "\u88D4"
- "\u9F0E"
- "\u9187"
- "\u67EF"
- "\u991E"
- "\u96C1"
- "\u8AA6"
- "\u8A62"
- "\u633A"
- "\u7AFA"
- "\u8A82"
- "\u5191"
- "\u8718"
- "\u86DB"
- "\u70B8"
- "\u932B"
- "\u58C5"
- "\u8087"
- "\u54AC"
- "\u9B8E"
- "\u67D1"
- "\u7D9C"
- "\u5BE1"
- "\u7977"
- "\u522E"
- "\u8CCE"
- "\u9B18"
- "\u884D"
- "\u5FD6"
- "\u685D"
- "\u0398"
- "\u039A"
- "\u03A8"
- "\u53E2"
- "\u4FCE"
- "\u7396"
- "\u78A7"
- "\u8766"
- "\u8521"
- "\u649A"
- "\u7A14"
- "\u752B"
- "\u6D35"
- "\u7893"
- "\u9ECE"
- "\u5AE1"
- "\u8755"
- "\u725F"
- "\u6B89"
- "\u6C83"
- "\u7B50"
- "\u619A"
- "\u6E24"
- "\u9B4D"
- "\u9B4E"
- "\u71ED"
- "\u7940"
- "\u6D1B"
- "\u88F3"
- "\u4E11"
- "\u9846"
- "\u9952"
- "\u5EC9"
- "\u689F"
- "\u848B"
- "\u6DD1"
- "\u8737"
- "\u9644"
- "\u695A"
- "\u9F20"
- "\u5154"
- "\u61AC"
- "\u5F57"
- "\u66FC"
- "\u5D11"
- "\u57DC"
- "\u5F77"
- "\u5F7F"
- "\u5DF4"
- "\u831C"
- "\u6D9B"
- "\u57E0"
- "\u945A"
- "\u92D2"
- "\u5C09"
- "\u53AD"
- "\u7B75"
- "\u7AE3"
- "\u7E8F"
- "\u6194"
- "\u60B4"
- "\u8E5F"
- "\u675E"
- "\u7825"
- "\u8F14"
- "\u9C52"
- "\u4FAF"
- "\u7D62"
- "\u5475"
- "\u698E"
- "\u53EA"
- "\u71D5"
- "\u5C60"
- "\u5614"
- "\u74E2"
- "\u9291"
- "\u880D"
- "\u932C"
- "\u608C"
- "\u8A1D"
- "\u7DB8"
- "\u530D"
- "\u5310"
- "\u637A"
- "\u6A59"
- "\u5BB5"
- "\u9D60"
- "\u57F4"
- "\u7690"
- "\u9021"
- "\u4FF8"
- "\u7A63"
- "\u54A4"
- "\u8309"
- "\u8389"
- "\u6643"
- "\u6EF8"
- "\u5289"
- "\u5026"
- "\u8944"
- "\u7B4D"
- "\u5239"
- "\u83BD"
- "\u9041"
- "\u66F5"
- "\u79BD"
- "\u7B67"
- "\u7E0A"
- "\u7FD4"
- "\u5BF5"
- "\u834F"
- "\u758B"
- "\u84EC"
- "\u83B1"
- "\u8EAC"
- "\u696E"
- "\u76C8"
- "\u5C13"
- "\u72FC"
- "\u85C9"
- "\u965F"
- "\u620E"
- "\u4E8E"
- "\u6F58"
- "\u8012"
- "\u5F82"
- "\u5FA0"
- "\u99AE"
- "\u5F6D"
- "\u5E47"
- "\u9087"
- "\u6CD3"
- "\u80B1"
- "\u65BC"
- "\u6602"
- "\u8E64"
- "\u7463"
- "\u9A65"
- "\u4EA8"
- "\u8AEE"
- "\u77EE"
- "\u8569"
- "\u6566"
- "\u30EE"
- "\u6208"
- "\u8229"
- "\u9B6F"
- "\u65E0"
- "\u6159"
- "\u6127"
- "\u8340"
- "\u6309"
- "\u914B"
- "\u59F6"
- "\u723E"
- "\u8602"
- "\u986B"
- "\u593E"
- "\u59DA"
- "\u701D"
- "\u6FD8"
- "\u964B"
- "\u777E"
- "\u5B30"
- "\u5DBA"
- "\u821B"
- "\u7B65"
- "\u95A4"
- "\u68D8"
- "\u9812"
- "\u59BE"
- "\u8B2C"
- "\u4F0D"
- "\u537F"
- "\u8FEA"
- "\u5686"
- "\u60F9"
- "\u80DA"
- "\u6C6A"
- "\u543B"
- "\u9B51"
- "\u8F3B"
- "\u59C6"
- "\u84FC"
- "\u6AC2"
- "\u5315"
- "\u4F70"
- "\u7246"
- "\u5CD9"
- "\u725D"
- "\u9DF2"
- "\u7DCB"
- "\u7BAD"
- "\u82EB"
- "\u5366"
- "\u5B5F"
- "\u5323"
- "\u4ED4"
- "\u5D19"
- "\u6787"
- "\u6777"
- "\u81C0"
- "\u681E"
- "\u9E1E"
- "\u61FA"
- "\u55DA"
- "\u6DB8"
- "\u30C5"
- "\u8D16"
- "\u5E9A"
- "\u93D1"
- "\u9149"
- "\u670B"
- "\u70F9"
- "\u53C8"
- "\u7337"
- "\u7C00"
- "\u5B2C"
- "\u88B7"
- "\u6BB7"
- "\u51DB"
- "\u4EC0"
- "\u71FF"
- "\u5556"
- "\u7BC6"
- "\u7DD8"
- "\u5036"
- "\u6AC3"
- "\u8A03"
- "\u540F"
- "\u5CB1"
- "\u8A25"
- "\u958F"
- "\u5DBD"
- "\u722C"
- "\u618A"
- "\u7511"
- "\u6144"
- "\u5E25"
- "\u7704"
- "\u5A11"
- "\u50E5"
- "\u5016"
- "\u800C"
- "\u8F4D"
- "\u5583"
- "\u81BE"
- "\u7099"
- "\u85AF"
- "\u97EE"
- "\u4E99"
- "\u8B14"
- "\u86CE"
- "\u7425"
- "\u73C0"
- "\u698A"
- "\u7C3E"
- "\u8D6D"
- "\u8823"
- "\u8299"
- "\u8B01"
- "\u9022"
- "\u8466"
- "\u6670"
- "\u5398"
- "\u707C"
- "\u903C"
- "\u9328"
- "\u700B"
- "\u5FF8"
- "\u6029"
- "\u7165"
- "\u7B0F"
- "\u5FFD"
- "\u7708"
- "\u7DEC"
- "\u5C4D"
- "\u75BD"
- "\u6E5B"
- "\u788D"
- "\u8AE4"
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_sp/train/feats_stats.npz
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d6
normalize_before: true
macaron_style: false
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
distributed: true
```
</details>
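The `specaug` block in the config above applies time warping plus frequency and time masking. As an illustration only (this is a plain-numpy sketch of the masking portion, not ESPnet's actual implementation), masking with the same width ranges could look like:

```python
import numpy as np

def apply_specaug_masks(feats, num_freq_mask=2, freq_width=(0, 30),
                        num_time_mask=2, time_width=(0, 40), rng=None):
    """Zero out random frequency bands and time spans, mirroring the
    freq_mask_width_range / time_mask_width_range values in the config.

    feats: (time, freq) log-mel feature matrix. Returns a masked copy.
    """
    rng = rng or np.random.default_rng(0)
    out = feats.copy()
    t, f = out.shape
    for _ in range(num_freq_mask):
        w = int(rng.integers(freq_width[0], freq_width[1] + 1))
        start = int(rng.integers(0, max(1, f - w)))
        out[:, start:start + w] = 0.0
    for _ in range(num_time_mask):
        w = int(rng.integers(time_width[0], time_width[1] + 1))
        start = int(rng.integers(0, max(1, t - w)))
        out[start:start + w, :] = 0.0
    return out

feats = np.ones((200, 80))
masked = apply_specaug_masks(feats)
print(masked.shape)  # (200, 80)
```

Time warping is omitted here; ESPnet applies it on the fly before the masks shown above.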
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "jp", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["csj"]}
|
espnet/kan-bayashi_csj_asr_train_asr_conformer
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"jp",
"dataset:csj",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"jp"
] |
TAGS
#espnet #audio #automatic-speech-recognition #jp #dataset-csj #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR model
### 'espnet/kan-bayashi_csj_asr_train_asr_conformer'
This model was trained by Nelson Yalta using csj recipe in espnet.
### Demo: How to use in ESPnet2
## ASR config
<details><summary>expand</summary>
</details>
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR model",
"### 'espnet/kan-bayashi_csj_asr_train_asr_conformer'\n\nThis model was trained by Nelson Yalta using csj recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## ASR config\n\n<details><summary>expand</summary>\n\n\n\n</details>",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #jp #dataset-csj #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR model",
"### 'espnet/kan-bayashi_csj_asr_train_asr_conformer'\n\nThis model was trained by Nelson Yalta using csj recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## ASR config\n\n<details><summary>expand</summary>\n\n\n\n</details>",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kan-bayashi/csj_asr_train_asr_transformer_raw_char_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4037458/
This model was trained by kan-bayashi using csj/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
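Until the official demo lands, a minimal decoding sketch (assuming the `espnet` and `espnet_model_zoo` packages are installed and that the imported model resolves under the name below) typically looks like:

```python
# Sketch only: downloads the pretrained model over the network; the model
# name and audio file path below are assumptions, not verified values.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack(
        "kan-bayashi/csj_asr_train_asr_transformer_raw_char_sp_valid.acc.ave"
    )
)

speech, rate = soundfile.read("example.wav")  # 16 kHz mono audio
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]  # best hypothesis first
print(text)
```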
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["csj"]}
|
espnet/kan-bayashi_csj_asr_train_asr_transformer_raw_char_sp_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"ja",
"dataset:csj",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #automatic-speech-recognition #ja #dataset-csj #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'kan-bayashi/csj_asr_train_asr_transformer_raw_char_sp_valid.URL'
♻️ Imported from URL

This model was trained by kan-bayashi using csj/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'kan-bayashi/csj_asr_train_asr_transformer_raw_char_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csj/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #ja #dataset-csj #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'kan-bayashi/csj_asr_train_asr_transformer_raw_char_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csj/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/csmsc_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4031955/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
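In place of the "coming soon" demo, a minimal synthesis sketch (assuming recent `espnet` with `Text2Speech.from_pretrained` support and that this model is resolvable by the tag below) could look like:

```python
# Sketch only: requires `pip install espnet espnet_model_zoo soundfile`
# and a network connection; the model tag and input text are assumptions.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_csmsc_conformer_fastspeech2"
)
out = tts("春江潮水连海平")
soundfile.write("out.wav", out["wav"].numpy(), tts.fs)
```

FastSpeech2 predicts mel spectrograms, so a neural vocoder (or the built-in Griffin-Lim fallback) converts them to the waveform returned above.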
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/csmsc_conformer_fastspeech2'
♻️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/csmsc_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/csmsc_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/csmsc_fastspeech`
♻️ Imported from https://zenodo.org/record/3986227/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_fastspeech
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/csmsc_fastspeech'
♻️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_fastspeech'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_fastspeech'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/csmsc_fastspeech2`
♻️ Imported from https://zenodo.org/record/4031953/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/csmsc_fastspeech2'
♻️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/csmsc_full_band_vits`
♻️ Imported from https://zenodo.org/record/5443852/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_full_band_vits
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/csmsc_full_band_vits'
♻️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/csmsc_full_band_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/csmsc_full_band_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/csmsc_tacotron2`
♻️ Imported from https://zenodo.org/record/3969118/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/csmsc_tacotron2'
♻️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/csmsc_transformer`
♻️ Imported from https://zenodo.org/record/4034125/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_transformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/csmsc_transformer'
♻️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/csmsc_tts_train_conformer_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4031955/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_tts_train_conformer_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/csmsc_tts_train_conformer_fastspeech2_raw_phn_pypinyin_g2p_phone_train.URL'
♻️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tts_train_conformer_fastspeech2_raw_phn_pypinyin_g2p_phone_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tts_train_conformer_fastspeech2_raw_phn_pypinyin_g2p_phone_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/csmsc_tts_train_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4031953/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_tts_train_fastspeech2_raw_phn_pypinyin_g2p_phone_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/csmsc_tts_train_fastspeech2_raw_phn_pypinyin_g2p_phone_train.URL'
♻️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tts_train_fastspeech2_raw_phn_pypinyin_g2p_phone_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tts_train_fastspeech2_raw_phn_pypinyin_g2p_phone_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/csmsc_tts_train_fastspeech_raw_phn_pypinyin_g2p_phone_train.loss.best`
♻️ Imported from https://zenodo.org/record/3986227/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_tts_train_fastspeech_raw_phn_pypinyin_g2p_phone_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/csmsc_tts_train_fastspeech_raw_phn_pypinyin_g2p_phone_train.URL'
♻️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tts_train_fastspeech_raw_phn_pypinyin_g2p_phone_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tts_train_fastspeech_raw_phn_pypinyin_g2p_phone_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/csmsc_tts_train_full_band_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5443852/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_tts_train_full_band_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/csmsc_tts_train_full_band_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave'
♻️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/csmsc_tts_train_full_band_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/csmsc_tts_train_full_band_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/csmsc_tts_train_tacotron2_raw_phn_pypinyin_g2p_phone_train.loss.best`
♻️ Imported from https://zenodo.org/record/3969118/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
You first need to install the following packages
```bash
pip install torch
pip install espnet_model_zoo
```
Then start using it!
```python
import soundfile
from espnet2.bin.tts_inference import Text2Speech
text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_tts_train_tacotron2_raw_phn_pypinyin_g2p_phone_train.loss.best")
text = "春江潮水连海平,海上明月共潮生"
speech = text2speech(text)["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_tts_train_tacotron2_raw_phn_pypinyin_g2p_phone_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/csmsc_tts_train_tacotron2_raw_phn_pypinyin_g2p_phone_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
You first need to install the following packages
Then start using it!
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tts_train_tacotron2_raw_phn_pypinyin_g2p_phone_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2\nYou first need to install the following packages\n\nThen start using it!",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tts_train_tacotron2_raw_phn_pypinyin_g2p_phone_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2\nYou first need to install the following packages\n\nThen start using it!",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/csmsc_tts_train_transformer_raw_phn_pypinyin_g2p_phone_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4034125/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
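The placeholder above can be filled in by analogy with the working Tacotron 2 card earlier in this collection. A minimal sketch, assuming `torch` and `espnet_model_zoo` are installed and that the model is published on the Hub under the id shown in this card's metadata:

```python
# Hedged sketch: requires `pip install torch espnet_model_zoo` and downloads
# the pretrained model from the Hugging Face Hub on first use.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_csmsc_tts_train_transformer_raw_phn_pypinyin_g2p_phone_train.loss.ave"
)
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```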
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_tts_train_transformer_raw_phn_pypinyin_g2p_phone_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/csmsc_tts_train_transformer_raw_phn_pypinyin_g2p_phone_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tts_train_transformer_raw_phn_pypinyin_g2p_phone_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/csmsc_tts_train_transformer_raw_phn_pypinyin_g2p_phone_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5499120/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
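The placeholder above can be filled in by analogy with the working Tacotron 2 card earlier in this collection. A minimal sketch, assuming `torch` and `espnet_model_zoo` are installed and that the model is published on the Hub under the id shown in this card's metadata:

```python
# Hedged sketch: requires `pip install torch espnet_model_zoo` and downloads
# the pretrained VITS model from the Hugging Face Hub on first use.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave"
)
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```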
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave'
️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/csmsc_tts_train_vits_raw_phn_pypinyin_g2p_phone_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/csmsc_vits`
♻️ Imported from https://zenodo.org/record/5499120/
This model was trained by kan-bayashi using csmsc/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
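The placeholder above can be filled in by analogy with the working Tacotron 2 card earlier in this collection. A minimal sketch, assuming `torch` and `espnet_model_zoo` are installed and that the model is published on the Hub under the id shown in this card's metadata:

```python
# Hedged sketch: requires `pip install torch espnet_model_zoo` and downloads
# the pretrained VITS model from the Hugging Face Hub on first use.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_csmsc_vits")
speech = text2speech("春江潮水连海平,海上明月共潮生")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```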
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["csmsc"]}
|
espnet/kan-bayashi_csmsc_vits
| null |
[
"espnet",
"audio",
"text-to-speech",
"zh",
"dataset:csmsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/csmsc_vits'
️ Imported from URL
This model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/csmsc_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #zh #dataset-csmsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/csmsc_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using csmsc/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4032246/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
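The placeholder above can be filled in by analogy with the working Tacotron 2 card earlier in this collection. A minimal sketch, assuming `torch` and `espnet_model_zoo` are installed and that the model is published on the Hub under the id shown in this card's metadata:

```python
# Hedged sketch: requires `pip install torch espnet_model_zoo` and downloads
# the pretrained model from the Hugging Face Hub on first use.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_conformer_fastspeech2")
speech = text2speech("これは日本語音声合成のテストです。")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```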
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_conformer_fastspeech2'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_conformer_fastspeech2_accent`
♻️ Imported from https://zenodo.org/record/4381102/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
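The placeholder above can be filled in by analogy with the working Tacotron 2 card earlier in this collection. A minimal sketch, assuming `torch` and `espnet_model_zoo` are installed and that the model is published on the Hub under the id shown in this card's metadata:

```python
# Hedged sketch: requires `pip install torch espnet_model_zoo` and downloads
# the pretrained model from the Hugging Face Hub on first use.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_conformer_fastspeech2_accent")
speech = text2speech("これは日本語音声合成のテストです。")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```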
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_conformer_fastspeech2_accent
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_conformer_fastspeech2_accent'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_conformer_fastspeech2_accent'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_conformer_fastspeech2_accent'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_conformer_fastspeech2_accent_with_pause`
♻️ Imported from https://zenodo.org/record/4436448/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
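The placeholder above can be filled in by analogy with the working Tacotron 2 card earlier in this collection. A minimal sketch, assuming `torch` and `espnet_model_zoo` are installed and that the model is published on the Hub under the id shown in this card's metadata:

```python
# Hedged sketch: requires `pip install torch espnet_model_zoo` and downloads
# the pretrained model from the Hugging Face Hub on first use.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_conformer_fastspeech2_accent_with_pause"
)
speech = text2speech("これは日本語音声合成のテストです。")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```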
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_conformer_fastspeech2_accent_with_pause
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_conformer_fastspeech2_accent_with_pause'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_conformer_fastspeech2_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_conformer_fastspeech2_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_conformer_fastspeech2_tacotron2_prosody`
♻️ Imported from https://zenodo.org/record/5499050/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
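The placeholder above can be filled in by analogy with the working Tacotron 2 card earlier in this collection. A minimal sketch, assuming `torch` and `espnet_model_zoo` are installed and that the model is published on the Hub under the id shown in this card's metadata:

```python
# Hedged sketch: requires `pip install torch espnet_model_zoo` and downloads
# the pretrained model from the Hugging Face Hub on first use.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_conformer_fastspeech2_tacotron2_prosody"
)
speech = text2speech("これは日本語音声合成のテストです。")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```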
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_conformer_fastspeech2_tacotron2_prosody
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_conformer_fastspeech2_tacotron2_prosody'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_conformer_fastspeech2_tacotron2_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_conformer_fastspeech2_tacotron2_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_conformer_fastspeech2_transformer_prosody`
♻️ Imported from https://zenodo.org/record/5499066/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
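The placeholder above can be filled in by analogy with the working Tacotron 2 card earlier in this collection. A minimal sketch, assuming `torch` and `espnet_model_zoo` are installed and that the model is published on the Hub under the id shown in this card's metadata:

```python
# Hedged sketch: requires `pip install torch espnet_model_zoo` and downloads
# the pretrained model from the Hugging Face Hub on first use.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_jsut_conformer_fastspeech2_transformer_prosody"
)
speech = text2speech("これは日本語音声合成のテストです。")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```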
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_conformer_fastspeech2_transformer_prosody
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_conformer_fastspeech2_transformer_prosody'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_conformer_fastspeech2_transformer_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_conformer_fastspeech2_transformer_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_fastspeech`
♻️ Imported from https://zenodo.org/record/3986225/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
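The placeholder above can be filled in by analogy with the working Tacotron 2 card earlier in this collection. A minimal sketch, assuming `torch` and `espnet_model_zoo` are installed and that the model is published on the Hub under the id shown in this card's metadata:

```python
# Hedged sketch: requires `pip install torch espnet_model_zoo` and downloads
# the pretrained model from the Hugging Face Hub on first use.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_fastspeech")
speech = text2speech("これは日本語音声合成のテストです。")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```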
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_fastspeech
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_fastspeech'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_fastspeech'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_fastspeech'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_fastspeech2`
♻️ Imported from https://zenodo.org/record/4032224/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
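The placeholder above can be filled in by analogy with the working Tacotron 2 card earlier in this collection. A minimal sketch, assuming `torch` and `espnet_model_zoo` are installed and that the model is published on the Hub under the id shown in this card's metadata:

```python
# Hedged sketch: requires `pip install torch espnet_model_zoo` and downloads
# the pretrained model from the Hugging Face Hub on first use.
import soundfile
from espnet2.bin.tts_inference import Text2Speech

text2speech = Text2Speech.from_pretrained("espnet/kan-bayashi_jsut_fastspeech2")
speech = text2speech("これは日本語音声合成のテストです。")["wav"]
soundfile.write("out.wav", speech.numpy(), text2speech.fs, "PCM_16")
```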
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{ESPnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_fastspeech2'
♻️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_fastspeech2_accent`
♻️ Imported from https://zenodo.org/record/4381100/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_fastspeech2_accent
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_fastspeech2_accent'
♻️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_fastspeech2_accent'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_fastspeech2_accent'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_fastspeech2_accent_with_pause`
♻️ Imported from https://zenodo.org/record/4436450/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_fastspeech2_accent_with_pause
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_fastspeech2_accent_with_pause'
♻️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_fastspeech2_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_fastspeech2_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_full_band_vits_accent_with_pause`
♻️ Imported from https://zenodo.org/record/5431984/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
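Pending the official demo, a hedged sketch of synthesizing and saving audio with this end-to-end VITS model (assumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed; the model tag and output path are illustrative):

```python
# Hedged sketch: VITS is end-to-end, so Text2Speech returns a waveform
# directly; we write it out at the model's own sampling rate (tts.fs).
MODEL_TAG = "espnet/kan-bayashi_jsut_full_band_vits_accent_with_pause"  # assumed tag

def save_speech(text, path="out.wav", model_tag=MODEL_TAG):
    # Deferred imports keep the module importable without the heavy deps.
    import soundfile as sf
    from espnet2.bin.tts_inference import Text2Speech
    tts = Text2Speech.from_pretrained(model_tag)
    wav = tts(text)["wav"]
    sf.write(path, wav.view(-1).cpu().numpy(), tts.fs)
    return path

if __name__ == "__main__":
    print(save_speech("こんにちは。"))
```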
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_full_band_vits_accent_with_pause
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_full_band_vits_accent_with_pause'
♻️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_full_band_vits_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_full_band_vits_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_full_band_vits_prosody`
♻️ Imported from https://zenodo.org/record/5521340/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_full_band_vits_prosody
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_full_band_vits_prosody'
♻️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_full_band_vits_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_full_band_vits_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tacotron2`
♻️ Imported from https://zenodo.org/record/3963886/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tacotron2'
♻️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tacotron2_accent`
♻️ Imported from https://zenodo.org/record/4381098/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tacotron2_accent
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tacotron2_accent'
♻️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tacotron2_accent'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tacotron2_accent'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tacotron2_accent_with_pause`
♻️ Imported from https://zenodo.org/record/4433194/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tacotron2_accent_with_pause
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tacotron2_accent_with_pause'
♻️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tacotron2_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tacotron2_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tacotron2_prosody`
♻️ Imported from https://zenodo.org/record/5499026/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
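As a placeholder for the upcoming demo, a hedged sketch of pairing this Tacotron 2 feature generator with a separately trained vocoder (the tags are assumptions; any vocoder tag known to `espnet_model_zoo`, e.g. a Parallel WaveGAN one, should work):

```python
# Hedged sketch: Tacotron 2 emits mel features, so a vocoder_tag is
# needed to reach a waveform; with vocoder_tag=None you get features only.
MODEL_TAG = "espnet/kan-bayashi_jsut_tacotron2_prosody"  # assumed tag for this card

def build_tts(model_tag=MODEL_TAG, vocoder_tag=None):
    from espnet2.bin.tts_inference import Text2Speech
    # vocoder_tag may name a pretrained vocoder from the model zoo.
    return Text2Speech.from_pretrained(model_tag, vocoder_tag=vocoder_tag)

if __name__ == "__main__":
    tts = build_tts()
    print(type(tts).__name__)
```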
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tacotron2_prosody
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_tacotron2_prosody'
♻️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tacotron2_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tacotron2_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_transformer`
♻️ Imported from https://zenodo.org/record/4034121/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_transformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_transformer'
♻️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_transformer_accent`
♻️ Imported from https://zenodo.org/record/4381096/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_transformer_accent
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_transformer_accent'
♻️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_transformer_accent'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_transformer_accent'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_transformer_accent_with_pause`
♻️ Imported from https://zenodo.org/record/4433196/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_transformer_accent_with_pause
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_transformer_accent_with_pause'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_transformer_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_transformer_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_transformer_prosody`
♻️ Imported from https://zenodo.org/record/5499040/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_transformer_prosody
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_transformer_prosody'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_transformer_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_transformer_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4032246/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4381102/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw-truncated-15ef5f
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4436448/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw-truncated-a7f080
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave`
♻️ Imported from https://zenodo.org/record/5499050/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
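Since this card imports a packed model from a Zenodo record, an alternative to `Text2Speech.from_pretrained` is to download and unpack it explicitly with `espnet_model_zoo`'s `ModelDownloader`. The sketch below assumes `espnet` and `espnet_model_zoo` are installed; the helper names (`zenodo_record_id`, `build_tts`) are illustrative.

```python
# Hedged sketch: unpacking a packed ESPnet2 model via espnet_model_zoo's
# ModelDownloader, as an alternative to Text2Speech.from_pretrained().
# Assumes `espnet` and `espnet_model_zoo` are installed; helper names are illustrative.

def zenodo_record_id(url: str) -> str:
    """Extract the record id from a URL like 'https://zenodo.org/record/5499040/'."""
    return url.rstrip("/").rsplit("/", 1)[-1]


def build_tts(tag: str):
    """Download and unpack the packed model, then build a Text2Speech instance."""
    from espnet_model_zoo.downloader import ModelDownloader
    from espnet2.bin.tts_inference import Text2Speech

    d = ModelDownloader()
    # download_and_unpack() returns the config/checkpoint paths Text2Speech expects,
    # so they can be splatted directly into the constructor.
    return Text2Speech(**d.download_and_unpack(tag))
```

`ModelDownloader` accepts either a Hub-style tag or a Zenodo URL, caching the unpacked archive locally so repeated calls skip the download.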
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw-truncated-569e81
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4391409/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_transformer_teacher_r-truncated-35ef5a
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4433198/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_transformer_teacher_r-truncated-74c1b4
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave`
♻️ Imported from https://zenodo.org/record/5499066/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_conformer_fastspeech2_transformer_teacher_r-truncated-f43d8f
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4032224/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4381100/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
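Until the official demo lands, inference can be sketched with ESPnet2's `Text2Speech` interface. This is a hedged example, not the card's official usage: it assumes `espnet` and `espnet_model_zoo` are installed, and the model tag is copied from this card's name, so resolution on the Hub may differ.

```python
# Sketch of ESPnet2 TTS inference via Text2Speech.from_pretrained.
# Assumption: `pip install espnet espnet_model_zoo` has been run and the
# model tag below (taken from this card) resolves to a downloadable model.
def synthesize(
    text: str,
    model_tag: str = (
        "kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_"
        "raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave"
    ),
):
    """Return the synthesized waveform (a torch tensor) for `text`."""
    # Deferred import: espnet is a heavy optional dependency.
    from espnet2.bin.tts_inference import Text2Speech

    tts = Text2Speech.from_pretrained(model_tag)
    output = tts(text)  # dict-like result; "wav" holds the waveform
    return output["wav"]
```

A call such as `synthesize("こんにちは")` would then yield audio at the model's sampling rate (`tts.fs`).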
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jacon-truncated-f45dcb
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4436450/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jacon-truncated-e5d906
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech2_tacotron2_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4391405/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jac-truncated-6f4cf5
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4433200/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jac-truncated-60fc24
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech2_transformer_teacher_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_fastspeech_raw_phn_jaconv_pyopenjtalk_train.loss.best`
♻️ Imported from https://zenodo.org/record/3986225/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_fastspeech_raw_phn_jaconv_pyopenjtalk_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_fastspeech_raw_phn_jaconv_pyopenjtalk_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech_raw_phn_jaconv_pyopenjtalk_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_fastspeech_raw_phn_jaconv_pyopenjtalk_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5431984/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
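As a placeholder for the pending demo, one way to synthesize and save audio with this full-band VITS model is sketched below. This is an assumption-laden example, not official usage: it presumes `espnet`, `espnet_model_zoo`, and `soundfile` are installed, and the model tag is copied verbatim from this card.

```python
# Sketch: synthesize speech with the full-band VITS model named in this card
# and write it to a WAV file. Assumes `pip install espnet espnet_model_zoo
# soundfile`; the model tag may resolve differently on the Hub.
def save_speech(text: str, out_path: str = "out.wav") -> str:
    # Deferred imports: both are heavy optional dependencies.
    from espnet2.bin.tts_inference import Text2Speech
    import soundfile as sf

    tts = Text2Speech.from_pretrained(
        "kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_"
        "pyopenjtalk_accent_with_pause_train.total_count.ave"
    )
    wav = tts(text)["wav"]
    # tts.fs is the model's sampling rate (full-band VITS runs at a
    # higher rate than the 22.05 kHz default).
    sf.write(out_path, wav.numpy(), tts.fs)
    return out_path
```

The returned path can then be played back or inspected with any audio tool.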
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_a-truncated-d7d5d0
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5521340/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_p-truncated-66d5fc
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_full_band_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4381098/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4433194/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave`
♻️ Imported from https://zenodo.org/record/5499026/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'
Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.loss.best`
♻️ Imported from https://zenodo.org/record/3963886/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.URL'
Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_tacotron2_raw_phn_jaconv_pyopenjtalk_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4381096/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_train.URL'
Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4433196/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_acce-truncated-be0f66
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'
Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave`
♻️ Imported from https://zenodo.org/record/5499040/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'
Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_prosody_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4034121/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/jsut_tts_train_transformer_raw_phn_jaconv_pyopenjtalk_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5414980/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with-truncated-ba3566
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave'
Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5521354/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave'
Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_tts_train_vits_raw_phn_jaconv_pyopenjtalk_prosody_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_vits_accent_with_pause`
♻️ Imported from https://zenodo.org/record/5414980/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_vits_accent_with_pause
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_vits_accent_with_pause'
Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_vits_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_vits_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jsut_vits_prosody`
♻️ Imported from https://zenodo.org/record/5521354/
This model was trained by kan-bayashi using jsut/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jsut"]}
|
espnet/kan-bayashi_jsut_vits_prosody
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jsut_vits_prosody'
Imported from URL
This model was trained by kan-bayashi using jsut/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_vits_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jsut_vits_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jvs_jvs001_vits_accent_with_pause`
♻️ Imported from https://zenodo.org/record/5432540/
This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
|
espnet/kan-bayashi_jvs_jvs001_vits_accent_with_pause
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jvs",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jvs_jvs001_vits_accent_with_pause'
️ Imported from URL
This model was trained by kan-bayashi using jvs/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_jvs001_vits_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_jvs001_vits_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jvs_jvs010_vits_accent_with_pause`
♻️ Imported from https://zenodo.org/record/5432566/
This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
|
espnet/kan-bayashi_jvs_jvs010_vits_accent_with_pause
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jvs",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jvs_jvs010_vits_accent_with_pause'
️ Imported from URL
This model was trained by kan-bayashi using jvs/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_jvs010_vits_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_jvs010_vits_accent_with_pause'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jvs_jvs010_vits_prosody`
♻️ Imported from https://zenodo.org/record/5521494/
This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
|
espnet/kan-bayashi_jvs_jvs010_vits_prosody
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jvs",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jvs_jvs010_vits_prosody'
️ Imported from URL
This model was trained by kan-bayashi using jvs/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_jvs010_vits_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_jvs010_vits_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jvs_tts_finetune_jvs001_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest`
♻️ Imported from https://zenodo.org/record/5432540/
This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
|
espnet/kan-bayashi_jvs_tts_finetune_jvs001_jsut_vits_raw_phn_jaconv_pyopenjta-truncated-178804
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jvs",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jvs_tts_finetune_jvs001_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest'
️ Imported from URL
This model was trained by kan-bayashi using jvs/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_tts_finetune_jvs001_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_tts_finetune_jvs001_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest`
♻️ Imported from https://zenodo.org/record/5432566/
This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
|
espnet/kan-bayashi_jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjta-truncated-d57a28
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jvs",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest'
️ Imported from URL
This model was trained by kan-bayashi using jvs/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_accent_with_pause_latest'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest`
♻️ Imported from https://zenodo.org/record/5521494/
This model was trained by kan-bayashi using jvs/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["jvs"]}
|
espnet/kan-bayashi_jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:jvs",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest'
️ Imported from URL
This model was trained by kan-bayashi using jvs/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-jvs #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/jvs_tts_finetune_jvs010_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using jvs/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_gst+xvector_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4418774/
This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
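As the demo above is still a placeholder, here is a hedged sketch of speaker-conditioned inference for this x-vector model. It assumes `espnet` and `espnet_model_zoo` are installed, that the `Text2Speech` call accepts the speaker embedding via its `spembs` keyword (the usual interface for x-vector-conditioned ESPnet2 models), and that you have an x-vector for the target speaker extracted separately (e.g. with a Kaldi-style extractor).

```python
def synthesize_with_speaker(
    text,
    spembs,
    model_tag="espnet/kan-bayashi_libritts_gst_xvector_conformer_fastspeech2",
):
    """Sketch only: synthesize `text` in the voice described by `spembs`.

    `spembs` is a 1-D array holding the target speaker's x-vector,
    extracted from reference audio with an external speaker encoder.
    """
    from espnet2.bin.tts_inference import Text2Speech  # lazy import

    tts = Text2Speech.from_pretrained(model_tag)
    out = tts(text, spembs=spembs)  # spembs conditions the speaker identity
    return out["wav"], tts.fs

# Example (requires espnet + espnet_model_zoo, network access, and a
# precomputed x-vector for the target speaker):
#   import numpy as np
#   xvec = np.load("target_speaker_xvector.npy")
#   wav, fs = synthesize_with_speaker("Hello world.", xvec)
```

Swapping the x-vector changes the synthesized voice without retraining, which is the point of the x-vector conditioning in this recipe.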
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_gst_xvector_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/libritts_gst+xvector_conformer_fastspeech2'
️ Imported from URL
This model was trained by kan-bayashi using libritts/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_gst+xvector_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_gst+xvector_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_gst+xvector_trasnformer`
♻️ Imported from https://zenodo.org/record/4409702/
This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_gst_xvector_trasnformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/libritts_gst+xvector_trasnformer'
️ Imported from URL
This model was trained by kan-bayashi using libritts/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_gst+xvector_trasnformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_gst+xvector_trasnformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss`
♻️ Imported from https://zenodo.org/record/4418774/
This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_tts_train_gst_xvector_conformer_fastspeech2_trans-truncated-c3209b
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/libritts_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss'
️ Imported from URL
This model was trained by kan-bayashi using libritts/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_tts_train_gst+xvector_trasnformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4409702/
This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_tts_train_gst_xvector_trasnformer_raw_phn_tacotro-truncated-250027
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/libritts_tts_train_gst+xvector_trasnformer_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using libritts/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_tts_train_gst+xvector_trasnformer_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_tts_train_gst+xvector_trasnformer_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_tts_train_xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss`
♻️ Imported from https://zenodo.org/record/4418754/
This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
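Until the official demo is published, here is a minimal, hedged usage sketch (not the official demo): it assumes the `espnet` and `espnet_model_zoo` packages are installed, and the zero x-vector below is only a placeholder that must be replaced with a real speaker embedding to get a sensible voice.

```python
# Hedged sketch, not the official demo: load this pretrained multi-speaker
# FastSpeech 2 model via ESPnet2's Text2Speech helper. Requires the `espnet`
# and `espnet_model_zoo` packages; the model is downloaded on first use.
import numpy as np
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_libritts_tts_train_xvector_conformer_fastspeech2_transform-truncated-42b443"
)

# This recipe conditions on x-vectors; a zero vector is only a placeholder.
# Supply an embedding extracted from reference speech for a real speaker.
spembs = np.zeros(512, dtype=np.float32)
wav = tts("Hello from ESPnet2.", spembs=spembs)["wav"]
```

The returned `wav` is a 1-D waveform tensor that can be written out with, e.g., `soundfile.write`.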
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_tts_train_xvector_conformer_fastspeech2_transform-truncated-42b443
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/libritts_tts_train_xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss'
Imported from URL
This model was trained by kan-bayashi using libritts/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_tts_train_xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_tts_train_xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_tts_train_xvector_trasnformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4409704/
This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_tts_train_xvector_trasnformer_raw_phn_tacotron_g2-truncated-e5fb13
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/libritts_tts_train_xvector_trasnformer_raw_phn_tacotron_g2p_en_no_space_train.URL'
Imported from URL
This model was trained by kan-bayashi using libritts/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_tts_train_xvector_trasnformer_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_tts_train_xvector_trasnformer_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/libritts_tts_train_xvector_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5521416/
This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_tts_train_xvector_vits_raw_phn_tacotron_g2p_en_no-truncated-09d645
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/libritts_tts_train_xvector_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'
Imported from URL
This model was trained by kan-bayashi using libritts/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/libritts_tts_train_xvector_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/libritts_tts_train_xvector_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_xvector_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4418754/
This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_xvector_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/libritts_xvector_conformer_fastspeech2'
Imported from URL
This model was trained by kan-bayashi using libritts/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_xvector_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_xvector_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/libritts_xvector_trasnformer`
♻️ Imported from https://zenodo.org/record/4409704/
This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_xvector_trasnformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/libritts_xvector_trasnformer'
Imported from URL
This model was trained by kan-bayashi using libritts/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_xvector_trasnformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/libritts_xvector_trasnformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/libritts_xvector_vits`
♻️ Imported from https://zenodo.org/record/5521416/
This model was trained by kan-bayashi using libritts/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["libritts"]}
|
espnet/kan-bayashi_libritts_xvector_vits
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:libritts",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/libritts_xvector_vits'
Imported from URL
This model was trained by kan-bayashi using libritts/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/libritts_xvector_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-libritts #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/libritts_xvector_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using libritts/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4036268/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/ljspeech_conformer_fastspeech2'
Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_fastspeech`
♻️ Imported from https://zenodo.org/record/3986231/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_fastspeech
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/ljspeech_fastspeech'
Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_fastspeech'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_fastspeech'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_fastspeech2`
♻️ Imported from https://zenodo.org/record/4036272/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/ljspeech_fastspeech2'
Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_joint_finetune_conformer_fastspeech2_hifigan`
♻️ Imported from https://zenodo.org/record/5498896/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_joint_finetune_conformer_fastspeech2_hifigan
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/ljspeech_joint_finetune_conformer_fastspeech2_hifigan'
Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_joint_finetune_conformer_fastspeech2_hifigan'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_joint_finetune_conformer_fastspeech2_hifigan'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_joint_train_conformer_fastspeech2_hifigan`
♻️ Imported from https://zenodo.org/record/5498487/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_joint_train_conformer_fastspeech2_hifigan
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/ljspeech_joint_train_conformer_fastspeech2_hifigan'
♻️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_joint_train_conformer_fastspeech2_hifigan'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_joint_train_conformer_fastspeech2_hifigan'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tacotron2`
♻️ Imported from https://zenodo.org/record/3989498/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/ljspeech_tacotron2'
♻️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_transformer`
♻️ Imported from https://zenodo.org/record/4039194/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_transformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/ljspeech_transformer'
♻️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5498896/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_-truncated-737899
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'
♻️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_tts_finetune_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4036268/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_conformer_fastspeech2_raw_phn_tacotron_-truncated-ec9e34
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/ljspeech_tts_train_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'
♻️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4036272/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/ljspeech_tts_train_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'
♻️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.loss.best`
♻️ Imported from https://zenodo.org/record/3986231/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/ljspeech_tts_train_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.URL'
♻️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_tts_train_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5498487/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_joint_conformer_fastspeech2_hifigan_raw-truncated-af8fe0
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/ljspeech_tts_train_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'
♻️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_tts_train_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_tts_train_joint_conformer_fastspeech2_hifigan_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best`
♻️ Imported from https://zenodo.org/record/3989498/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'
♻️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/ljspeech_tts_train_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4039194/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/ljspeech_tts_train_transformer_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_transformer_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/ljspeech_tts_train_transformer_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5443814/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
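While the official demo is still marked "coming soon," a minimal usage sketch is possible with ESPnet2's `Text2Speech` inference interface. This assumes `espnet` and `espnet_model_zoo` are installed (`pip install espnet espnet_model_zoo`) and that the model is fetched by its Hub ID; the first call downloads the pretrained weights.

```python
# Hypothetical usage sketch, not an official demo.
# Requires: pip install espnet espnet_model_zoo soundfile
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Load this pretrained VITS model by its Hugging Face Hub ID.
tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave"
)

# Synthesize speech; the returned dict holds the waveform under "wav".
output = tts("This is a synthesis example.")
sf.write("output.wav", output["wav"].numpy(), tts.fs)
```

The exact text-frontend behavior (Tacotron G2P, no spaces) is handled internally by the packaged config, so plain English text can be passed directly.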
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'
️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_tts_train_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/ljspeech_vits`
♻️ Imported from https://zenodo.org/record/5443814/
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
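Until the official demo lands, a minimal inference sketch with ESPnet2's `Text2Speech` interface (assuming `espnet` and `espnet_model_zoo` are installed) might look like this; the pretrained weights are pulled from the Hub on first use.

```python
# Hypothetical usage sketch, not an official demo.
# Requires: pip install espnet espnet_model_zoo soundfile
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Load the pretrained LJSpeech VITS model by its Hub ID.
tts = Text2Speech.from_pretrained("espnet/kan-bayashi_ljspeech_vits")

# Run end-to-end synthesis; VITS generates the waveform directly,
# so no separate vocoder is needed.
output = tts("Hello, this is a test of the pretrained VITS model.")
sf.write("output.wav", output["wav"].numpy(), tts.fs)
```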
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"]}
|
espnet/kan-bayashi_ljspeech_vits
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/ljspeech_vits'
️ Imported from URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/ljspeech_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/tsukuyomi_full_band_vits_prosody`
♻️ Imported from https://zenodo.org/record/5521446/
This model was trained by kan-bayashi using tsukuyomi/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
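Pending the official demo, a minimal inference sketch via ESPnet2's `Text2Speech` interface could look as follows (assuming `espnet` and `espnet_model_zoo` are installed). Since this is a Japanese model, Japanese input text is expected.

```python
# Hypothetical usage sketch, not an official demo.
# Requires: pip install espnet espnet_model_zoo soundfile
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Load the pretrained full-band VITS model by its Hub ID.
tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_tsukuyomi_full_band_vits_prosody"
)

# Synthesize Japanese speech; the waveform is returned under "wav".
output = tts("こんにちは、音声合成のテストです。")
sf.write("output.wav", output["wav"].numpy(), tts.fs)
```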
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["tsukuyomi"]}
|
espnet/kan-bayashi_tsukuyomi_full_band_vits_prosody
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:tsukuyomi",
"arxiv:1804.00015",
"license:cc-by-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-tsukuyomi #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/tsukuyomi_full_band_vits_prosody'
️ Imported from URL
This model was trained by kan-bayashi using tsukuyomi/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/tsukuyomi_full_band_vits_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using tsukuyomi/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-tsukuyomi #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/tsukuyomi_full_band_vits_prosody'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using tsukuyomi/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |