pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (stringlengths 1–900k) | metadata (stringlengths 2–438k) | id (stringlengths 5–122) | last_modified (null) | tags (listlengths 1–1.84k) | sha (null) | created_at (stringlengths 25–25) | arxiv (listlengths 0–201) | languages (listlengths 0–1.83k) | tags_str (stringlengths 17–9.34k) | text_str (stringlengths 0–389k) | text_lists (listlengths 0–722) | processed_texts (listlengths 1–723)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest`
♻️ Imported from https://zenodo.org/record/5521446/
This model was trained by kan-bayashi using tsukuyomi/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
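The official snippet is still marked as coming soon. As a stopgap, here is a minimal inference sketch; it assumes only the generic `espnet2.bin.tts_inference.Text2Speech` API (with `espnet` and `espnet_model_zoo` installed), not anything published with this card:
```python
# Hedged usage sketch; the official demo is still "coming soon".
# Assumes: pip install espnet espnet_model_zoo soundfile
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# Resolve the checkpoint by its Hugging Face model tag (assumes the
# espnet_model_zoo downloader can fetch this repository).
tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest"
)

# Japanese input; the recipe's jaconv/pyopenjtalk front end handles g2p.
output = tts("こんにちは、世界。")

# VITS is end-to-end, so the output dict already contains a waveform.
sf.write("out.wav", output["wav"].numpy(), tts.fs)
```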
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["tsukuyomi"]}
|
espnet/kan-bayashi_tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest
| null |
[
"espnet",
"audio",
"text-to-speech",
"ja",
"dataset:tsukuyomi",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #text-to-speech #ja #dataset-tsukuyomi #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest'
️ Imported from URL
This model was trained by kan-bayashi using tsukuyomi/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using tsukuyomi/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #ja #dataset-tsukuyomi #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/tsukuyomi_tts_finetune_full_band_jsut_vits_raw_phn_jaconv_pyopenjtalk_prosody_latest'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using tsukuyomi/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_full_band_multi_spk_vits`
♻️ Imported from https://zenodo.org/record/5521431/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
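Again, no official snippet yet; the sketch below is an assumption that this multi-speaker checkpoint follows the usual VCTK VITS convention of selecting the voice with an integer speaker ID (`sids`):
```python
# Hedged sketch for the multi-speaker VITS checkpoint (not an official demo).
# Assumes: pip install espnet espnet_model_zoo soundfile
import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained("espnet/kan-bayashi_vctk_full_band_multi_spk_vits")

# Pick a speaker index; valid IDs depend on the checkpoint's VCTK speaker list.
output = tts("Hello from a multi-speaker VITS model.", sids=np.array(4))
sf.write("out.wav", output["wav"].numpy(), tts.fs)
```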
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_full_band_multi_spk_vits
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/vctk_full_band_multi_spk_vits'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/vctk_full_band_multi_spk_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/vctk_full_band_multi_spk_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4036264/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_gst_conformer_fastspeech2'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #has_space #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst_fastspeech`
♻️ Imported from https://zenodo.org/record/3986241/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_fastspeech
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_gst_fastspeech'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst_fastspeech'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst_fastspeech'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst_fastspeech2`
♻️ Imported from https://zenodo.org/record/4036266/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_gst_fastspeech2'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst_tacotron2`
♻️ Imported from https://zenodo.org/record/3986237/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_gst_tacotron2'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst_transformer`
♻️ Imported from https://zenodo.org/record/4037456/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_transformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_gst_transformer'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst+xvector_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4394608/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
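The demo is likewise pending. The sketch below is only an assumption about how a GST + x-vector acoustic model is usually driven in ESPnet2: a style reference utterance for the global style tokens, a precomputed speaker x-vector (`spembs`), and a separately trained neural vocoder to turn the generated mel-spectrogram into audio. The file names, x-vector extractor, and vocoder tag are illustrative, not part of this card:
```python
# Hedged sketch for a GST + x-vector FastSpeech2 model (not an official demo).
# Assumes: pip install espnet espnet_model_zoo parallel_wavegan soundfile
import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

# The acoustic model only predicts mel-spectrograms, so a vocoder is attached;
# the tag below is one of the VCTK vocoders from the ParallelWaveGAN project
# (an assumption; check availability before relying on it).
tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_vctk_gst_xvector_conformer_fastspeech2",
    vocoder_tag="parallel_wavegan/vctk_parallel_wavegan.v1.long",
)

# Style reference for the GST encoder, assumed to be already at tts.fs.
ref_wav, _ = sf.read("reference.wav", dtype="float32")

# Precomputed x-vector for the target speaker; its dimension must match the
# extractor used in the vctk/tts1 recipe.
xvector = np.load("target_speaker_xvector.npy").astype(np.float32)

output = tts(
    "The style and the speaker are taken from the reference inputs.",
    speech=ref_wav,
    spembs=xvector,
)
sf.write("out.wav", output["wav"].numpy(), tts.fs)
```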
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_xvector_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_gst+xvector_conformer_fastspeech2'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst+xvector_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst+xvector_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst+xvector_tacotron2`
♻️ Imported from https://zenodo.org/record/4394598/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_xvector_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_gst+xvector_tacotron2'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst+xvector_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst+xvector_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_gst+xvector_transformer`
♻️ Imported from https://zenodo.org/record/4393277/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_gst_xvector_transformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_gst+xvector_transformer'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst+xvector_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_gst+xvector_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_multi_spk_vits`
♻️ Imported from https://zenodo.org/record/5500759/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_multi_spk_vits
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/vctk_multi_spk_vits'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/vctk_multi_spk_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/vctk_multi_spk_vits'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_tts_train_full_band_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5521431/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_full_band_multi_spk_vits_raw_phn_tacotron_g-truncated-50b003
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/vctk_tts_train_full_band_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/vctk_tts_train_full_band_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/vctk_tts_train_full_band_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4036264/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_conformer_fastspeech2_raw_phn_tacotron_-truncated-69081b
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_tts_train_gst_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst_conformer_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4036266/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_tts_train_gst_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst_fastspeech2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.loss.best`
♻️ Imported from https://zenodo.org/record/3986241/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_tts_train_gst_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst_fastspeech_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best`
♻️ Imported from https://zenodo.org/record/3986237/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.best
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_tts_train_gst_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4037456/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_tts_train_gst_transformer_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst_transformer_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst_transformer_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4394608/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_xvector_conformer_fastspeech2_transform-truncated-e051a9
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst+xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_gst+xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4394598/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_gst_xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_tts_train_gst+xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst+xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_gst+xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## ESPnet2 TTS pretrained model
### `kan-bayashi/vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave`
♻️ Imported from https://zenodo.org/record/5500759/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
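As a stop-gap while the snippet above is pending, the sketch below shows how a multi-speaker VITS checkpoint like this one is typically driven with an integer speaker ID; the chosen speaker index and output file name are arbitrary assumptions, and since VITS is end-to-end the call returns a waveform directly.
```python
# Hedged sketch only -- not the card's official demo.
import numpy as np
import soundfile as sf
from espnet2.bin.tts_inference import Text2Speech

tts = Text2Speech.from_pretrained(
    "espnet/kan-bayashi_vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave"
)

sids = np.array(4)  # arbitrary speaker index; the valid range depends on the VCTK speaker table
out = tts("Multi-speaker VITS synthesizes the waveform end to end.", sids=sids)
sf.write("vits_sample.wav", out["wav"].view(-1).cpu().numpy(), tts.fs)
```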
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 TTS pretrained model
### 'kan-bayashi/vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 TTS pretrained model",
"### 'kan-bayashi/vctk_tts_train_multi_spk_vits_raw_phn_tacotron_g2p_en_no_space_train.total_count.ave'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4394602/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_xvector_conformer_fastspeech2_transformer_t-truncated-69a657
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_tts_train_xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_xvector_conformer_fastspeech2_transformer_teacher_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4394600/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_tts_train_xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_xvector_tacotron2_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_tts_train_xvector_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave`
♻️ Imported from https://zenodo.org/record/4393279/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_tts_train_xvector_transformer_raw_phn_tacotron_g2p_en_no_space_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_tts_train_xvector_transformer_raw_phn_tacotron_g2p_en_no_space_train.URL'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_xvector_transformer_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_tts_train_xvector_transformer_raw_phn_tacotron_g2p_en_no_space_train.URL'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_xvector_conformer_fastspeech2`
♻️ Imported from https://zenodo.org/record/4394602/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_xvector_conformer_fastspeech2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_xvector_conformer_fastspeech2'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_xvector_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_xvector_conformer_fastspeech2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_xvector_tacotron2`
♻️ Imported from https://zenodo.org/record/4394600/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_xvector_tacotron2
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_xvector_tacotron2'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_xvector_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_xvector_tacotron2'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
## Example ESPnet2 TTS model
### `kan-bayashi/vctk_xvector_transformer`
♻️ Imported from https://zenodo.org/record/4393279/
This model was trained by kan-bayashi using vctk/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["vctk"]}
|
espnet/kan-bayashi_vctk_xvector_transformer
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:vctk",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 TTS model
### 'kan-bayashi/vctk_xvector_transformer'
️ Imported from URL
This model was trained by kan-bayashi using vctk/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_xvector_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-vctk #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 TTS model",
"### 'kan-bayashi/vctk_xvector_transformer'\n️ Imported from URL\n\nThis model was trained by kan-bayashi using vctk/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
text-to-speech
|
espnet
|
# ESPnet2 TTS pretrained model
## `kan-bayashi/jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave`
♻️ Imported from <https://zenodo.org/record/4017026#.YN70XJozZH4>
This model was trained by kan-bayashi using ljspeech/tts1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Training config
See full config in [`config.yaml`](./config.yaml)
```yaml
config: conf/tuning/train_conformer_fastspeech2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "text-to-speech"], "datasets": ["ljspeech"], "widget": [{"text": "Hello, how are you doing?"}]}
|
espnet/kan_bayashi_jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.loss.ave
| null |
[
"espnet",
"audio",
"text-to-speech",
"en",
"dataset:ljspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
# ESPnet2 TTS pretrained model
## 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.URL'
️ Imported from <URL
This model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
### Training config
See full config in 'URL'
|
[
"# ESPnet2 ASR pretrained model",
"## 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.URL'\n\n️ Imported from <URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:",
"### Training config\n\nSee full config in 'URL'"
] |
[
"TAGS\n#espnet #audio #text-to-speech #en #dataset-ljspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"# ESPnet2 ASR pretrained model",
"## 'kan-bayashi/jsut_tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk_train.URL'\n\n️ Imported from <URL\n\nThis model was trained by kan-bayashi using ljspeech/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:",
"### Training config\n\nSee full config in 'URL'"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char`
This model was trained by Pengcheng Guo using wenetspeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 5c21f63e45e0961a5d817017c282b0cafd68a3aa
pip install -e .
cd egs2/wenetspeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char
```
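For Python-side inference, a minimal sketch along the lines of the standard ESPnet2 interface could look like this; the example wav path is a placeholder and the audio is assumed to be 16 kHz mono to match the frontend configuration shown below.
```python
# Hedged sketch of Python inference; the wav path is a placeholder.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char"
)

speech, rate = soundfile.read("example_zh.wav")  # 16 kHz mono assumed
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)
```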
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Oct 6 15:11:20 CST 2021`
- python version: `3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]`
- espnet version: `espnet 0.10.2a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: ``
- Commit date: ``
## asr_train_asr_conformer_raw_zh_char
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/aishell_test|7176|7176|67.1|32.9|0.0|0.1|33.0|32.9|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/dev|13825|16684|32.1|54.1|13.8|0.1|68.0|64.2|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_meeting|8370|8599|13.4|84.6|2.0|0.1|86.7|86.8|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_net|24774|25995|46.2|50.4|3.4|1.1|54.9|52.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/aishell_test|7176|104765|96.3|3.6|0.1|0.2|3.9|32.9|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/dev|13825|333357|90.7|3.4|5.9|0.4|9.7|64.2|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_meeting|8370|220614|84.6|5.0|10.4|0.5|15.9|86.8|
|decode_asr_rnn_asr_model_valid.acc.ave_10best/test_net|24774|416968|91.8|5.3|2.9|0.6|8.8|52.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_zh_char
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 44205
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 30
patience: null
val_scheduler_criterion:
- valid
- acc
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 30000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char/train/speech_shape
- exp/asr_stats_raw_zh_char/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char/valid/speech_shape
- exp/asr_stats_raw_zh_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_l/wav.scp
- speech
- sound
- - dump/raw/train_l/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0015
scheduler: warmuplr
scheduler_conf:
warmup_steps: 30000
token_list:
- <blank>
- <unk>
- 的
- 我
- 是
- 你
- 了
- 一
- 不
- 这
- 个
- 有
- 就
- 们
- 在
- 他
- 人
- 么
- 来
- 说
- 那
- 要
- 好
- 啊
- 大
- 到
- 上
- 也
- 没
- 都
- 去
- 能
- 子
- 会
- 为
- 得
- 时
- 还
- 可
- 以
- 什
- 家
- 后
- 看
- 呢
- 对
- 事
- 天
- 下
- 过
- 想
- 多
- 小
- 出
- 自
- 儿
- 生
- 给
- 里
- 现
- 着
- 然
- 吧
- 样
- 道
- 吗
- 心
- 跟
- 中
- 很
- 点
- 年
- 和
- 地
- 怎
- 知
- 十
- 老
- 当
- 把
- 话
- 别
- 所
- 之
- 情
- 实
- 开
- 面
- 回
- 行
- 国
- 做
- 己
- 经
- 如
- 真
- 起
- 候
- 些
- 让
- 发
- 她
- 觉
- 但
- 成
- 定
- 意
- 二
- 长
- 最
- 方
- 三
- 前
- 因
- 用
- 呀
- 种
- 只
- 走
- 其
- 问
- 再
- 果
- 而
- 分
- 两
- 打
- 学
- 间
- 您
- 本
- 于
- 明
- 手
- 公
- 听
- 比
- 作
- 女
- 太
- 今
- 从
- 关
- 妈
- 同
- 法
- 动
- 已
- 见
- 才
- 孩
- 感
- 吃
- 常
- 次
- 它
- 进
- 先
- 找
- 身
- 全
- 理
- 又
- 力
- 正
- 主
- 应
- 高
- 被
- 钱
- 快
- 等
- 头
- 重
- 车
- 谢
- 日
- 东
- 放
- 无
- 工
- 咱
- 哪
- 五
- 者
- 像
- 西
- 该
- 干
- 相
- 信
- 机
- 百
- 特
- 业
- 活
- 师
- 边
- 爱
- 友
- 新
- 外
- 位
- 更
- 直
- 几
- 第
- 非
- 四
- 题
- 接
- 少
- 哥
- 死
- 完
- 刚
- 电
- 气
- 安
- 爸
- 白
- 告
- 美
- 解
- 叫
- 月
- 带
- 欢
- 谁
- 体
- 喜
- 部
- 场
- 姐
- 军
- 万
- 结
- 合
- 难
- 八
- 每
- 目
- 亲
- 朋
- 认
- 总
- 加
- 通
- 办
- 马
- 件
- 受
- 任
- 请
- 住
- 王
- 思
- 门
- 名
- 平
- 系
- 文
- 帮
- 路
- 变
- 记
- 水
- 九
- 算
- 将
- 口
- 男
- 度
- 报
- 六
- 张
- 管
- 够
- 性
- 表
- 提
- 何
- 讲
- 期
- 拿
- 保
- 嘛
- 司
- 原
- 始
- 此
- 诉
- 处
- 清
- 内
- 产
- 金
- 晚
- 早
- 交
- 离
- 眼
- 队
- 七
- 入
- 山
- 代
- 市
- 海
- 物
- 零
- 望
- 世
- 婚
- 命
- 越
- 收
- 向
- 花
- 房
- 错
- 节
- 父
- 反
- 战
- 买
- 量
- 或
- 员
- 号
- 千
- 怕
- 底
- 且
- 品
- 民
- 化
- 爷
- 并
- 与
- 服
- 需
- 资
- 求
- 教
- 娘
- 医
- 数
- 院
- 书
- 利
- 往
- 确
- 各
- 单
- 风
- 送
- 必
- 条
- 包
- 准
- 光
- 整
- 病
- 弟
- 嗯
- 计
- 照
- 强
- 务
- 影
- 城
- 夫
- 俩
- 决
- 声
- 连
- 乐
- 息
- 远
- 北
- 至
- 饭
- 留
- 宝
- 神
- 近
- 考
- 备
- 案
- 界
- 容
- 况
- 母
- 较
- 持
- 证
- 选
- 制
- 程
- 喝
- 害
- 字
- 失
- 立
- 台
- 玩
- 查
- 块
- 便
- 挺
- 段
- 周
- 由
- 句
- 紧
- 李
- 据
- 杀
- 南
- 商
- 识
- 网
- 式
- 愿
- 传
- 流
- 消
- 伤
- 根
- 演
- 希
- 故
- 坐
- 建
- 注
- 许
- 调
- 共
- 空
- 半
- 却
- 酒
- 联
- 微
- 言
- 肯
- 赶
- 跑
- 笑
- 区
- 岁
- 红
- 达
- 官
- 轻
- 易
- 火
- 线
- 拉
- 首
- 导
- 团
- 慢
- 指
- 写
- 深
- 论
- 片
- 改
- 啥
- 满
- 步
- 音
- 功
- 聊
- 客
- 未
- 格
- 基
- 睡
- 观
- 份
- 视
- 色
- 价
- 政
- 转
- 终
- 复
- 啦
- 呃
- 阿
- 倒
- 义
- 警
- 林
- 使
- 科
- 运
- 苦
- 待
- 费
- 随
- 救
- 试
- 班
- 敢
- 精
- 及
- 术
- 造
- 续
- 养
- 展
- 答
- 绝
- 众
- 站
- 妹
- 差
- 谈
- 卖
- 播
- 创
- 领
- 象
- 志
- 投
- 习
- 兄
- 元
- 皇
- 专
- 态
- 急
- 局
- 兴
- 楚
- 飞
- 护
- 装
- 热
- 奶
- 取
- 设
- 游
- 读
- 福
- 药
- 担
- 历
- 忙
- 规
- 掉
- 刘
- 切
- 断
- 尽
- 社
- 久
- 支
- 板
- 星
- 姑
- 曾
- 突
- 除
- 华
- 责
- 排
- 京
- 值
- 士
- 统
- 换
- 德
- 衣
- 组
- 示
- 脸
- 刻
- 黑
- 遇
- 虽
- 顾
- 戏
- 怪
- 懂
- 叔
- 夜
- 陈
- 亮
- 江
- 兵
- 负
- 布
- 青
- 落
- 推
- 假
- 类
- 令
- 技
- 英
- 质
- 黄
- 治
- 形
- 助
- 球
- 歌
- 参
- 广
- 继
- 简
- 画
- 奇
- 陪
- 阳
- 险
- 须
- 念
- 迎
- 幸
- 抓
- 破
- 另
- 争
- 竟
- 户
- 律
- 择
- 究
- 龙
- 足
- 店
- 脑
- 斯
- 党
- 权
- 约
- 疑
- 议
- 严
- 密
- 克
- 存
- 穿
- 承
- 校
- 击
- 际
- 标
- 云
- 营
- 察
- 超
- 食
- 集
- 级
- 礼
- 静
- 背
- 武
- 初
- 拍
- 梦
- 验
- 响
- 角
- 石
- 股
- 追
- 怀
- 婆
- 适
- 独
- 忘
- 血
- 醒
- 具
- 罪
- 享
- 毛
- 香
- 状
- 配
- 靠
- 语
- 仅
- 低
- 细
- 米
- 既
- 钟
- 极
- 停
- 味
- 则
- 油
- 器
- 楼
- 菜
- 研
- 互
- 压
- 贵
- 村
- 属
- 派
- 乎
- 坏
- 控
- 显
- 图
- 双
- 职
- 永
- 哈
- 鬼
- 依
- 料
- 按
- 府
- 坚
- 某
- 甚
- 居
- 练
- 顺
- 模
- 即
- 州
- 引
- 乱
- 速
- 庭
- 朝
- 室
- 似
- 付
- 划
- 尔
- 境
- 犯
- 烦
- 环
- 伙
- 巴
- 春
- 古
- 妇
- 势
- 款
- 增
- 财
- 河
- 守
- 虑
- 汉
- 枪
- 妻
- 爹
- 弄
- 委
- 企
- 冲
- 置
- 麻
- 育
- 项
- 防
- 胡
- 杨
- 致
- 辈
- 括
- 毕
- 卫
- 修
- 史
- 型
- 牌
- 嘴
- 苏
- 群
- 举
- 痛
- 座
- 概
- 搞
- 围
- 土
- 毒
- 唱
- 冷
- 累
- 玉
- 获
- 误
- 跳
- 脚
- 雨
- 剧
- 休
- 皮
- 止
- 济
- 肉
- 丽
- 借
- 铁
- 牛
- 哭
- 招
- 闹
- 银
- 优
- 温
- 狗
- 退
- 洗
- 拜
- 否
- 票
- 偷
- 抱
- 博
- 般
- 效
- 套
- 维
- 普
- 康
- 富
- 宫
- 索
- 罗
- 堂
- 智
- 省
- 介
- 孙
- 灵
- 评
- 藏
- 称
- 课
- 货
- 姨
- 艺
- 骗
- 雪
- 赛
- 景
- 昨
- 健
- 鱼
- 激
- 危
- 熟
- 圈
- 闻
- 监
- 替
- 君
- 恋
- 良
- 掌
- 草
- 松
- 供
- 努
- 例
- 短
- 帝
- 姓
- 率
- 族
- 亿
- 赵
- 蛋
- 判
- 预
- 频
- 卡
- 架
- 纪
- 弃
- 秀
- 兰
- 层
- 检
- 伴
- 抗
- 讨
- 源
- 夏
- 咋
- 惊
- 录
- 善
- 补
- 刀
- 充
- 升
- 章
- 午
- 若
- 私
- 吴
- 素
- 旅
- 临
- 挑
- 唐
- 露
- 树
- 斗
- 舞
- 左
- 叶
- 副
- 晓
- 厂
- 弹
- 印
- 秘
- 屋
- 田
- 木
- 困
- 园
- 封
- 逃
- 批
- 馆
- 疼
- 败
- 陆
- 敌
- 散
- 采
- 翻
- 缺
- 胜
- 免
- 销
- 鸡
- 降
- 波
- 测
- 限
- 释
- 忍
- 归
- 床
- 餐
- 茶
- 码
- 宁
- 乡
- 辛
- 彩
- 亚
- 浪
- 漂
- 庆
- 训
- 范
- 烧
- 词
- 吵
- 媳
- 探
- 余
- 恐
- 积
- 农
- 遍
- 舒
- 顶
- 构
- 呼
- 丝
- 执
- 雅
- 惯
- 右
- 脱
- 恩
- 野
- 折
- 趣
- 笔
- 谓
- 盘
- 贝
- 宣
- 绍
- 嘉
- 宋
- 抢
- 嫌
- 尊
- 碰
- 绪
- 丢
- 厉
- 沙
- 轮
- 施
- 织
- 托
- 县
- 策
- 杯
- 逼
- 傻
- 束
- 街
- 疗
- 益
- 骨
- 迷
- 姻
- 恶
- 默
- 寻
- 搜
- 哦
- 材
- 吸
- 劳
- 勇
- 占
- 暴
- 船
- 徐
- 虎
- 融
- 异
- 审
- 攻
- 雷
- 稳
- 呗
- 输
- 睛
- 臣
- 端
- 威
- 秋
- 欧
- 冰
- 韩
- 减
- <space>
- 操
- 混
- 汽
- 暗
- 隐
- 嫂
- 沉
- 烟
- 顿
- 凭
- 洋
- 嫁
- 购
- 粉
- 遗
- 杂
- 协
- 尝
- 键
- 亡
- 秦
- 纸
- 拥
- 革
- 猫
- 伯
- 祝
- 签
- 傅
- 牙
- 湖
- 莫
- 杰
- 旁
- 港
- 劲
- 宗
- 偏
- 触
- 唯
- 吓
- 辆
- 沈
- 列
- 梅
- 祖
- 舍
- 尤
- 赚
- 疫
- 腾
- 拼
- 奖
- 刺
- 齐
- 诚
- 媒
- 戴
- 账
- 炸
- 骂
- 避
- 麦
- 爆
- 域
- 烈
- 暖
- 季
- 猜
- 佳
- 净
- 腿
- 磨
- 曲
- 虚
- 阵
- 荣
- 访
- 核
- 鲜
- 阶
- 镇
- 灯
- 估
- 剩
- 硬
- 租
- 敬
- 损
- 惜
- 挂
- 董
- 巨
- 忆
- 登
- 丈
- 帅
- 童
- 耳
- 央
- 软
- 移
- 略
- 额
- 厅
- 挥
- 透
- 络
- 弱
- 珍
- 恨
- 巧
- 丁
- 谋
- 孤
- 豆
- 诗
- 冒
- 狼
- 渐
- 峰
- 售
- 凡
- 聚
- 洞
- 抽
- 劝
- 闭
- 摆
- 冬
- 凶
- 魔
- 灭
- 雄
- 挣
- 搬
- 龄
- 朱
- 编
- 航
- 席
- 驾
- 授
- 鼓
- 握
- 隔
- 猪
- 仙
- 颜
- 镜
- 胖
- 赢
- 仇
- 晨
- 欺
- 刑
- 谷
- 旦
- 亏
- 盖
- 症
- 喊
- 蓝
- 讯
- 殿
- 梁
- 躲
- 旧
- 针
- 箱
- 丰
- 洲
- 鞋
- 征
- 蒙
- 伟
- 袋
- 庄
- 患
- 怨
- 佛
- 稍
- 朵
- 纳
- 吉
- 川
- 典
- 迹
- 瑞
- 废
- 搭
- 涨
- 汤
- 启
- 桌
- 摸
- 赔
- 宜
- 纯
- 贴
- 聪
- 熊
- 延
- 瓶
- 版
- 缘
- 距
- 甜
- 析
- 盛
- 孕
- 彻
- 桥
- 尚
- 染
- 撞
- 途
- 沟
- 疯
- 敏
- 瞧
- 漫
- 胆
- 诺
- 刷
- 饿
- 仍
- 喂
- 辞
- 迟
- 淡
- 郑
- 歉
- 扰
- 宾
- 圆
- 赞
- 肚
- 慧
- 泪
- 吹
- 拖
- 遭
- 穷
- 罚
- 悔
- 绿
- 忽
- 唉
- 毫
- 绩
- 暂
- 射
- 岛
- 拾
- 珠
- 欠
- 忠
- 陷
- 阴
- 尼
- 悲
- 糊
- 撤
- 徒
- 剑
- 币
- 娜
- 违
- 泡
- 仗
- 粮
- 培
- 趟
- 菲
- 拒
- 棒
- 脾
- 赏
- 窗
- 宇
- 闲
- 附
- 踏
- 彼
- 涉
- 锁
- 撒
- 魂
- 羊
- 述
- 屈
- 库
- 滚
- 凉
- 颗
- 寒
- 呐
- 墙
- 娃
- 序
- 迪
- 丹
- 扬
- 瞎
- 递
- 凤
- 碗
- 屁
- 锅
- 奔
- 幅
- 债
- 糖
- 奋
- 汇
- 圣
- 订
- 偶
- 残
- 宽
- 狂
- 鼠
- 狠
- 幕
- 固
- 竞
- 蜜
- 吐
- 摄
- 骑
- 篇
- 毁
- 尾
- 摇
- 奥
- 厚
- 妖
- 禁
- 逐
- 均
- 尸
- 冠
- 阅
- 辑
- 捕
- 载
- 郭
- 俺
- 诊
- 欲
- 扎
- 鸟
- 柔
- 迫
- 豪
- 踪
- 扔
- 碎
- 末
- 娶
- 扫
- 朕
- 励
- 乔
- 闺
- 档
- 厨
- 倍
- 湾
- 郎
- 幼
- 纷
- 奴
- 阻
- 饮
- 怒
- 妙
- 琴
- 曹
- 脏
- 牵
- 瓜
- 滴
- 炮
- 缓
- 含
- 献
- 柜
- 仔
- 艾
- 潜
- 赌
- 震
- 础
- 添
- 兔
- 焦
- 躺
- 森
- 肥
- 洪
- 孝
- 偿
- 悉
- 撑
- 甘
- 桃
- 苹
- 魏
- 鲁
- 池
- 狱
- 厌
- 纠
- 朗
- 贷
- 铺
- 殊
- 坦
- 爬
- 擦
- 酸
- 钢
- 咖
- 瞒
- 蛮
- 谅
- 耐
- 申
- 夸
- 欣
- 诶
- 驶
- 屏
- 烂
- 凌
- 甲
- 胎
- 仪
- 貌
- 番
- 涂
- 抬
- 舅
- 扯
- 鹿
- 摩
- 诸
- 秒
- 泽
- 埋
- 蒋
- 隆
- 赖
- 奸
- 咬
- 恢
- 宿
- 乖
- 邀
- 抵
- 臭
- 闪
- 莉
- 熬
- 链
- 盯
- 侦
- 灾
- 堆
- 灰
- 卷
- 盾
- 障
- 截
- 恰
- 佩
- 戒
- 莲
- 裁
- 芬
- 戚
- 匪
- 滑
- 趁
- 询
- 绑
- 辣
- 挖
- 俗
- 祸
- 符
- 扣
- 插
- 仁
- 壁
- 腰
- 斤
- 燕
- 筑
- 柱
- 夺
- 援
- 映
- 壮
- 杜
- 摔
- 润
- 恭
- 乌
- 慰
- 啡
- 著
- 井
- 跌
- 牢
- 荐
- 拔
- 惹
- 侯
- 玲
- 炎
- 胸
- 旗
- 牲
- 喽
- 涛
- 衡
- 矛
- 伍
- 贤
- 惨
- 糟
- 慌
- 伏
- 醉
- 仓
- 拆
- 乘
- 疾
- 鼻
- 潮
- 予
- 奉
- 伦
- 劫
- 伊
- 怜
- 孟
- 肺
- 忧
- 倾
- 矩
- 荒
- 奏
- 塔
- 塞
- 迅
- 轨
- 瞬
- 丫
- 狐
- 叛
- 繁
- 眠
- 孔
- 谱
- 悄
- 泰
- 姜
- 侵
- 妃
- 冯
- 柳
- 洛
- 岸
- 凯
- 陛
- 幺
- 仿
- 氏
- 窝
- 曼
- 挡
- 浩
- 盟
- 轩
- 牺
- 贫
- 绕
- 谎
- 措
- 扶
- 梯
- 炼
- 勤
- 霸
- 横
- 罢
- 呆
- 税
- 桂
- 哎
- 慕
- 植
- 允
- 荡
- 洁
- 肖
- 耗
- 贼
- 艰
- 贺
- 幻
- 饱
- 胃
- 袭
- 廷
- 泥
- 丧
- 缩
- 砸
- 姥
- 拦
- 扮
- 糕
- 肤
- 猴
- 脆
- 炒
- 耀
- 盗
- 邓
- 扩
- 纵
- 振
- 敲
- 鹏
- 姆
- 湿
- 丑
- 召
- 苗
- 伸
- 惑
- 碍
- 萨
- 瘦
- 闯
- 迁
- 坑
- 弯
- 卑
- 尖
- 遥
- 侠
- 犹
- 押
- 冤
- 钻
- 汗
- 闷
- 邻
- 淘
- 抛
- 妆
- 贾
- 侧
- 傲
- 描
- 耍
- 猛
- 薇
- 裤
- 憾
- 督
- 贸
- 墨
- 勒
- 薄
- 嘞
- 渡
- 紫
- 悟
- 锦
- 溜
- 逆
- 惠
- 辉
- 贪
- 圾
- 垃
- 券
- 燃
- 虫
- 悠
- 伪
- 尿
- 懒
- 俊
- 寄
- 歇
- 盒
- 潘
- 储
- 愈
- 脉
- 粗
- 返
- 昌
- 泉
- 蔡
- 愧
- 赤
- 岳
- 婷
- 猎
- 饼
- 肩
- 勾
- 巡
- 竹
- 催
- 陌
- 踩
- 促
- 扭
- 堵
- 酷
- 芳
- 逛
- 陵
- 耽
- 凑
- 寿
- 缝
- 剪
- 郁
- 宅
- 抚
- 筹
- 沿
- 烤
- 奈
- 挨
- 晋
- 崩
- 浮
- 阁
- 彭
- 裂
- 崇
- 眉
- 桑
- 辩
- 漏
- 稀
- 液
- 汪
- 袁
- 掩
- 浑
- 坡
- 晕
- 缠
- 仰
- 挤
- 睁
- 羽
- 岗
- 捡
- 墓
- 综
- 矿
- 妥
- 厕
- 辱
- 惧
- 逗
- 帽
- 寸
- 搁
- 跨
- 渴
- 饰
- 璃
- 琳
- 爽
- 愤
- 饶
- 卧
- 誓
- 滋
- 鉴
- 腐
- 鸭
- 蛇
- 妮
- 莱
- 哟
- 钥
- 甄
- 肠
- 畅
- 慎
- 悬
- 逻
- 胁
- 辰
- 呈
- 棋
- 寨
- 萌
- 覆
- 姚
- 津
- 笨
- 轰
- 乏
- 匙
- 摊
- 陶
- 恼
- 昏
- 抑
- 姿
- 愁
- 誉
- 椅
- 羞
- 澡
- 踢
- 晶
- 萧
- 箭
- 罩
- 宠
- 羡
- 亦
- 祥
- 串
- 昆
- 煮
- 疏
- 纹
- 泄
- 痕
- 喷
- 册
- 跃
- 卢
- 岩
- 跪
- 兽
- 桶
- 飘
- 漠
- 堪
- 哄
- 寂
- 崔
- 腹
- 癌
- 拳
- 驻
- 霍
- 拨
- 诞
- 捐
- 御
- 榜
- 唤
- 荷
- 径
- 署
- 锋
- 玛
- 匆
- 恒
- 吕
- 邮
- 圳
- 黎
- 掏
- 莎
- 寞
- 佐
- 诈
- 牧
- 盐
- 叹
- 尬
- 匹
- 狸
- 膀
- 谨
- 尘
- 驱
- 乳
- 晒
- 宴
- 辜
- 哲
- 铜
- 薪
- 盆
- 割
- 忌
- 旋
- 翼
- 哀
- 咨
- 遵
- 夹
- 侣
- 译
- 胞
- 浅
- 邦
- 俄
- 弗
- 豫
- 甭
- 乃
- 扛
- 杭
- 瓦
- 槽
- 污
- 尴
- 琢
- 枝
- 详
- 柴
- 佑
- 盼
- 抖
- 惩
- 捷
- 葬
- 贡
- 艳
- 塑
- 茫
- 叨
- 浓
- 拐
- 捉
- 憋
- 稿
- 苍
- 葛
- 扑
- 娱
- 赋
- 杆
- 绘
- 聆
- 肌
- 婴
- 摘
- 岂
- 呵
- 冻
- 泳
- 揭
- 坤
- 盈
- 毅
- 撕
- 娇
- 唠
- 宏
- 吊
- 籍
- 楠
- 肃
- 抹
- 玄
- 湘
- 迈
- 酱
- 骄
- 咐
- 扇
- 幽
- 疲
- 邪
- 吞
- 趋
- 尺
- 玻
- 溃
- 诱
- 翠
- 兼
- 辅
- 岭
- 栏
- 柏
- 址
- 寺
- 逢
- 琪
- 慈
- 愣
- 契
- 渠
- 齿
- 薛
- 拟
- 填
- 坛
- 抄
- 痴
- 绳
- 役
- 擅
- 晃
- 斌
- 愉
- 届
- 悦
- 旨
- 砍
- 弥
- 挽
- 肝
- 鸣
- 庙
- 烫
- 聘
- 皆
- 婶
- 舌
- 枉
- 赫
- 蓉
- 瞅
- 阔
- 俱
- 循
- 鸿
- 彪
- 伺
- 堡
- 谦
- 剂
- 洒
- 赴
- 妨
- 磊
- 嘱
- 蝶
- 兆
- 豹
- 绣
- 篮
- 锻
- 陕
- 霉
- 涵
- 疆
- 丸
- 蠢
- 铃
- 浙
- 庞
- 萝
- 泛
- 芝
- 煤
- 甩
- 氛
- 页
- 逸
- 袖
- 携
- 躁
- 夕
- 匠
- 蹈
- 坊
- 雾
- 蹲
- 颠
- 脂
- 塌
- 棵
- 鹰
- 澳
- 哇
- 筋
- 纽
- 脖
- 棉
- 渣
- 寡
- 践
- 侄
- 披
- 魅
- 虹
- 肿
- 胶
- 霞
- 罐
- 晴
- 拓
- 卿
- 耻
- 砖
- 宪
- 歪
- 兜
- 衰
- 捧
- 歹
- 雕
- 穆
- 栋
- 瑶
- 毙
- 衷
- 膜
- 囊
- 莹
- 垫
- 吻
- 嘟
- 舰
- 虾
- 壳
- 穴
- 勉
- 裙
- 旺
- 柯
- 磕
- 贩
- 腻
- 蹦
- 卜
- 茹
- 驴
- 臂
- 删
- 菌
- 妾
- 蜂
- 祭
- 菊
- 咸
- 淑
- 笼
- 涯
- 碧
- 宙
- 骚
- 皓
- 赐
- 晰
- 腔
- 龟
- 泼
- 鹅
- 啪
- 巾
- 炉
- 沾
- 醋
- 澜
- 朴
- 棍
- 伞
- 雀
- 赠
- 妞
- 淋
- 刮
- 汁
- 椒
- 埃
- 嚷
- 盲
- 窃
- 辽
- 贱
- 滩
- 昭
- 贯
- 珊
- 涌
- 辨
- 捞
- 仲
- 拘
- 碑
- 侍
- 剿
- 搅
- 狮
- 藤
- 旭
- 翅
- 滨
- 禀
- 遮
- 瑟
- 斩
- 攒
- 犬
- 挫
- 僧
- 吩
- 渊
- 蒂
- 萍
- 庸
- 蓄
- 鼎
- 咪
- 姬
- 溪
- 郡
- 镖
- 怡
- 杉
- 畏
- 瓷
- 枚
- 煎
- 劣
- 饺
- 妄
- 卓
- 蔽
- 蒸
- 垂
- 嘲
- 慨
- 谊
- 蹭
- 逮
- 锐
- 钉
- 舟
- 沃
- 凝
- 翔
- 颈
- 靖
- 灌
- 膊
- 崖
- 娟
- 胳
- 铭
- 灿
- 亭
- 粒
- 卸
- 咕
- 坎
- 攀
- 婿
- 奢
- 茂
- 趴
- 耿
- 捏
- 怖
- 浴
- 婉
- 煌
- 霖
- 揍
- 昂
- 驰
- 壶
- 械
- 卦
- 粥
- 尹
- 瘾
- 雇
- 翰
- 肆
- 寇
- 曦
- 厢
- 杠
- 屠
- 芒
- 谣
- 沫
- 掘
- 酬
- 讼
- 乾
- 玫
- 瑰
- 逊
- 惦
- 儒
- 肾
- 粹
- 愚
- 渔
- 暑
- 伐
- 潇
- 喘
- 敦
- 翁
- 斥
- 帖
- 纱
- 梳
- 缴
- 茅
- 谭
- 氧
- 遣
- 履
- 刹
- 枕
- 婢
- 徽
- 轿
- 寓
- 咽
- 叉
- 嗓
- 捣
- 裹
- 览
- 拯
- 疚
- 蜀
- 丛
- 框
- 斑
- 宵
- 郝
- 蛙
- 熙
- 祁
- 哑
- 葱
- 唇
- 韦
- 媛
- 魄
- 锤
- 绵
- 炫
- 吨
- 稻
- 碌
- 刊
- 漆
- 搏
- 讶
- 痒
- 枫
- 妒
- 冥
- 郊
- 爵
- 逝
- 栽
- 叠
- 蚁
- 裕
- 帕
- 剥
- 谐
- 巫
- 颇
- 娥
- 廊
- 蕾
- 丘
- 丞
- 葡
- 坠
- 鸦
- 糗
- 虐
- 唬
- 屎
- 顽
- 巷
- 硅
- 罕
- 殖
- 嘿
- 韵
- 歧
- 垮
- 淮
- 馈
- 昊
- 宰
- 钦
- 霜
- 兑
- 萄
- 塘
- 胀
- 樱
- 枯
- 咳
- 窑
- 募
- 缸
- 昧
- 仑
- 恕
- 氓
- 叮
- 吼
- 坟
- 轴
- 贞
- 赎
- 帆
- 嫩
- 蚂
- 僵
- 颖
- 噜
- 咒
- 琐
- 勃
- 芯
- 绸
- 哼
- 仨
- 挪
- 狡
- 禅
- 粘
- 雯
- 扒
- 恳
- 蔬
- 匈
- 钓
- 桐
- 菇
- 哒
- 稚
- 膏
- 纲
- 狄
- 硕
- 廉
- 衙
- 艘
- 廖
- 腊
- 蟹
- 邱
- 缉
- 曝
- 桩
- 啤
- 嫉
- 棚
- 矮
- 汰
- 衍
- 拽
- 削
- 彤
- 斜
- 揉
- 樊
- 馨
- 钩
- 浦
- 肢
- 敷
- 喻
- 鞭
- 瞪
- 耕
- 掐
- 屡
- 榴
- 勋
- 泊
- 竭
- 鹤
- 溢
- 淳
- 倩
- 驳
- 抠
- 捅
- 筒
- 窄
- 鄙
- 嗦
- 袍
- 劈
- 炖
- 裸
- 贬
- 敞
- 嘎
- 淹
- 耶
- 秩
- 舱
- 厦
- 叙
- 孽
- 筷
- 浇
- 饥
- 噩
- 蚊
- 兮
- 皱
- 侃
- 辟
- 弊
- 袜
- 吾
- 俘
- 芸
- 夷
- 芦
- 囚
- 倡
- 琦
- 哨
- 巢
- 烛
- 帐
- 燥
- 讽
- 俞
- 馅
- 柿
- 墅
- 妍
- 瘤
- 沦
- 衬
- 瑜
- 蒜
- 蛛
- 窟
- 勿
- 沛
- 磁
- 狭
- 栈
- 懵
- 酿
- 戈
- 邵
- 龚
- 衫
- 勺
- 哗
- 叽
- 畜
- 爪
- 惫
- 颁
- 浸
- 摧
- 勘
- 惕
- 蔓
- 馒
- 挠
- 陀
- 豁
- 帘
- 淀
- 藩
- 蜡
- 凳
- 蘑
- 琼
- 棺
- 蝴
- 骆
- 掰
- 枣
- 遂
- 飙
- 咧
- 掀
- 梨
- 杏
- 嗑
- 棠
- 绽
- 捆
- 舆
- 肇
- 葩
- 呦
- 膝
- 鹊
- 揣
- 瓣
- 靓
- 卵
- 鲍
- 炭
- 戳
- 颤
- 禄
- 菩
- 崛
- 驸
- 佣
- 眨
- 聂
- 乙
- 嘻
- 拧
- 喵
- 佟
- 靳
- 阎
- 拢
- 厘
- 凰
- 疤
- 螺
- 淇
- 涩
- 拎
- 嗨
- 魁
- 薯
- 歼
- 沪
- 筛
- 谍
- 揪
- 刁
- 秃
- 谜
- 撇
- 肪
- 绊
- 逞
- 滥
- 寝
- 麟
- 奕
- 侮
- 喉
- 柄
- 荆
- 撼
- 窦
- 姗
- 乞
- 艇
- 竖
- 剖
- 嗽
- 捂
- 腕
- 鸽
- 刃
- 弓
- 辙
- 粤
- 泣
- 梗
- 茄
- 茜
- 驼
- 冈
- 倔
- 啃
- 蹄
- 唧
- 祈
- 腺
- 焰
- 睿
- 崽
- A
- 苛
- 窍
- 凿
- 倭
- 骤
- 槛
- 碳
- 诏
- 芽
- 浆
- 隶
- 搂
- 睦
- 彬
- 岔
- 诀
- 嚼
- 掺
- 殷
- 吁
- 啰
- 侈
- 亩
- 纤
- 倦
- 揽
- 媚
- 潭
- 莽
- 赃
- 睹
- 脊
- 逍
- 淼
- 沸
- 峡
- 仆
- 眷
- 屯
- 璐
- 雁
- 澄
- 渗
- 咔
- 啸
- 怂
- 娄
- 惶
- 恍
- 锡
- 秉
- 猾
- 挟
- 舔
- 弦
- 阱
- 俭
- 嚣
- 搓
- 懈
- 诡
- 隙
- 苟
- 倘
- 瘫
- 扁
- 鑫
- 撩
- 蓬
- 铲
- 峥
- 巅
- 葫
- 膳
- 狙
- 晏
- 祠
- 峻
- 尉
- 毯
- 沧
- 熏
- 咯
- 株
- 沐
- 奎
- 锣
- 霄
- 彦
- 叭
- 臻
- 昔
- 灶
- 傍
- 腥
- 屑
- 禾
- 彰
- 冉
- 矫
- 滞
- 瘩
- 匀
- 椎
- 槐
- 岚
- 跷
- 剔
- 倪
- 盏
- 泌
- 灸
- 隧
- 函
- 壤
- 剃
- 蹊
- 葵
- 拌
- 琅
- 炳
- 跋
- 瑾
- 哩
- 蔷
- 鳌
- 莺
- 诵
- 疙
- 吱
- 蓓
- 绎
- 匿
- 铮
- 怼
- 踹
- 嗅
- 焚
- 躯
- 蝇
- 橘
- 祟
- 辖
- 砂
- 韧
- 粪
- 诬
- 擒
- 黏
- 衔
- 溺
- 蜘
- 篷
- 贿
- 闫
- 焕
- 邢
- 兹
- 窖
- 旬
- 铸
- 咚
- 惭
- 佬
- 裴
- 裳
- 犀
- 弘
- 莓
- 钏
- 鄂
- 陋
- 伽
- 鞠
- 氪
- 垒
- 窜
- 橙
- 讳
- 甥
- 淫
- 拱
- 袱
- 坨
- 暧
- 渺
- 蕉
- 晗
- 茬
- 盔
- 妓
- 蚕
- 僻
- 朽
- 呛
- 挚
- 擎
- 绅
- 喇
- 鳄
- 巩
- 蜗
- 遛
- 俯
- 汹
- 猩
- 奠
- 钙
- 悍
- 躬
- 菱
- 翘
- 琉
- 虏
- 凄
- 稼
- 炕
- 皂
- 漱
- 斋
- 撂
- 敛
- 阮
- 芭
- 阀
- 缚
- 懦
- 亨
- 螃
- 侥
- 膨
- 筝
- 惟
- 黛
- 眯
- 茨
- 怠
- 辐
- 捎
- 殴
- 桓
- 瞄
- 冀
- 雍
- 霾
- 酵
- 檬
- 哺
- 裔
- 兢
- 麒
- 烹
- 绒
- 丐
- 娅
- 钞
- 垄
- 笛
- 赣
- 蕊
- 暮
- 噪
- 沮
- 肋
- 庇
- 橡
- 摁
- 痘
- 棘
- 拂
- 绷
- 刨
- 晾
- 蹬
- 鸥
- 璇
- 掠
- 瘟
- 俐
- 糙
- 骏
- 牡
- 撵
- 嘘
- 沥
- 庶
- 赁
- 喧
- 涡
- 瞳
- 迭
- 肘
- 颂
- 珑
- 觅
- 埔
- G
- 跤
- 朔
- 詹
- 梭
- 暇
- 惺
- 甸
- 怯
- 聋
- 赦
- 屉
- 闸
- 坝
- 吟
- 凸
- 拴
- 堤
- 矣
- 斧
- 呸
- 啼
- 韬
- 钧
- 坞
- 纺
- 氢
- 嵩
- 镯
- 髓
- 檐
- 涕
- 剁
- 稽
- 烨
- 钮
- 闽
- 仕
- 驯
- 吭
- 漓
- 眸
- 鞅
- 枢
- 煞
- 昕
- 畔
- 疹
- 矶
- 呱
- 熄
- 吏
- 泻
- 拙
- 蛤
- 禽
- 甫
- 厮
- 乍
- 蝉
- 撬
- 嘀
- 衅
- 鲨
- 萱
- 霹
- 旷
- 辫
- 坷
- 眶
- 蟆
- 呜
- 猬
- 嬷
- 萎
- 靶
- 雳
- 煲
- 溯
- 蚀
- 狈
- 滤
- 恙
- 瑛
- 栓
- 嫣
- 碟
- 祷
- 驿
- 犊
- 灼
- 哆
- 宛
- 榨
- 寥
- 翟
- 栗
- 滔
- 馋
- 杖
- 茉
- 饲
- 庐
- 隋
- 旱
- 崎
- 颅
- 焉
- 墩
- 篱
- 晟
- 扳
- 咎
- 竿
- 僚
- 溶
- 俏
- 霆
- 堕
- 冕
- 叩
- 绰
- 洽
- 襄
- 蛊
- 缅
- 侨
- 伶
- 蕴
- 酥
- 坂
- 拇
- 庚
- 卒
- 诛
- 禧
- 瓢
- 锯
- 扉
- 饷
- 诅
- 烘
- 浏
- 痰
- 榆
- 窥
- 鲸
- 捋
- 戎
- 笋
- 璋
- 诫
- 珈
- 癫
- 囤
- 厥
- 癖
- 翩
- 芹
- 匣
- 噬
- 栖
- 蝎
- 锄
- 玺
- 疮
- 缕
- 猥
- 槿
- 蔑
- 汝
- 珂
- 撮
- 坪
- 蒲
- 倚
- 嗷
- 撰
- 荧
- 芙
- 豚
- 筱
- 敖
- 孵
- 猝
- D
- 弈
- 徊
- 辗
- 赘
- 徘
- 烙
- 娲
- 嚎
- 迢
- 绥
- 羁
- 屌
- 铅
- 澎
- S
- 嬛
- 晦
- 煽
- 逾
- 饵
- 虞
- 筐
- 哧
- 抒
- 醇
- 祀
- 瑕
- 岐
- 潼
- 惚
- C
- 苑
- 靡
- 菠
- 赡
- 惰
- 梓
- 铛
- 澈
- 莞
- 呕
- 驭
- 邝
- 砰
- 轼
- 窒
- 慷
- 绞
- 絮
- 虔
- 惮
- 柬
- 嗡
- 拣
- 羲
- 蹋
- 隘
- 帜
- 卤
- 雌
- 唾
- 邹
- 俑
- 碾
- 婪
- 咏
- 粟
- 崭
- 钝
- 彝
- 陡
- 谛
- 秤
- 磅
- 淌
- 炊
- 鲤
- 羹
- 殉
- 曰
- 萤
- 阐
- 鬟
- 拭
- T
- 沁
- 滇
- 梧
- 烁
- 瞻
- 淤
- 凹
- 撸
- 棕
- 腌
- 缪
- 祺
- 痊
- 忑
- 柠
- 矜
- 忐
- 讹
- 瀚
- 尧
- 昼
- 芊
- 憨
- 鳞
- 匮
- 鸳
- 鸯
- 湃
- 屿
- 馍
- 沽
- 栾
- 蝠
- 窘
- 绛
- 巍
- 悯
- 焊
- 谴
- 浊
- 娴
- 畴
- 湛
- 螂
- 韭
- 哮
- 拷
- 攥
- 凛
- 颓
- 恺
- 蝙
- 襟
- 粑
- 洼
- 笃
- 渝
- 骁
- 殃
- 酌
- 乒
- 臊
- 疵
- 诧
- 谬
- 锈
- 袄
- 膛
- 瘸
- 嫖
- 梢
- 沼
- 棱
- 嚓
- 耸
- 喳
- 舵
- 橱
- 涮
- 檀
- 瞩
- 腑
- 岑
- 痪
- 墟
- 蔚
- 捍
- 徙
- 棣
- 猖
- 掷
- 恬
- 嫦
- 噔
- 饪
- 掂
- 恤
- 叱
- 芷
- 弩
- 楷
- 镶
- 茧
- 诠
- 咙
- 匡
- 擂
- 亵
- 杞
- 乓
- 渤
- 藉
- 憔
- 渭
- 禹
- 睐
- 趾
- 抉
- 悴
- 忒
- 茸
- 纬
- 懊
- 浚
- 溅
- 遏
- 琛
- 靴
- 戮
- 翎
- 谕
- 濒
- 锵
- 嬉
- 籽
- 殆
- 叼
- 苔
- 灏
- 嗖
- 俪
- 亢
- 冶
- 嗜
- 磋
- 汀
- 讪
- 萃
- 菁
- 镑
- 紊
- 脯
- 缆
- 哉
- 赂
- 婊
- B
- 蕃
- 迄
- 蜓
- 舜
- 嚏
- 昱
- 黔
- 犟
- 汐
- 昵
- 嗣
- 唆
- 蛾
- 黯
- 绯
- 瀑
- 憬
- 狩
- 掖
- 崴
- 褪
- 髦
- 酝
- 弧
- 咄
- 吝
- 馄
- 娩
- 窿
- 蜻
- 袒
- 玮
- 阙
- 篡
- 邯
- 朦
- 邑
- 喃
- 粽
- 捶
- 嫔
- 钗
- 穗
- 骼
- 胭
- 寐
- 噎
- M
- 碱
- 荤
- 笙
- 矢
- 芥
- 廓
- 扼
- 厄
- 毋
- 糯
- 惋
- 纶
- 碜
- 胧
- 懿
- 偃
- 沏
- 痹
- 慑
- 鹦
- 娠
- 铐
- 绢
- 傀
- 孜
- 饨
- 儡
- 孰
- 焱
- 峭
- 伎
- 幌
- 椰
- 譬
- 藕
- 坍
- 铝
- 鞍
- 蘸
- 貂
- 猿
- 炙
- 琊
- 峙
- 硝
- 幂
- 钰
- 眩
- 亥
- 簇
- 鹉
- 睫
- 斟
- 簧
- 颐
- 薰
- 癞
- 祛
- 燎
- 缎
- 簸
- 咣
- 绚
- 簿
- 邋
- 嵌
- 肮
- 稷
- 辍
- 闵
- 枸
- 撅
- 曙
- 苇
- K
- 悼
- 汶
- 匕
- 皖
- 腮
- 琶
- 汲
- 鼹
- 礁
- 颊
- 怔
- 汕
- 喀
- 砌
- 釜
- 畸
- 鹃
- 峨
- 奄
- 骡
- 斐
- 芈
- 莘
- 蟑
- 荔
- 缇
- 犒
- 宓
- 汾
- 沌
- 宦
- 憧
- 咤
- 吆
- 攘
- 漩
- 梵
- 阂
- 吒
- 芜
- 缔
- 秧
- 翊
- 晌
- 剐
- 蜕
- 芋
- 彷
- 牟
- 诲
- 臀
- 徨
- Q
- 杵
- 荫
- 榄
- 蹿
- 豌
- 迂
- 琵
- 拗
- 帷
- 楞
- 嘶
- 橄
- 胺
- 圭
- 砚
- 藻
- 凋
- 啄
- 褒
- 嗝
- 殡
- 嫡
- 恃
- 濡
- 缜
- 孺
- 泸
- 妊
- 衩
- 驹
- 榻
- 腆
- 鹂
- 箍
- 璧
- 熔
- 悚
- 遢
- 弛
- 诋
- 羚
- 鹭
- 嘚
- 骸
- 瘪
- 铠
- 瞿
- 屹
- 邸
- 痨
- 辘
- 浒
- 忏
- 钊
- 潦
- 怅
- 肴
- 蚯
- 胚
- 茵
- 蚓
- 戬
- 瘀
- 翡
- 恪
- 卉
- 蝌
- 雏
- 祯
- 谏
- 蚪
- 钵
- 馊
- 嗒
- 犁
- 寅
- V
- 锥
- 娼
- 晖
- 啬
- 纣
- 淆
- 丙
- 夯
- 竣
- 褚
- 褥
- 轧
- 氨
- 褂
- 钳
- 轲
- 竺
- 疡
- 淞
- 胤
- 摹
- 鳅
- 珀
- 偕
- 匾
- 觑
- 扈
- 傣
- 绫
- 枷
- 阑
- 柚
- 烊
- 怦
- 腼
- 珺
- 缀
- 裘
- 碉
- 峪
- 俸
- 羯
- 姊
- 疟
- 砺
- 盎
- 嘣
- 釉
- 溥
- 熠
- 垢
- 摞
- 哽
- 槟
- 囧
- 胰
- 遁
- 痞
- 熹
- 忡
- 稠
- 顷
- 瑚
- 卯
- 渎
- 炅
- 褶
- 烽
- 瞑
- 嘈
- 硫
- 壹
- 悖
- 酪
- 跺
- 阜
- 帛
- 漪
- 蝗
- 迦
- 蟒
- 咀
- 谤
- 睬
- 辕
- 绮
- 搀
- 裆
- 鳖
- 囡
- 羔
- 痣
- 滕
- 佘
- 樟
- 韶
- 霓
- 劾
- 赈
- 唏
- 闰
- 脐
- 沓
- 瓮
- 篓
- 笠
- 暄
- 涅
- 诽
- 洱
- 栅
- 蚱
- 囔
- 攸
- 酣
- 阪
- 榕
- 骇
- 婧
- 陨
- 憎
- 沂
- 磷
- 壕
- 醺
- 惬
- 璀
- 璨
- 喋
- P
- 炽
- 瘁
- 羿
- 褐
- 簪
- 冽
- 驮
- 芮
- 辄
- 咆
- 渍
- 觐
- 炷
- 蛰
- 驷
- 帚
- 蜷
- O
- X
- 邂
- 逅
- 缭
- 秽
- 琰
- 龌
- 龊
- 俨
- 涟
- 噼
- 掇
- 哔
- 炬
- 佯
- 粱
- 霁
- 鱿
- 夭
- 擀
- 陇
- 瞥
- 壑
- 盹
- 馁
- 蚌
- 焖
- 蛟
- 囱
- 蚝
- 抿
- 脓
- 蒿
- 飓
- 渲
- 宸
- 酗
- 荻
- 缥
- 弑
- 偎
- 宕
- 耘
- 瞌
- 瘴
- 溉
- 涝
- 咿
- 垛
- 垦
- 缈
- 苞
- 惆
- 汛
- 鹑
- 町
- 抡
- 慵
- 浣
- 耙
- 砥
- 噱
- 孬
- 札
- 弼
- 酋
- 镳
- 萦
- 泾
- 挞
- 钾
- 讷
- 圃
- 舶
- 穹
- 戾
- 汴
- 锂
- 昀
- 镀
- 眺
- 捺
- 猕
- 阚
- 骋
- 悸
- 蜚
- 咩
- 讥
- 篆
- 鸠
- 哐
- 锚
- 幢
- 翱
- 螳
- 徇
- 踞
- 蔗
- 蔼
- 漉
- 衲
- N
- 漳
- 枭
- 漾
- 歆
- 烬
- 曳
- 岌
- 孚
- 戛
- 呲
- 箫
- 娓
- 桨
- 涓
- 獭
- 芃
- 摒
- 戍
- 踝
- 轱
- 沱
- 锢
- 堰
- 抨
- 昙
- 鹌
- 蔻
- 迸
- 泯
- 龈
- 痔
- 骛
- 淄
- 泵
- 烯
- 蔫
- F
- 胥
- 忱
- 纫
- 搪
- 茎
- 暨
- 泞
- 踵
- 璞
- 佗
- 荃
- 鬓
- 蚣
- 罔
- 臆
- 贻
- 橇
- 麓
- 槌
- 琥
- I
- 纥
- 薅
- 樵
- 苓
- 熨
- 钨
- 骞
- 诣
- 涤
- 踊
- 醛
- 碴
- 蹴
- 缤
- 赊
- 岖
- 戊
- 禺
- 坯
- 戟
- 楂
- 隅
- 酶
- 邃
- 蛀
- 皎
- 炯
- 垣
- 锹
- 镰
- 夙
- 甬
- 叵
- 茁
- 珞
- 妲
- 涸
- 兀
- 嘤
- 谙
- 噗
- 榔
- 稣
- 剽
- 奚
- 啕
- 袅
- 讧
- 钠
- 怄
- 晤
- 肛
- 氰
- 迥
- 唰
- 诩
- 籁
- 砒
- 谩
- 诟
- 斓
- 泷
- 幡
- 爻
- 痫
- 眈
- 漕
- 惘
- 挎
- 噶
- 喱
- 氯
- U
- 跆
- 嗤
- 锏
- 睽
- 缮
- 蟋
- 蠕
- 扪
- 狞
- 飒
- 吮
- 弋
- 奘
- 蟠
- 梆
- 拈
- 帧
- 蟀
- 胯
- 掳
- 蝈
- 帼
- 瞰
- 嵇
- 阉
- 篝
- 笆
- 亘
- L
- 喔
- 愕
- 谚
- 轶
- 岱
- 丕
- 婕
- 羌
- 毡
- 呻
- 鼾
- 蜥
- 偌
- 庵
- 敝
- 蛐
- 麝
- 鞘
- 拮
- 涣
- 葆
- 雹
- 踌
- 蜈
- 馥
- 跻
- 狰
- 桀
- 毗
- 皿
- 缨
- 磐
- 啾
- 牒
- 缰
- 躇
- 踮
- 糠
- 嗲
- 刽
- 咫
- 殇
- 瀛
- 胱
- 炀
- 虱
- 砾
- 獒
- 涎
- 袤
- 鄱
- 瓯
- 锭
- 塾
- 蹉
- 珏
- 豺
- 锌
- 蜿
- 牦
- 瓒
- 莆
- 蜴
- 氮
- 跎
- 咛
- 骜
- 郸
- 搐
- 堑
- 涞
- 寰
- 跛
- 鸵
- 毂
- 妩
- 铤
- 薏
- 烩
- 遐
- 煦
- 仃
- 髅
- 酮
- 榷
- 腋
- 珩
- 臃
- 愫
- 蜒
- 荼
- 侬
- 淬
- 婵
- 偻
- 焯
- 骊
- 恻
- 濮
- 泱
- 庖
- 惴
- 鲫
- 硌
- 肓
- 芪
- 礴
- 磺
- 腱
- 冢
- 谪
- 骷
- 哏
- 腩
- 蓦
- 焙
- 桢
- 阖
- 睾
- 疱
- 郴
- 铿
- 铡
- 祉
- 跄
- 桦
- 椭
- 拄
- 皙
- 膈
- 裱
- 髋
- 伢
- 罹
- 鳍
- 赝
- 嬴
- 痤
- 藿
- 镐
- 铎
- 瘠
- 簌
- 杳
- 铢
- 阡
- 忤
- 舀
- 悻
- 媲
- 茗
- 湍
- 舫
- 瘙
- 瞟
- 擞
- 荀
- 刍
- J
- 潍
- 莴
- 斛
- 郦
- 栩
- 绾
- 蕙
- 黜
- 湄
- 藓
- 躏
- 锱
- 捻
- 佼
- 砝
- E
- 罡
- 忻
- 鹜
- 滟
- 傥
- 蛳
- W
- 铀
- 魇
- 觎
- 蹂
- 佞
- 诃
- 灞
- 镣
- 痱
- 侏
- 峦
- 榛
- 饽
- 龋
- 嗔
- 芍
- 椿
- 璎
- 渥
- 蟾
- 骰
- 吠
- 挛
- 倜
- 鳝
- 糜
- 噢
- 黝
- 藐
- 绡
- 掣
- 鳗
- 璜
- 犷
- 痉
- 膺
- 罄
- 阄
- 纨
- 纭
- 彗
- 嵘
- 埠
- 潢
- 桔
- 耷
- 逵
- 诓
- 怵
- 蚤
- 苯
- 邈
- 谑
- 颌
- 珐
- 踱
- 髻
- 倏
- 啷
- 篑
- 冗
- 蹶
- 荥
- 涧
- 镂
- 踉
- 呷
- 衢
- 荟
- 箴
- 桧
- 恿
- 坳
- 瑙
- 珅
- 莅
- 膘
- 宥
- 氟
- 秆
- 诙
- 蹑
- 茴
- 翳
- 渚
- H
- 唁
- 诿
- 窈
- 窕
- 膻
- 荨
- 蛔
- 筵
- 钛
- 獾
- 琏
- 箩
- 栀
- 隼
- 煸
- 罂
- 蛎
- 咂
- 谗
- 颦
- 佝
- 苣
- 搡
- 仄
- 垠
- 濂
- 泗
- 亟
- 蔺
- 蛆
- 霏
- 榈
- 裟
- 瑁
- 酚
- 蝼
- 怆
- 犄
- 沣
- 揖
- 斡
- 刎
- 鲟
- 峒
- 瞭
- 晁
- 袈
- 蓟
- 镁
- 骥
- 掸
- 玳
- 娑
- 馀
- 跚
- 槃
- 缄
- 猢
- 粕
- 隍
- 佃
- 獗
- 唢
- 菏
- 酰
- 腚
- 笈
- 哙
- 孢
- 飕
- 嘹
- 茱
- 蹒
- 殓
- 柩
- 谀
- 姣
- 戌
- 柑
- 粼
- 淅
- 啧
- 盅
- 鼬
- 啜
- 绉
- 咻
- 锲
- 铆
- Y
- 螨
- 茯
- 憩
- 臼
- 谄
- 讴
- 濠
- 雎
- 噻
- 淦
- 懋
- 尕
- 氦
- 褛
- 颉
- 喆
- 铬
- 褴
- 燮
- 銮
- 侗
- 蹙
- 煜
- 邺
- 锃
- 麋
- 矗
- 娆
- 匐
- 噌
- 潸
- 碘
- 浔
- 檄
- 皈
- 铂
- 遨
- 炜
- 曜
- 饴
- 舷
- 胫
- 叟
- 祎
- 沅
- 潺
- 楣
- 埂
- 瞠
- 幔
- 稞
- 抻
- 匝
- 幄
- 殒
- 瑭
- 袂
- 囫
- 瓴
- 攫
- 鲈
- 箔
- 哝
- 馗
- 蜍
- 痧
- 脘
- 姘
- 苒
- 缢
- 觞
- 蛹
- 饬
- 胄
- 筏
- 鸾
- 儆
- 痿
- 矬
- 酊
- 纾
- 铖
- 荏
- 掬
- 膑
- 贮
- 觊
- 囵
- 泓
- 搔
- 汞
- 蚩
- 婀
- 谧
- 恣
- 霎
- 饕
- 赅
- 鲶
- 梏
- 獠
- 俶
- 龛
- 桅
- 鹄
- 旌
- 鲲
- 姒
- 蠡
- 繇
- 祜
- 诨
- 汩
- 觥
- 孀
- R
- 谥
- 蕨
- 祐
- 榭
- 皑
- 纂
- 獐
- 覃
- 痂
- 孑
- 砧
- 圩
- 桎
- 啵
- 葚
- 嗫
- 浃
- 荠
- 阈
- 遴
- 枇
- 狒
- 秸
- 筠
- 硒
- 卞
- 玷
- 杈
- 狲
- 忿
- 俎
- 拚
- 颍
- 睢
- 颧
- 滦
- 霭
- 雉
- 毽
- 蓑
- 歙
- 鳃
- 鹬
- 墉
- 楔
- 舐
- 绔
- 弭
- 馏
- 挝
- 奂
- 嘭
- 忪
- 箕
- 诌
- 谒
- 颚
- 滂
- 醍
- 洵
- 鹫
- 虢
- 苋
- 玥
- 臾
- 蹩
- Z
- 杷
- 痍
- 酉
- 疸
- 鄢
- 垩
- 烷
- 湮
- 钎
- 樽
- 旮
- 葭
- 邬
- 缱
- 糍
- 亳
- 咦
- 苷
- 伉
- 隽
- 伫
- 聒
- 匍
- 飚
- 桠
- 睑
- 脍
- 焘
- 谶
- 赳
- 萸
- 讣
- 疽
- 臧
- 巽
- 毓
- 鸢
- 纰
- 啐
- 噙
- 舛
- 敕
- 醐
- 痢
- 嚯
- 婺
- 勖
- 岷
- 溧
- 骅
- 犸
- 麾
- 嗟
- 诘
- 懑
- 貔
- 貅
- 啉
- 崂
- 鸩
- 镭
- 绻
- 逑
- 煨
- 褓
- 姝
- 藜
- 溟
- 儋
- 谡
- 欸
- 郢
- 荚
- 疝
- 遽
- 陂
- 饯
- 孪
- 巳
- 荞
- 泔
- 岿
- 谆
- 镍
- 洙
- 佻
- 盂
- 睨
- 铄
- 餮
- 酯
- 癣
- 浜
- 酩
- 焗
- 挲
- 鬃
- 鲠
- 仞
- 诰
- 谔
- 胛
- 萼
- 涿
- 莠
- 珲
- 旯
- 蜢
- 黍
- 肽
- 涪
- 髡
- 氙
- 陉
- 鬶
- 侩
- 糅
- 氤
- 芾
- 砷
- 鳕
- 钣
- 锒
- 闱
- 铵
- 镊
- 玑
- 砀
- 癜
- 颔
- 楹
- 螈
- 醚
- 琮
- 铩
- 笄
- 瓤
- 裨
- 潋
- 悌
- 聿
- 祢
- 郜
- 汨
- 棂
- 氲
- 嶙
- 聩
- 菅
- 腧
- 妯
- 龇
- 谲
- 耄
- 耋
- 囿
- 黢
- 揄
- 鲇
- 仝
- 個
- 忖
- 峋
- 揶
- 迩
- 诳
- 踽
- 骐
- 趸
- 颞
- 撺
- 辇
- 猷
- 铉
- 羸
- 徜
- 徉
- 襁
- 镌
- 孱
- 钒
- 铣
- 呤
- 遑
- 俾
- 皋
- 笕
- 笺
- 趔
- 趄
- 辋
- 鄞
- 殚
- 岫
- 跬
- 嘌
- 苻
- 绶
- 郅
- 瑄
- 萋
- 蘼
- 湎
- 砣
- 钜
- 捭
- 喹
- 恹
- 娌
- 螯
- 锰
- 祚
- 阆
- 矾
- 厩
- 龅
- 炝
- 黠
- 妁
- 濑
- 鞑
- 柒
- 滁
- 淖
- 鸬
- 鬣
- 晔
- 恸
- 赓
- 侉
- 溏
- 還
- 珮
- 鸨
- 嚅
- 笤
- 靥
- 啮
- 滓
- 俚
- 唳
- 苜
- 蓿
- 鹚
- 耦
- 莜
- 麸
- 粳
- 綦
- 盱
- 噤
- 遒
- 玟
- 魍
- 魉
- 旖
- 栉
- 锷
- 醴
- 泮
- 恁
- 甾
- 琬
- 丶
- 擤
- 桉
- 踟
- 誊
- 谟
- 澧
- 玖
- 畿
- 顼
- 兖
- 贰
- 茏
- 愎
- 豇
- 旎
- 蹰
- 蜃
- 屐
- 芡
- 鎏
- 癸
- 卅
- 枥
- 陟
- 琨
- 粝
- 掮
- 妪
- 姹
- 鏖
- 捯
- 钴
- 竽
- 恽
- 佰
- 胗
- 崧
- 磴
- 绺
- 鳏
- 槁
- 啖
- 矍
- 徕
- 忾
- 烃
- 喏
- 囹
- 圄
- 砭
- 邕
- 犍
- 鸮
- 剜
- 琚
- 瘢
- 魑
- 眦
- 锉
- 柘
- 痦
- 苕
- 牯
- 湟
- 厝
- 濛
- 赭
- 馐
- 蜇
- 嶂
- 贲
- 靼
- 臬
- 陲
- 潞
- 芩
- 腓
- 锨
- 寮
- 於
- 洇
- 愠
- 疖
- 鹧
- 鸪
- 茕
- 戕
- 壬
- 庾
- 莒
- 鹈
- 鹕
- 蠹
- 勐
- 疥
- 辎
- 耒
- 嗬
- 沔
- 睥
- 邙
- 篾
- 揩
- 肱
- 胍
- 磬
- 菟
- 豢
- 垓
- 唑
- 剌
- 阗
- 汜
- 佤
- 璟
- 麽
- 鬻
- 怏
- 蕤
- 茭
- 睚
- 淙
- 牍
- 榫
- 濯
- 稹
- 媾
- 悱
- 骶
- 蛭
- 鞣
- 椁
- 槊
- 擢
- 滢
- 佚
- 菡
- 沭
- 扦
- 镆
- 闾
- 缛
- 窠
- 疣
- 骠
- 俅
- 喙
- 蹼
- 硼
- 黩
- 腴
- 醮
- 邛
- 漯
- 豉
- 昶
- 刿
- 凇
- 鲅
- 舸
- 邳
- 俟
- 铰
- 翌
- 鳟
- 葳
- 寤
- 碣
- 秭
- 揠
- 熵
- 燧
- 靛
- 嵊
- 窨
- 鹗
- 芎
- 颢
- 佶
- 骢
- 圜
- 岘
- 燊
- 壅
- 畲
- 萘
- 煊
- 粲
- 倌
- 嗳
- 橹
- 椽
- 夔
- 鲑
- 赧
- 殄
- 沆
- 瀣
- 廪
- 舢
- 狍
- 挈
- 鹳
- 蚜
- 彧
- 羟
- 盥
- 镛
- 痈
- 蜊
- 皲
- 篦
- 喑
- 鲢
- 邡
- 蕲
- 僳
- 秣
- 蛉
- 讫
- 祗
- 鹩
- 撷
- 狎
- 郓
- 镕
- 榉
- 鲷
- 娣
- 淝
- 桷
- 镉
- 郫
- 髌
- 醪
- 僭
- 伧
- 嵬
- 苁
- 鹘
- 徭
- 歃
- 阕
- 鸱
- 貉
- 闳
- 坻
- 缙
- 媪
- 莨
- 菪
- 绦
- 恫
- 崆
- 喟
- 葺
- 逶
- 迤
- 骈
- 馔
- 苎
- 溘
- 垭
- 樯
- 诤
- 魃
- 搽
- 绀
- 蚴
- 澶
- 蒺
- 罘
- 眙
- 怍
- 來
- 荪
- 贶
- 亓
- 唻
- 畈
- 谌
- 芨
- 鲀
- 窸
- 窣
- 荜
- 楫
- 衮
- 趵
- 勰
- 髯
- 椴
- 缶
- 荸
- 秫
- 菖
- 甙
- 翦
- 椟
- 峤
- 掼
- 謇
- 洄
- 鄯
- 妗
- 浐
- 颀
- 箸
- 畦
- 痼
- 橛
- 鲛
- 蝾
- 愍
- 蒹
- 嘁
- 韪
- 劭
- 垅
- 暹
- 僮
- 稗
- 筚
- 煅
- 嬅
- 蜉
- 骝
- 碚
- 冼
- 吶
- 洹
- 郧
- 炴
- 绌
- 泠
- 呓
- 簋
- 溴
- 篁
- 仟
- 锟
- 羧
- 鹞
- 嘬
- 渌
- 笸
- 霰
- 稔
- 钡
- 齁
- 胪
- 衾
- 尻
- 洮
- 蘅
- 鲳
- 殂
- 腭
- 涔
- 蝣
- 孳
- 澍
- 钼
- 蒡
- 枳
- 渑
- 茼
- 馕
- 埙
- 珣
- 菘
- 邰
- 樾
- 铱
- 鳐
- 唔
- 篙
- 箜
- 篌
- 耆
- 啫
- 枞
- 杼
- 嵋
- 舂
- 娉
- 铨
- 崃
- 笳
- 邗
- 逡
- 僖
- 泫
- 疴
- 捱
- 醅
- 堇
- 肄
- 荇
- 虬
- 谯
- 酞
- 桡
- 艮
- 膦
- 艹
- 啻
- 滏
- 茆
- 圪
- 磡
- 麼
- 闼
- 郯
- 仡
- 氐
- 贽
- 俦
- 蓖
- 跹
- 帏
- 氅
- 趿
- 暝
- 缟
- 棹
- 滹
- 毖
- 蝰
- 虻
- 缫
- 诮
- 闩
- ○
- 潴
- 樨
- 瘘
- 襦
- 妤
- 郾
- 衿
- 鸷
- 旰
- 镢
- 傈
- 倨
- 笏
- 蒽
- 醌
- 驽
- 浠
- 涠
- 蓁
- 柞
- 钺
- 蜮
- 诂
- 徵
- 锆
- 椋
- 叻
- 廿
- 藁
- 乜
- 摈
- 這
- 茌
- 辊
- 岬
- 郇
- 杓
- 轳
- 酎
- 蟥
- 時
- 镒
- 蚬
- 澹
- 赟
- 後
- 怿
- 箐
- 囍
- 揆
- 蹁
- 鬄
- 苫
- 蕖
- 卺
- 辔
- 偈
- 俳
- 吲
- 哚
- 瘆
- 蕞
- 笞
- 氩
- 嫘
- 墁
- 帔
- 褡
- 裢
- 乩
- 褊
- 颏
- 喒
- 錾
- 皌
- 戗
- 唪
- 啭
- 伥
- 茔
- 斫
- 齉
- 仵
- 赉
- 吡
- 啶
- 蹇
- 螅
- 汊
- 湓
- 凫
- 珙
- 腈
- 洌
- Ω
- 憷
- 跶
- 抔
- 濞
- 崤
- 殍
- 浥
- 铳
- 酽
- 馑
- 髂
- 隗
- 韫
- 晷
- 诒
- 埭
- 鹪
- 蕻
- 昃
- 瓠
- 萁
- 癔
- 怩
- 疳
- 跖
- 疔
- 簟
- 汆
- 疠
- 卟
- 墒
- 穰
- 铍
- 珥
- 钤
- 隻
- 樓
- 墎
- 鳜
- 沒
- 岀
- 杪
- 単
- 鲧
- 呋
- 彀
- 祇
- 豸
- 胴
- 唷
- 丨
- 燚
- 麴
- 觇
- 缑
- 橐
- 蚡
- 朊
- 俣
- 垡
- <sos/eos>
init: null
input_size: null
ctc_conf:
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
use_preprocessor_valid: false
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_utt_prefix: null
rir_apply_prob: 1.0
noise_scp: null
noise_utt_prefix: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.2a1
distributed: true
```
</details>
## LM config
<details><summary>expand</summary>
```
NONE
```
</details>
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["wenetspeech"]}
|
espnet/pengcheng_guo_wenetspeech_asr_train_asr_raw_zh_char
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:wenetspeech",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#espnet #audio #automatic-speech-recognition #zh #dataset-wenetspeech #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/pengcheng\_guo\_wenetspeech\_asr\_train\_asr\_raw\_zh\_char'
This model was trained by Pengcheng Guo using wenetspeech recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Wed Oct 6 15:11:20 CST 2021'
* python version: '3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.2a1'
* pytorch version: 'pytorch 1.9.0'
* Git hash: ''
+ Commit date: ''
asr\_train\_asr\_conformer\_raw\_zh\_char
-----------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
LM config
---------
expand
|
[
"### 'espnet/pengcheng\\_guo\\_wenetspeech\\_asr\\_train\\_asr\\_raw\\_zh\\_char'\n\n\nThis model was trained by Pengcheng Guo using wenetspeech recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Wed Oct 6 15:11:20 CST 2021'\n* python version: '3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.2a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: ''\n\t+ Commit date: ''\n\n\nasr\\_train\\_asr\\_conformer\\_raw\\_zh\\_char\n-----------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand\n\nLM config\n---------\n\n\nexpand"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #zh #dataset-wenetspeech #license-cc-by-4.0 #region-us \n",
"### 'espnet/pengcheng\\_guo\\_wenetspeech\\_asr\\_train\\_asr\\_raw\\_zh\\_char'\n\n\nThis model was trained by Pengcheng Guo using wenetspeech recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Wed Oct 6 15:11:20 CST 2021'\n* python version: '3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.2a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: ''\n\t+ Commit date: ''\n\n\nasr\\_train\\_asr\\_conformer\\_raw\\_zh\\_char\n-----------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand\n\nLM config\n---------\n\n\nexpand"
] |
null |
espnet
|
## ESPnet2 ASR model
### `espnet/roshansh_how2_asr_raw_ft_sum_valid.acc`
This model was trained by roshansh-cmu using how2 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout e6f42a9783a5d9eba0687c19417f933e890722d7
pip install -e .
cd egs2/how2/sum1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/roshansh_how2_asr_raw_ft_sum_valid.acc
```
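Once the recipe above has downloaded and unpacked the checkpoint, a minimal Python sketch for generating a summary from a single recording could look like the following. It assumes the summarization model is served through the generic ESPnet2 `Speech2Text` inference interface and that `audio.wav` is a placeholder 16 kHz How2 recording; neither assumption is stated in the original card.
```python
# Hedged inference sketch: assumes the summarization checkpoint exposes the
# standard ESPnet2 Speech2Text interface; "audio.wav" is a placeholder input.
import soundfile

from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/roshansh_how2_asr_raw_ft_sum_valid.acc"
)

speech, rate = soundfile.read("audio.wav")  # 16 kHz mono recording (placeholder)
nbests = speech2text(speech)

summary, *_ = nbests[0]  # best hypothesis: the generated summary text
print(summary)
```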
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Feb 7 15:24:21 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `04561cdf3b6c3bc1d51edb04c93b953759ef551d`
- Commit date: `Mon Feb 7 09:06:12 2022 -0500`
## asr_raw_ft_sum
|dataset|Snt|Wrd|ROUGE-1|ROUGE-2|ROUGE-L|METEOR|BERTScore|
|---|---|---|---|---|---|---|---|
|decode_sum_asr_model_valid.acc.best/dev5_test_sum|2127|69795|60.72|44.7|56.1|29.36|91.53|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer_vid_lf.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_raw_ft_sum
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45875
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 10
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 5000
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- exp/asr_raw_utt_conformer/valid.acc.ave_10best.pth:::ctc
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 60000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_vid_sum/train/speech_shape
- exp/asr_stats_raw_vid_sum/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_vid_sum/valid/speech_shape
- exp/asr_stats_raw_vid_sum/valid/text_shape.bpe
batch_type: length
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_2000h_sum_trim/wav.scp
- speech
- sound
- - dump/raw/tr_2000h_sum_trim/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/cv05_sum_trim/wav.scp
- speech
- sound
- - dump/raw/cv05_sum_trim/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 1
token_list:
- <blank>
- <unk>
- '[hes]'
- S
- ▁THE
- ▁TO
- ''''
- ▁AND
- ▁YOU
- ▁A
- ▁IT
- T
- ▁THAT
- ▁OF
- ▁I
- ▁IS
- RE
- ▁IN
- ING
- ▁WE
- M
- ▁GOING
- ▁SO
- ▁THIS
- ▁YOUR
- ▁ON
- E
- D
- ▁BE
- ▁CAN
- N
- Y
- O
- ER
- ▁HAVE
- ▁JUST
- ▁FOR
- ▁WITH
- ▁DO
- ED
- ▁ARE
- ▁WANT
- ▁UP
- R
- LL
- P
- ▁
- L
- B
- ▁IF
- C
- ▁ONE
- ▁S
- ▁OR
- A
- ▁GO
- ▁LIKE
- ▁NOW
- ▁HERE
- VE
- LE
- U
- ▁GET
- ▁WHAT
- ▁OUT
- IN
- W
- ▁C
- ▁LITTLE
- ▁THERE
- LY
- ▁AS
- ▁MAKE
- I
- ▁THEY
- ▁MY
- K
- ▁THEN
- ▁BUT
- AL
- G
- ▁ALL
- OR
- ▁BACK
- ▁NOT
- ▁ABOUT
- ▁RIGHT
- ▁OUR
- EN
- ▁SOME
- ▁DOWN
- F
- ▁WHEN
- CH
- ▁F
- ▁HOW
- AR
- ▁WILL
- ▁RE
- CK
- ▁G
- ES
- CE
- ▁TAKE
- ▁AT
- ▁FROM
- ▁WAY
- TER
- ▁SEE
- RA
- ▁USE
- ▁REALLY
- RI
- TH
- ▁TWO
- ▁ME
- ▁VERY
- ▁E
- ▁B
- AT
- ▁THEM
- ▁DON
- ▁AN
- ▁BECAUSE
- ▁MORE
- RO
- H
- 'ON'
- LI
- ▁PUT
- ▁ST
- IL
- ▁BIT
- ▁START
- ▁NEED
- ▁INTO
- UR
- ▁TIME
- ▁OVER
- ▁W
- ▁DE
- ▁LOOK
- ▁THESE
- ▁LET
- ▁GOOD
- ▁ALSO
- AN
- ▁OFF
- ▁HE
- ▁KIND
- ▁SIDE
- ▁CO
- ▁SURE
- ▁AGAIN
- ▁MA
- ▁KNOW
- IT
- ▁WOULD
- IC
- ▁OTHER
- LA
- ▁P
- ▁WHICH
- '-'
- IR
- ▁LA
- ▁HAND
- EL
- ▁LOT
- ▁WHERE
- ▁THREE
- ▁PA
- ION
- LO
- ▁KEEP
- ▁SHOW
- ▁THING
- ▁FIRST
- TE
- ENT
- ATE
- ▁COME
- AD
- ▁GOT
- NG
- ▁NICE
- ▁T
- ET
- ▁MO
- ▁ANY
- ▁ACTUALLY
- ▁DIFFERENT
- ▁SE
- GE
- ▁WORK
- ▁THROUGH
- ▁O
- KE
- V
- ▁AROUND
- ▁BA
- PE
- ▁HI
- ▁BY
- SH
- ATION
- ▁SU
- ▁CA
- ▁D
- ▁LO
- ▁HAS
- ▁LI
- ▁PLAY
- Z
- ▁ADD
- ▁RO
- ▁TA
- AS
- ▁FOUR
- ▁CON
- ▁THOSE
- MP
- NE
- ▁SP
- UT
- ▁GIVE
- ▁WELL
- ▁BALL
- TING
- RY
- X
- ▁HO
- INE
- IVE
- ▁NEXT
- ▁PO
- ▁STEP
- ▁EVEN
- TION
- ▁MI
- MENT
- ▁CUT
- ▁BO
- ▁LINE
- ▁MUCH
- ▁THINGS
- ▁TALK
- UN
- ▁PART
- ▁WAS
- ▁FA
- ▁SOMETHING
- PP
- ANCE
- ND
- DI
- ▁RA
- AGE
- ▁SAME
- ▁EXPERT
- ▁DOING
- ▁LEFT
- IST
- ▁DI
- ▁NO
- RU
- ME
- TA
- UL
- TI
- ▁VILLAGE
- DE
- ERS
- ▁PEOPLE
- ▁TURN
- VER
- ▁FL
- ▁LEG
- ▁ONCE
- ▁COLOR
- ▁PULL
- ▁USING
- VI
- ▁WATER
- ▁SHE
- ▁TOP
- ▁OKAY
- ▁ANOTHER
- ▁THEIR
- ▁SAY
- URE
- ▁HA
- ▁IMPORTANT
- ▁PIECE
- ▁FOOT
- ▁TRA
- ▁SC
- ▁BODY
- ▁SET
- ▁POINT
- ▁HELP
- ▁TODAY
- ▁BRING
- ▁V
- ▁END
- MA
- ▁CH
- ▁MOST
- ▁K
- ▁AHEAD
- ▁HER
- OL
- ▁SA
- AM
- IES
- ▁THINK
- ▁NAME
- ▁TRY
- ▁MOVE
- ONE
- ▁LE
- ▁TOO
- TO
- UM
- ▁PLACE
- ▁COULD
- ▁FIND
- ▁FIVE
- ▁ALWAYS
- ID
- TY
- NT
- ▁FEEL
- ▁HEAD
- ▁THAN
- NA
- ▁EX
- ▁EYE
- ITY
- CI
- OP
- ▁SHOULD
- ▁MIGHT
- ▁HOLD
- ▁CAR
- AND
- ▁GREAT
- ▁RI
- ▁BU
- ▁HIGH
- ▁OPEN
- ▁BEFORE
- US
- ▁FRONT
- ▁LONG
- ▁TOGETHER
- NI
- ▁HAIR
- ▁LIGHT
- ▁TEN
- ▁HIT
- EST
- OUS
- ▁PRETTY
- ▁TYPE
- IP
- CO
- ▁FINGER
- ▁JO
- ▁UN
- ▁PRO
- ▁STRAIGHT
- ▁BEHALF
- ▁TI
- ▁SIX
- ▁CLEAN
- ▁DIS
- ▁DA
- ▁POSITION
- IGHT
- ACT
- ▁CHA
- ▁PE
- GG
- AP
- ▁MEAN
- ▁COMP
- FI
- ▁KNEE
- ▁CALLED
- ▁HANDS
- ▁PRE
- ▁FORWARD
- ▁AREA
- ANT
- ▁TE
- ▁WA
- ▁AFTER
- ▁SMALL
- ▁THROW
- ▁EVERY
- ▁SHOULDER
- NC
- PER
- ▁MAYBE
- ▁ABLE
- ▁BASICALLY
- ▁AM
- ▁READY
- ▁BOTTOM
- IE
- ▁HALF
- FF
- ▁BIG
- ▁EACH
- ▁PUSH
- ▁EIGHT
- ▁NEW
- ▁DONE
- ▁MAY
- ▁GETTING
- HO
- ▁HIS
- ▁HARD
- ▁CLOSE
- ALLY
- ▁SECOND
- ▁FEET
- ICAL
- ▁JA
- ▁PAINT
- ▁LEARN
- ▁SOUND
- HE
- ▁ROLL
- ▁ONLY
- ▁DOESN
- WA
- ▁DRAW
- ▁VI
- ▁DID
- ▁SHA
- ▁CENTER
- CU
- ▁CLIP
- ▁PI
- ▁CARD
- ▁INSIDE
- ▁PERSON
- ▁STILL
- ▁MAKING
- 'NO'
- ▁EVERYTHING
- .
- ▁FUN
- ARD
- ▁REMEMBER
- ▁AWAY
- ATED
- COM
- ▁SEVEN
- ▁BEEN
- ▁MANY
- ABLE
- ▁DAY
- ▁SIT
- IZE
- ▁REAL
- ▁HIP
- ▁BASIC
- ▁KICK
- ▁TU
- ATING
- ▁STICK
- ▁FLAT
- ▁WHO
- END
- HA
- ▁EXP
- ▁PICK
- ▁MIX
- ▁TRI
- ▁BI
- ▁WHOLE
- ▁STRETCH
- ▁BOTH
- ▁PROBABLY
- CA
- ▁HIM
- ▁STRING
- ▁EDGE
- ▁BASE
- ▁COMING
- UGH
- ▁LIFT
- ▁STA
- ▁WORKING
- ▁MU
- ▁QUICK
- ▁SOMETIMES
- ▁HAPPEN
- ▁YOURSELF
- ▁TALKING
- ▁DR
- ▁TELL
- ▁ANYTHING
- ▁BRA
- ▁LOOKING
- ▁SLOW
- ▁NE
- ▁STAND
- NER
- ▁COMES
- ▁GOES
- ISE
- BE
- ▁USED
- ▁UNDER
- ▁BETWEEN
- ▁HU
- ▁CREATE
- ▁NA
- ▁USUALLY
- ▁ARM
- ▁DRY
- ▁RUN
- LING
- ▁BRUSH
- ▁COVER
- ▁HEAR
- ▁DOES
- ▁STAY
- ▁EN
- ▁FOLD
- ▁CHANGE
- ▁LAST
- ▁EASY
- ▁US
- ▁PER
- ▁FACE
- ▁EAR
- ▁TIGHT
- ▁FE
- ▁PIN
- ▁MAN
- ▁BETTER
- ▁CALL
- ▁PRI
- ▁BEST
- ▁KI
- ▁COUPLE
- ▁WHILE
- ▁SHAPE
- ▁GAME
- IV
- ▁SHOT
- ▁PAPER
- ▁OWN
- ▁ALRIGHT
- ▁HAD
- TIC
- ▁BREATH
- ▁TOOL
- '2'
- ▁ENOUGH
- ▁COURSE
- ▁SKIN
- ▁SPIN
- ▁VA
- ▁ARMS
- ▁TEA
- ▁BREAK
- ▁DOG
- ▁1
- QUE
- ▁DROP
- ▁NUMBER
- IG
- ▁RED
- ▁NOTE
- ▁WEIGHT
- WARD
- ▁PLAYING
- ▁FINISH
- ▁MINUTE
- ▁R
- ▁PRESS
- ▁EITHER
- ▁CHE
- ▁PU
- BER
- ▁FEW
- ▁SIZE
- ▁MADE
- ▁LEAVE
- ▁GA
- ▁ALREADY
- ▁GUY
- ▁FAR
- ▁HOME
- ▁BAR
- UP
- ▁GRAB
- ▁MARK
- ▁WHITE
- ▁PROPER
- ▁CAUSE
- ▁OK
- ▁ART
- HI
- ▁SORT
- ▁EXERCISE
- ▁LOWER
- PORT
- ▁PLANT
- ▁BOARD
- ▁CASE
- ▁YEAR
- CENT
- ▁DU
- ▁CHECK
- ▁WHATEVER
- ▁OIL
- ▁IDEA
- ▁SIMPLE
- ▁PRACTICE
- ▁FAST
- '0'
- ▁CONTROL
- ▁J
- ▁KEY
- ▁MIDDLE
- ▁FULL
- ▁GLASS
- ▁OUTSIDE
- ▁LOW
- ▁REST
- ▁STUFF
- ▁ACT
- ▁UNTIL
- ▁BLACK
- ▁POP
- ▁CLICK
- ▁HOLE
- ▁Z
- ▁COUNT
- ▁POT
- ▁ALLOW
- ▁HAVING
- ▁TRYING
- ▁MUSCLE
- ▁GU
- ▁BOX
- ▁NOTICE
- ▁EXAMPLE
- UND
- ▁ALONG
- FUL
- ISH
- ▁STORE
- ▁LU
- ▁FLOOR
- ▁MOVING
- ▁LARGE
- ▁STOP
- ▁PH
- ▁WALK
- '5'
- ▁QU
- ▁TECHNIQUE
- ▁SOFT
- ▁GROUND
- ▁JUMP
- ▁JU
- ▁FILL
- ▁WHY
- ▁BUY
- ▁GREEN
- ▁WALL
- ▁HEEL
- NESS
- ▁LEVEL
- ▁UNDERNEATH
- ▁PATTERN
- ▁BEHIND
- ▁OLD
- ▁TIP
- ▁COMPLETE
- ▁WON
- ▁TEACH
- ▁FIT
- ▁NECK
- ▁REMOVE
- ▁TRICK
- ▁MOVEMENT
- ▁TOWARDS
- ▁PARTICULAR
- ▁CHI
- ▁EFFECT
- J
- ▁FREE
- ▁ACROSS
- ▁BEND
- ▁SAFE
- ▁SLIDE
- ▁PROBLEM
- ▁BLOCK
- ▁PAN
- ▁NATURAL
- ▁TOUCH
- ▁CHILD
- LINE
- ▁CROSS
- ▁REASON
- '4'
- ▁POWER
- ▁APPLY
- ▁FOLLOW
- ▁DESIGN
- ▁SPACE
- ▁ORDER
- ▁WOOD
- ▁RID
- '3'
- ▁COOK
- ▁BEGIN
- ▁WATCH
- ▁STYLE
- QUA
- ▁PRODUCT
- ▁TAKING
- ▁PUTTING
- ▁EXHALE
- ▁THOUGH
- ▁DEEP
- IAN
- ▁REACH
- ▁FOOD
- ▁ALMOST
- ▁COOL
- ▁SECTION
- ▁SAID
- ▁ANGLE
- ▁MUSIC
- ▁RELAX
- ▁CORNER
- ▁DARK
- ▁CHORD
- ▁ESPECIALLY
- ▁SCALE
- ▁WARM
- ▁WITHOUT
- ▁WHEEL
- ▁SEGMENT
- ▁TABLE
- ▁BOOK
- ▁PASS
- ▁ELBOW
- ▁ROUND
- ▁INHALE
- ▁SMOOTH
- ▁ROOM
- /
- ▁NINE
- ▁SHORT
- ▁MEASURE
- ▁LESS
- ▁TWIST
- ▁BALANCE
- ▁PROCESS
- ▁SWITCH
- ▁GENERAL
- ▁CLAY
- ▁CERTAIN
- ▁NEVER
- ▁BLUE
- ▁CUP
- ▁HOUSE
- ▁EXTRA
- ▁MOTION
- ▁PRESSURE
- ▁FIRE
- ▁SIMPLY
- ▁DOUBLE
- ▁TWENTY
- ▁CATCH
- ▁BECOME
- ▁BUILD
- ▁SPEED
- ▁TRANS
- ▁DRUM
- ▁CHEST
- ▁PICTURE
- ▁LENGTH
- ▁CONTINUE
- ▁COMFORTABLE
- ▁FISH
- ▁PHOTO
- ▁LOOSE
- ▁SKI
- ▁LIFE
- ▁DEGREE
- ▁OPTION
- ▁WORD
- ▁SHARP
- ▁SHOOT
- ▁FOUND
- ▁STRONG
- ▁QUITE
- ▁THIRD
- ▁GLUE
- ▁MIND
- ▁DEFINITELY
- ▁EASIER
- GRAPH
- ▁HOOK
- ▁CLEAR
- ▁POSE
- ▁BUTTON
- ▁CHOOSE
- ▁THICK
- ▁SYSTEM
- ▁PERFECT
- ▁BEAUTIFUL
- ▁SPOT
- ▁GROW
- ▁SIGN
- ▁ELSE
- ▁CONNECT
- ▁SELECT
- ▁PUNCH
- ▁DIRECTION
- ▁WRAP
- ▁RELEASE
- QUI
- SIDE
- ▁CAREFUL
- ▁VIDEO
- ▁INSTEAD
- ▁CIRCLE
- ▁WIRE
- ▁NOSE
- ▁AMOUNT
- ▁FOCUS
- ▁NORMAL
- ▁MAJOR
- ▁WHETHER
- ▁SURFACE
- ▁THUMB
- ▁DRIVE
- ▁SCREW
- ▁POSSIBLE
- ▁OBVIOUSLY
- ▁COMMON
- ▁REGULAR
- ▁ADJUST
- ▁WIDE
- ▁BLADE
- ▁FRET
- ▁RECOMMEND
- ▁BOWL
- BOARD
- ▁IMAGE
- ▁DEPENDING
- ▁PROTECT
- ▁CLOTH
- ▁HEALTH
- ▁WRIST
- ▁CLUB
- ▁DRINK
- ▁SINCE
- ▁FRIEND
- '00'
- ▁RUNNING
- ▁ITSELF
- ▁RECORD
- ▁SWING
- ▁DIRECT
- ▁MATERIAL
- ▁YO
- ▁LEAST
- ▁EXACTLY
- ▁BEGINNING
- ▁SLIGHTLY
- ▁TREAT
- ▁CAMERA
- ▁QUARTER
- ▁WINDOW
- '8'
- ▁SOMEBODY
- ▁BURN
- ▁DEMONSTRATE
- ▁DIFFERENCE
- ▁COMPUTER
- IBLE
- ▁SHOE
- ▁PERFORM
- ▁SQUARE
- ▁CONSIDER
- ▁DRILL
- ▁TEXT
- ▁FILE
- ▁RUB
- ▁FABRIC
- ▁HUNDRED
- ▁GRIP
- ▁CHARACTER
- ▁SPECIFIC
- ▁KNOT
- ▁CURL
- ▁STITCH
- ▁BLEND
- ▁FRAME
- ▁THIRTY
- '1'
- ▁HORSE
- ▁ATTACH
- ▁GROUP
- ▁STROKE
- ▁GUITAR
- ▁APART
- ▁MACHINE
- ▁CLASS
- ▁COMB
- ▁ROOT
- ▁HELLO
- ▁ENERGY
- ▁ATTACK
- ▁CORRECT
- ▁EXTEND
- ▁MINOR
- ▁PROFESSIONAL
- ▁MONEY
- ▁STRIP
- ▁FLAVOR
- ▁EVERYBODY
- ▁RULE
- ▁DIFFICULT
- ▁PROJECT
- ▁DISCUSS
- ▁FIGURE
- ▁HOWEVER
- ▁FINAL
- ▁STRENGTH
- ▁ENTIRE
- ▁FIELD
- ▁CONTACT
- ▁SUPPORT
- ▁PALM
- ▁SERIES
- ▁ENJOY
- '6'
- ▁WORLD
- ▁DECIDE
- ▁SPEAK
- ▁SEVERAL
- ▁WRITE
- ▁PROGRAM
- ABILITY
- ▁KNIFE
- ▁PLASTIC
- ▁ORGAN
- '7'
- ▁UNDERSTAND
- ▁FIFTEEN
- ▁FLEX
- ▁INFORMATION
- ▁TWELVE
- ▁DETAIL
- ▁STRIKE
- ▁ACTUAL
- ▁SPRAY
- ▁LOCAL
- ▁MOUTH
- ▁NIGHT
- ▁VEHICLE
- ▁OPPOSITE
- ▁SCHOOL
- '9'
- ▁QUESTION
- ▁SPECIAL
- ▁BIGGER
- ▁DEVELOP
- ▁PEPPER
- ▁PREFER
- Q
- '%'
- ']'
- '['
- '&'
- ','
- _
- '#'
- '='
- '@'
- +
- '*'
- $
- '~'
- <sos/eos>
init: null
input_size: null
ctc_conf:
ignore_nan_grad: true
model_conf:
ctc_weight: 0.0
lsm_weight: 0.15
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: data/nlsyms
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 256
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_vid_sum/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: abs_pos
selfattention_layer_type: lf_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
attention_windows:
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
attention_dilation:
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
attention_mode: tvm
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 512
num_blocks: 6
dropout_rate: 0.15
positional_dropout_rate: 0.15
self_attention_dropout_rate: 0.15
src_attention_dropout_rate: 0.15
required:
- output_dir
- token_list
version: 0.10.0
distributed: true
```
</details>
Please cite the following paper if you use this recipe:
```BibTex
@misc{sharma2022speech,
title={Speech Summarization using Restricted Self-Attention},
author={Roshan Sharma and Shruti Palaskar and Alan W Black and Florian Metze},
year={2022},
eprint={2110.06263},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-summarization"], "datasets": ["how2"]}
|
espnet/roshansh_how2_asr_raw_ft_sum_valid.acc
| null |
[
"espnet",
"audio",
"automatic-speech-summarization",
"en",
"dataset:how2",
"arxiv:2110.06263",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2110.06263",
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-summarization #en #dataset-how2 #arxiv-2110.06263 #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/roshansh\_how2\_asr\_raw\_ft\_sum\_valid.acc'
This model was trained by roshansh-cmu using how2 recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Mon Feb 7 15:24:21 EST 2022'
* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.6a1'
* pytorch version: 'pytorch 1.10.1'
* Git hash: '04561cdf3b6c3bc1d51edb04c93b953759ef551d'
+ Commit date: 'Mon Feb 7 09:06:12 2022 -0500'
asr\_raw\_ft\_sum
-----------------
ASR config
----------
expand
Please cite the following paper if you use this recipe:
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/roshansh\\_how2\\_asr\\_raw\\_ft\\_sum\\_valid.acc'\n\n\nThis model was trained by roshansh-cmu using how2 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Mon Feb 7 15:24:21 EST 2022'\n* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: '04561cdf3b6c3bc1d51edb04c93b953759ef551d'\n\t+ Commit date: 'Mon Feb 7 09:06:12 2022 -0500'\n\n\nasr\\_raw\\_ft\\_sum\n-----------------\n\n\n\nASR config\n----------\n\n\nexpand\n\nPlease cite the following paper if you use this recipe:",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-summarization #en #dataset-how2 #arxiv-2110.06263 #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/roshansh\\_how2\\_asr\\_raw\\_ft\\_sum\\_valid.acc'\n\n\nThis model was trained by roshansh-cmu using how2 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Mon Feb 7 15:24:21 EST 2022'\n* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.10.1'\n* Git hash: '04561cdf3b6c3bc1d51edb04c93b953759ef551d'\n\t+ Commit date: 'Mon Feb 7 09:06:12 2022 -0500'\n\n\nasr\\_raw\\_ft\\_sum\n-----------------\n\n\n\nASR config\n----------\n\n\nexpand\n\nPlease cite the following paper if you use this recipe:",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
# ESPnet2 ASR pretrained model
## `Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best, fs=16k, lang=en`
♻️ Imported from <https://zenodo.org/record/3966501#.YOAOUZozZH5>
This model was trained by Shinji Watanabe using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Python API
```text
See https://github.com/espnet/espnet_model_zoo
```
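The model zoo README linked above documents the full API; as a hedged sketch of the usual download-and-decode pattern, running this checkpoint could look like the code below. The utterance path is a placeholder, and the registered model name should be checked against the zoo's table before use.
```python
# Sketch of the espnet_model_zoo download-and-decode pattern; the utterance
# path is a placeholder and the model name should match the zoo's table.
import soundfile

from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack(
        "Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best"
    )
)

speech, rate = soundfile.read("librispeech_utterance.wav")  # 16 kHz mono
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```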
### Evaluate in the recipe
```python
# coming soon
```
### Results
```bash
# RESULTS
## Environments
- date: `Tue Jul 21 07:58:39 EDT 2020`
- python version: `3.7.3 (default, Mar 27 2019, 22:11:17) [GCC 7.3.0]`
- espnet version: `espnet 0.8.0`
- pytorch version: `pytorch 1.4.0`
- Git hash: `75db853dd26a40d3d4dd979b2ff2457fbbb0cd69`
- Commit date: `Mon Jul 20 10:49:12 2020 -0400`
## asr_train_asr_transformer_e18_raw_bpe_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_dev_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|54402|97.9|1.8|0.2|0.2|2.3|28.2|
|decode_dev_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|54402|97.9|1.9|0.2|0.3|2.4|29.5|
|decode_dev_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|50948|94.6|4.7|0.7|0.7|6.0|46.6|
|decode_dev_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|50948|94.4|5.0|0.5|0.8|6.3|47.5|
|decode_test_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|52576|97.7|2.0|0.3|0.3|2.6|30.4|
|decode_test_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|52576|97.7|2.0|0.2|0.3|2.6|30.1|
|decode_test_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|52343|94.5|4.8|0.7|0.7|6.2|49.7|
|decode_test_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|52343|94.3|5.1|0.6|0.8|6.5|50.3|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_dev_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|288456|99.3|0.3|0.3|0.2|0.9|28.2|
|decode_dev_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|288456|99.3|0.4|0.3|0.2|0.9|29.5|
|decode_dev_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|265951|97.7|1.2|1.1|0.6|2.9|46.6|
|decode_dev_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|265951|97.7|1.3|1.0|0.8|3.0|47.5|
|decode_test_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|281530|99.3|0.3|0.4|0.3|1.0|30.4|
|decode_test_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|281530|99.4|0.3|0.3|0.3|0.9|30.1|
|decode_test_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|272758|97.8|1.1|1.1|0.7|2.9|49.7|
|decode_test_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|272758|97.9|1.2|0.9|0.8|2.9|50.3|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_dev_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|69307|97.2|1.8|1.0|0.4|3.2|28.2|
|decode_dev_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2703|69307|97.2|1.9|1.0|0.5|3.3|29.5|
|decode_dev_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|64239|93.3|4.4|2.2|1.2|7.9|46.6|
|decode_dev_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2864|64239|93.2|4.9|1.9|1.5|8.3|47.5|
|decode_test_clean_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|66712|97.0|1.9|1.1|0.4|3.3|30.4|
|decode_test_clean_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2620|66712|97.1|1.9|1.0|0.5|3.3|30.1|
|decode_test_other_decode_asr_beam_size20_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|66329|93.1|4.5|2.4|1.0|7.9|49.7|
|decode_test_other_decode_asr_beam_size5_lm_train_lm_adam_bpe_valid.loss.best_asr_model_valid.acc.best|2939|66329|93.1|4.8|2.1|1.4|8.3|50.3|
```
### Training config
See full config in [`config.yaml`](./exp/asr_train_asr_transformer_e18_raw_bpe_sp/config.yaml)
```yaml
config: conf/tuning/train_asr_transformer_e18.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_e18_raw_bpe_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 3
local_rank: 3
dist_master_addr: localhost
dist_master_port: 33643
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"], "inference": false}
|
espnet/shinji-watanabe-librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #license-cc-by-4.0 #region-us
|
# ESPnet2 ASR pretrained model
## 'Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL, fs=16k, lang=en'
️ Imported from <URL
This model was trained by Shinji Watanabe using librispeech recipe in espnet.
### Python API
### Evaluate in the recipe
### Results
### Training config
See full config in 'URL'
|
[
"# ESPnet2 ASR pretrained model",
"## 'Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL, fs=16k, lang=en'\n\n️ Imported from <URL\n\nThis model was trained by Shinji Watanabe using librispeech recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #license-cc-by-4.0 #region-us \n",
"# ESPnet2 ASR pretrained model",
"## 'Shinji Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL, fs=16k, lang=en'\n\n️ Imported from <URL\n\nThis model was trained by Shinji Watanabe using librispeech recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 SLU pretrained model
### `siddhana/fsc_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best`
♻️ Imported from https://zenodo.org/record/5590204
This model was trained by siddhana using fsc/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
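Until the official demo is published, a hedged sketch of driving this SLU checkpoint through the generic ESPnet2 ASR inference API is shown below. The intent-parsing step assumes the FSC recipe's convention of emitting the intent label at the start of the hypothesis; that output layout is an assumption of this example, not a documented API, and the audio path is a placeholder.
```python
# Hedged sketch: drives the SLU checkpoint through the generic ESPnet2 ASR
# inference API. The intent-parsing step assumes the FSC recipe's convention
# of placing the intent label at the start of the hypothesis; verify against
# the recipe output format before relying on it.
import soundfile

from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/siddhana_fsc_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best"
)

speech, rate = soundfile.read("utterance.wav")  # placeholder 16 kHz recording
hypothesis, *_ = speech2text(speech)[0]

intent, transcript = hypothesis.split(maxsplit=1)  # assumed "<intent> <words>" layout
print(intent)
print(transcript)
```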
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["fsc"]}
|
espnet/siddhana_fsc_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:fsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-fsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 SLU pretrained model
### 'siddhana/fsc_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best'
️ Imported from URL
This model was trained by siddhana using fsc/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 SLU pretrained model",
"### 'siddhana/fsc_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best'\n️ Imported from URL\n\nThis model was trained by siddhana using fsc/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-fsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 SLU pretrained model",
"### 'siddhana/fsc_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best'\n️ Imported from URL\n\nThis model was trained by siddhana using fsc/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `siddhana/fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best`
♻️ Imported from https://zenodo.org/record/5656007
This model was trained by siddhana using fsc_challenge/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["fsc_challenge"]}
|
espnet/siddhana_fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug_r-truncated-36174d
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:fsc_challenge",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-fsc_challenge #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR pretrained model
### 'siddhana/fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best'
️ Imported from URL
This model was trained by siddhana using fsc_challenge/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR pretrained model",
"### 'siddhana/fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best'\n️ Imported from URL\n\nThis model was trained by siddhana using fsc_challenge/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-fsc_challenge #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR pretrained model",
"### 'siddhana/fsc_challenge_asr_train_asr_hubert_transformer_adam_specaug_raw_en_word_valid.acc.ave_5best'\n️ Imported from URL\n\nThis model was trained by siddhana using fsc_challenge/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `siddhana/fsc_unseen_asr_train_asr_hubert_transformer_adam_specaug_finetune_raw_en_word_valid.acc.ave_5best`
♻️ Imported from https://zenodo.org/record/5655832
This model was trained by siddhana using fsc_unseen/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["fsc_unseen"]}
|
espnet/siddhana_fsc_unseen_asr_train_asr_hubert_transformer_adam_specaug_fine-truncated-ef9dab
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:fsc_unseen",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-fsc_unseen #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR pretrained model
### 'siddhana/fsc_unseen_asr_train_asr_hubert_transformer_adam_specaug_finetune_raw_en_word_valid.acc.ave_5best'
️ Imported from URL
This model was trained by siddhana using fsc_unseen/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR pretrained model",
"### 'siddhana/fsc_unseen_asr_train_asr_hubert_transformer_adam_specaug_finetune_raw_en_word_valid.acc.ave_5best'\n️ Imported from URL\n\nThis model was trained by siddhana using fsc_unseen/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-fsc_unseen #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR pretrained model",
"### 'siddhana/fsc_unseen_asr_train_asr_hubert_transformer_adam_specaug_finetune_raw_en_word_valid.acc.ave_5best'\n️ Imported from URL\n\nThis model was trained by siddhana using fsc_unseen/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best`
This model was trained by Siddhant using slue-voxceleb recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 17758ad804fd7c4b6f88ef5601f475a241dc4605
pip install -e .
cd egs2/slue-voxceleb/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Dec 28 12:28:28 EST 2021`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a2`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `6bf3c2a4f138d35331634d2e879bbc5c32a5266e`
- Commit date: `Mon Dec 22 15:41:32 EST 2021`
## Using Conformer based encoder and Transformer based decoder with spectral augmentation and predicting transcript along with intent
- ASR config: [conf/tuning/train_asr_conformer.yaml](conf/tuning/train_asr_conformer.yaml)
- token_type: word
|dataset|Snt|Intent Classification Accuracy (%)|Intent Classification Macro F1 (%)|
|---|---|---|---|
|inference_asr_model_valid.acc.ave_10best/devel|955|80.2|29.7|
### Detailed Classification Report
|dataset|Label|Snt|Prec|Recall|F1|
|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_10best/devel|Neutral|784|85|93|89|
|inference_asr_model_valid.acc.ave_10best/devel|Positive|167|40|24|30|
|inference_asr_model_valid.acc.ave_10best/devel|Negative|3|0|0|0|
|inference_asr_model_valid.acc.ave_10best/devel|Mixed|1|0|0|0|
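The 80.2% accuracy and 29.7% macro F1 reported above diverge because macro F1 averages the four labels without weighting, and the per-label report shows the rare Negative and Mixed classes (3 and 1 sentences) both scoring zero. As a hedged illustration of how such figures can be recomputed from decoded hypotheses, the sketch below uses scikit-learn; that dependency and the label lists are placeholders for this example rather than part of the recipe.
```python
# Hedged sketch: reproducing accuracy and macro-F1 over the four sentiment
# labels from already-decoded hypotheses. The lists below are placeholders;
# in practice they would come from the reference text and the model output.
from sklearn.metrics import accuracy_score, f1_score

references = ["Neutral", "Neutral", "Positive", "Negative", "Neutral"]
predictions = ["Neutral", "Positive", "Positive", "Neutral", "Neutral"]

accuracy = accuracy_score(references, predictions)
macro_f1 = f1_score(
    references,
    predictions,
    labels=["Neutral", "Positive", "Negative", "Mixed"],
    average="macro",
    zero_division=0,
)
print(f"accuracy: {accuracy:.3f}, macro F1: {macro_f1:.3f}")
```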
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/devel/wav.scp
- speech
- sound
- - dump/raw/devel/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁i
- s
- ▁and
- ''''
- ▁the
- ▁a
- ▁to
- ▁it
- Neutral
- ▁you
- ▁that
- ▁of
- t
- ing
- ▁in
- ▁was
- ed
- ▁uh
- ▁know
- e
- m
- ▁he
- y
- er
- ▁so
- ▁we
- re
- a
- o
- d
- ▁um
- i
- ▁s
- c
- ▁like
- n
- ▁is
- ▁be
- ▁f
- ▁but
- ▁c
- Positive
- en
- l
- ve
- ▁just
- ▁m
- st
- ▁they
- le
- an
- ▁on
- ▁p
- u
- ▁my
- ar
- p
- ▁this
- ▁for
- ▁b
- ▁think
- in
- ▁with
- g
- or
- ▁h
- r
- ly
- w
- ▁me
- ▁d
- ▁e
- ▁have
- ▁she
- it
- ▁t
- ▁what
- b
- ▁st
- al
- es
- ▁there
- ▁really
- ic
- ▁g
- ▁as
- ▁w
- ▁l
- ▁do
- ll
- v
- ▁all
- at
- 'on'
- as
- ▁about
- h
- ▁not
- ▁re
- ▁o
- ▁at
- k
- ▁don
- ▁had
- ▁when
- ou
- ent
- is
- ra
- ▁who
- ri
- ▁go
- se
- f
- ▁out
- ▁get
- ▁an
- ▁people
- nd
- ▁kind
- ▁very
- ce
- ▁because
- ▁are
- ion
- ▁some
- et
- ▁can
- ge
- ▁or
- me
- ▁up
- ▁n
- ▁if
- ▁no
- ▁one
- ▁were
- ct
- ▁mean
- ad
- ▁time
- ▁ch
- ▁then
- ro
- ▁ex
- ▁mo
- ▁her
- ▁every
- ▁would
- ▁co
- ▁work
- ir
- ▁sh
- ay
- ▁se
- ol
- ver
- ▁su
- ▁got
- ▁k
- th
- ▁love
- ▁from
- ld
- ation
- ▁him
- ▁said
- ▁how
- ▁well
- ▁lot
- ▁show
- ch
- ard
- ie
- ▁pro
- ▁de
- ▁gonna
- ▁bo
- ▁say
- ▁see
- ▁li
- one
- ▁his
- ther
- ▁been
- ur
- ▁any
- ▁great
- ▁
- ▁yeah
- pe
- ▁which
- ▁come
- ▁them
- ot
- ▁play
- ab
- ite
- ▁way
- ally
- id
- gh
- ▁r
- ▁sc
- our
- x
- mp
- ers
- ong
- ate
- ▁your
- ss
- ast
- ▁did
- ▁sort
- ▁am
- am
- and
- ▁make
- ant
- ▁thing
- ▁ha
- ▁te
- ▁has
- ess
- ▁v
- ▁something
- ▁back
- ▁where
- ▁things
- red
- ▁al
- ut
- el
- ight
- ment
- un
- ive
- ▁th
- ▁le
- il
- ▁j
- op
- ▁more
- ▁ro
- ill
- ▁fi
- ies
- ▁much
- ck
- ▁ne
- ▁wh
- ▁always
- ▁act
- ine
- pp
- z
- ▁now
- ▁con
- thing
- ▁us
- body
- ▁want
- ▁other
- ort
- ice
- ▁doing
- ▁sa
- ▁feel
- ow
- ▁int
- ne
- ▁these
- ▁could
- ▁good
- ▁cause
- Negative
- ▁actually
- ▁wr
- ▁little
- ain
- ▁being
- ▁look
- ▁into
- ere
- ul
- ▁our
- ▁guy
- ▁first
- ud
- ▁by
- ▁fun
- ▁qu
- ▁didn
- us
- ity
- ▁jo
- od
- ▁u
- ▁part
- ▁off
- ▁pre
- ▁right
- ▁film
- ▁start
- ok
- ▁two
- ving
- ▁never
- pt
- um
- te
- ▁movie
- ▁going
- ff
- nder
- ke
- ▁ag
- ▁en
- ▁try
- ful
- im
- ays
- ▁life
- ▁different
- ach
- are
- ▁di
- ist
- ▁oh
- au
- ▁po
- nt
- ▁com
- all
- ▁lo
- om
- ▁real
- ▁y
- ame
- ▁went
- ry
- ber
- ▁even
- ci
- ▁ho
- ▁years
- ▁their
- ▁happen
- ure
- self
- per
- ▁pl
- ▁those
- ble
- 'no'
- ▁day
- ▁take
- ▁does
- ien
- ▁br
- be
- wn
- ▁thought
- ▁fe
- ght
- ▁tr
- ▁story
- ty
- ▁down
- ous
- ish
- ▁wom
- ▁wanna
- ▁put
- ▁through
- ide
- ▁ab
- ▁new
- ▁also
- ▁big
- ▁call
- ▁around
- ▁character
- ▁read
- iz
- ▁came
- act
- ily
- ath
- ag
- ree
- ▁per
- ▁will
- ▁mu
- ▁talk
- ▁over
- ▁friend
- atch
- ▁bl
- ade
- ▁world
- ▁many
- ▁sp
- sic
- ▁cl
- ▁bit
- ▁man
- ace
- ▁person
- ft
- ip
- ▁than
- ▁wanted
- ▁may
- ven
- ick
- ious
- ▁mar
- ▁before
- ▁rel
- j
- ting
- ▁set
- sh
- ep
- ▁un
- ue
- ▁aw
- ▁find
- ▁kid
- tain
- ▁such
- ter
- ▁end
- ▁tw
- ind
- aking
- ▁after
- ▁fam
- ars
- ig
- ore
- ▁bec
- ak
- art
- reat
- ust
- rou
- ack
- ▁ye
- ould
- ime
- itt
- ▁gu
- qu
- ose
- fe
- ▁wor
- lf
- alk
- ▁charact
- ▁mov
- out
- ich
- ▁happ
- ▁thou
- ith
- <mixed>
- rom
- ake
- ▁diff
- ▁char
- na
- round
- ory
- ink
- ually
- ▁gon
- ▁pe
- right
- ody
- ah
- rie
- riend
- now
- so
- ause
- ▁fil
- ▁pers
- fore
- very
- ▁differe
- rough
- q
- ▁fir
- anna
- ways
- ':'
- '&'
- fter
- <sos/eos>
transcript_token_list: null
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
postdecoder: null
postdecoder_conf: {}
required:
- output_dir
- token_list
version: 0.10.3a2
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["slue-voxceleb"]}
|
espnet/siddhana_slue_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:slue-voxceleb",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-slue-voxceleb #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/siddhana\_slue\_asr\_train\_asr\_conformer\_raw\_en\_word\_valid.acc.ave\_10best'
This model was trained by Siddhant using slue-voxceleb recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Tue Dec 28 12:28:28 EST 2021'
* python version: '3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.3a2'
* pytorch version: 'pytorch 1.8.1+cu102'
* Git hash: '6bf3c2a4f138d35331634d2e879bbc5c32a5266e'
+ Commit date: 'Mon Dec 22 15:41:32 EST 2021'
Using Conformer based encoder and Transformer based decoder with spectral augmentation and predicting transcript along with intent
----------------------------------------------------------------------------------------------------------------------------------
* ASR config: conf/train\_asr.yaml
* token\_type: word
### Detailed Classification Report
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/siddhana\\_slue\\_asr\\_train\\_asr\\_conformer\\_raw\\_en\\_word\\_valid.acc.ave\\_10best'\n\n\nThis model was trained by Siddhant using slue-voxceleb recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Dec 28 12:28:28 EST 2021'\n* python version: '3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.3a2'\n* pytorch version: 'pytorch 1.8.1+cu102'\n* Git hash: '6bf3c2a4f138d35331634d2e879bbc5c32a5266e'\n\t+ Commit date: 'Mon Dec 22 15:41:32 EST 2021'\n\n\nUsing Conformer based encoder and Transformer based decoder with spectral augmentation and predicting transcript along with intent\n----------------------------------------------------------------------------------------------------------------------------------\n\n\n* ASR config: conf/train\\_asr.yaml\n* token\\_type: word",
"### Detailed Classification Report\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-slue-voxceleb #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/siddhana\\_slue\\_asr\\_train\\_asr\\_conformer\\_raw\\_en\\_word\\_valid.acc.ave\\_10best'\n\n\nThis model was trained by Siddhant using slue-voxceleb recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Dec 28 12:28:28 EST 2021'\n* python version: '3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.3a2'\n* pytorch version: 'pytorch 1.8.1+cu102'\n* Git hash: '6bf3c2a4f138d35331634d2e879bbc5c32a5266e'\n\t+ Commit date: 'Mon Dec 22 15:41:32 EST 2021'\n\n\nUsing Conformer based encoder and Transformer based decoder with spectral augmentation and predicting transcript along with intent\n----------------------------------------------------------------------------------------------------------------------------------\n\n\n* ASR config: conf/train\\_asr.yaml\n* token\\_type: word",
"### Detailed Classification Report\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 SLU (Entity Classification) pretrained model
### `siddhana/slurp_entity_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best`
♻️ Imported from https://zenodo.org/record/5590204
This model was trained by siddhana using fsc/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["fsc"]}
|
espnet/siddhana_slurp_entity_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:fsc",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-fsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 SLU (Entity Classification) pretrained model
### 'siddhana/slurp_entity_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best'
️ Imported from URL
This model was trained by siddhana using fsc/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 SLU (Entity Classification) pretrained model",
"### 'siddhana/slurp_entity_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best'\n️ Imported from URL\n\nThis model was trained by siddhana using fsc/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-fsc #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 SLU (Entity Classification) pretrained model",
"### 'siddhana/slurp_entity_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best'\n️ Imported from URL\n\nThis model was trained by siddhana using fsc/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 SLU pretrained model
### `siddhana/slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best`
♻️ Imported from https://zenodo.org/record/5590384
This model was trained by siddhana using slurp/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["slurp"]}
|
espnet/siddhana_slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:slurp",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-slurp #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 SLU pretrained model
### 'siddhana/slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best'
️ Imported from URL
This model was trained by siddhana using slurp/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 SLU pretrained model",
"### 'siddhana/slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best'\n️ Imported from URL\n\nThis model was trained by siddhana using slurp/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-slurp #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 SLU pretrained model",
"### 'siddhana/slurp_new_asr_train_asr_conformer_raw_en_word_valid.acc.ave_10best'\n️ Imported from URL\n\nThis model was trained by siddhana using slurp/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b0ff60946ada6753af79423a2e6063984bec2926
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp
```
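For decoding outside the recipe, a minimal Python sketch is shown below; it assumes the checkpoint loads through ESPnet2's `Speech2Text` interface and that the optional `s3prl` dependency (needed for the self-supervised HuBERT frontend) is installed. The decoding options are illustrative, not taken from this card.
```python
# Minimal decoding sketch (assumptions noted in the lead-in above).
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp",
    ctc_weight=0.3,  # joint CTC/attention decoding
    beam_size=10,    # illustrative beam size
)

speech, rate = soundfile.read("sample_16k.wav")  # hypothetical 16 kHz mono file
print(speech2text(speech)[0][0])  # best hypothesis text
```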
## ASR config
<details><summary>expand</summary>
```
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR model
### 'espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp'
This model was trained by simpleoier using librispeech recipe in espnet.
### Demo: How to use in ESPnet2
## ASR config
<details><summary>expand</summary>
</details>
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR model",
"### 'espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp'\n\nThis model was trained by simpleoier using librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## ASR config\n\n<details><summary>expand</summary>\n\n\n\n</details>",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR model",
"### 'espnet/simpleoier_librispeech_asr_train_asr_conformer7_hubert_ll60k_large_raw_en_bpe5000_sp'\n\nThis model was trained by simpleoier using librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## ASR config\n\n<details><summary>expand</summary>\n\n\n\n</details>",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b0ff60946ada6753af79423a2e6063984bec2926
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp
```
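As a hedged alternative to the `--download_model` flag above, the same archive can be fetched programmatically with `espnet_model_zoo`; the cache directory and the exact keys of the returned mapping are assumptions here.
```python
# Programmatic download sketch (assumes the espnet_model_zoo package is installed).
from espnet_model_zoo.downloader import ModelDownloader

d = ModelDownloader("downloads")  # cache directory is illustrative
files = d.download_and_unpack(
    "espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp"
)
# 'files' should map entries such as asr_train_config / asr_model_file to local
# paths, which can then be handed to espnet2.bin.asr_inference.Speech2Text.
print(files)
```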
## ASR config
<details><summary>expand</summary>
```
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR model
### 'espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp'
This model was trained by simpleoier using librispeech recipe in espnet.
### Demo: How to use in ESPnet2
## ASR config
<details><summary>expand</summary>
</details>
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR model",
"### 'espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp'\n\nThis model was trained by simpleoier using librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## ASR config\n\n<details><summary>expand</summary>\n\n\n\n</details>",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR model",
"### 'espnet/simpleoier_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp'\n\nThis model was trained by simpleoier using librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## ASR config\n\n<details><summary>expand</summary>\n\n\n\n</details>",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/simpleoier_librispeech_asr_train_asr_conformer7_wavlm_large_raw_en_bpe5000_sp`
This model was trained by simpleoier using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout b0ff60946ada6753af79423a2e6063984bec2926
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/simpleoier_librispeech_asr_train_asr_conformer7_wavlm_large_raw_en_bpe5000_sp
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Jan 4 20:52:48 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: ``
- Commit date: ``
## asr_train_asr_conformer7_wavlm_large_raw_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|54402|98.4|1.4|0.1|0.2|1.7|23.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|50948|96.7|3.0|0.3|0.3|3.6|35.5|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|52576|98.4|1.5|0.1|0.2|1.8|23.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|52343|96.7|3.0|0.3|0.4|3.7|37.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|288456|99.7|0.2|0.2|0.2|0.5|23.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|265951|98.9|0.6|0.4|0.4|1.5|35.5|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|281530|99.6|0.2|0.2|0.2|0.6|23.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|272758|99.1|0.5|0.4|0.4|1.3|37.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|68010|98.2|1.4|0.4|0.3|2.1|23.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|63110|96.0|3.1|0.9|0.9|4.9|35.5|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|65818|98.1|1.4|0.5|0.4|2.3|23.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|65101|96.1|2.9|1.0|0.8|4.7|37.9|
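Reading the tables: `Err` is the usual error rate, i.e. substitutions + deletions + insertions relative to the number of reference units (`Wrd`), and `S.Err` is the fraction of sentences containing at least one error. A tiny sanity-check sketch with illustrative raw counts (the tables themselves report percentages only):
```python
# Hedged sanity check for the score tables above; the counts are illustrative,
# chosen to roughly reproduce the dev_clean WER row (1.4 / 0.1 / 0.2 -> 1.7).
def error_rate(sub: int, dele: int, ins: int, ref_len: int) -> float:
    return 100.0 * (sub + dele + ins) / ref_len

print(round(error_rate(sub=762, dele=54, ins=109, ref_len=54402), 1))  # -> 1.7
```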
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer7_wavlm_large.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer7_wavlm_large_raw_en_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
num_targets: 1
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45342
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 35
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 3
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 40000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_960_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_960_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0025
scheduler: warmuplr
scheduler_conf:
warmup_steps: 40000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ▁I
- ▁HE
- ▁THAT
- ▁WAS
- ED
- ▁IT
- ''''
- ▁HIS
- ING
- ▁YOU
- ▁WITH
- ▁FOR
- ▁HAD
- T
- ▁AS
- ▁HER
- ▁IS
- ▁BE
- ▁BUT
- ▁NOT
- ▁SHE
- D
- ▁AT
- ▁ON
- LY
- ▁HIM
- ▁THEY
- ▁ALL
- ▁HAVE
- ▁BY
- ▁SO
- ▁THIS
- ▁MY
- ▁WHICH
- ▁ME
- ▁SAID
- ▁FROM
- ▁ONE
- Y
- E
- ▁WERE
- ▁WE
- ▁NO
- N
- ▁THERE
- ▁OR
- ER
- ▁AN
- ▁WHEN
- ▁ARE
- ▁THEIR
- ▁WOULD
- ▁IF
- ▁WHAT
- ▁THEM
- ▁WHO
- ▁OUT
- M
- ▁DO
- ▁WILL
- ▁UP
- ▁BEEN
- P
- R
- ▁MAN
- ▁THEN
- ▁COULD
- ▁MORE
- C
- ▁INTO
- ▁NOW
- ▁VERY
- ▁YOUR
- ▁SOME
- ▁LITTLE
- ES
- ▁TIME
- RE
- ▁CAN
- ▁LIKE
- LL
- ▁ABOUT
- ▁HAS
- ▁THAN
- ▁DID
- ▁UPON
- ▁OVER
- IN
- ▁ANY
- ▁WELL
- ▁ONLY
- B
- ▁SEE
- ▁GOOD
- ▁OTHER
- ▁TWO
- L
- ▁KNOW
- ▁GO
- ▁DOWN
- ▁BEFORE
- A
- AL
- ▁OUR
- ▁OLD
- ▁SHOULD
- ▁MADE
- ▁AFTER
- ▁GREAT
- ▁DAY
- ▁MUST
- ▁COME
- ▁HOW
- ▁SUCH
- ▁CAME
- LE
- ▁WHERE
- ▁US
- ▁NEVER
- ▁THESE
- ▁MUCH
- ▁DE
- ▁MISTER
- ▁WAY
- G
- ▁S
- ▁MAY
- ATION
- ▁LONG
- OR
- ▁AM
- ▁FIRST
- ▁BACK
- ▁OWN
- ▁RE
- ▁AGAIN
- ▁SAY
- ▁MEN
- ▁WENT
- ▁HIMSELF
- ▁HERE
- NESS
- ▁THINK
- V
- IC
- ▁EVEN
- ▁THOUGHT
- ▁HAND
- ▁JUST
- ▁O
- ▁UN
- VE
- ION
- ▁ITS
- 'ON'
- ▁MAKE
- ▁MIGHT
- ▁TOO
- K
- ▁AWAY
- ▁LIFE
- TH
- ▁WITHOUT
- ST
- ▁THROUGH
- ▁MOST
- ▁TAKE
- ▁DON
- ▁EVERY
- F
- O
- ▁SHALL
- ▁THOSE
- ▁EYES
- AR
- ▁STILL
- ▁LAST
- ▁HOUSE
- ▁HEAD
- ABLE
- ▁NOTHING
- ▁NIGHT
- ITY
- ▁LET
- ▁MANY
- ▁OFF
- ▁BEING
- ▁FOUND
- ▁WHILE
- EN
- ▁SAW
- ▁GET
- ▁PEOPLE
- ▁FACE
- ▁YOUNG
- CH
- ▁UNDER
- ▁ONCE
- ▁TELL
- AN
- ▁THREE
- ▁PLACE
- ▁ROOM
- ▁YET
- ▁SAME
- IL
- US
- U
- ▁FATHER
- ▁RIGHT
- EL
- ▁THOUGH
- ▁ANOTHER
- LI
- RI
- ▁HEART
- IT
- ▁PUT
- ▁TOOK
- ▁GIVE
- ▁EVER
- ▁E
- ▁PART
- ▁WORK
- ERS
- ▁LOOK
- ▁NEW
- ▁KING
- ▁MISSUS
- ▁SIR
- ▁LOVE
- ▁MIND
- ▁LOOKED
- W
- RY
- ▁ASKED
- ▁LEFT
- ET
- ▁LIGHT
- CK
- ▁DOOR
- ▁MOMENT
- RO
- ▁WORLD
- ▁THINGS
- ▁HOME
- UL
- ▁THING
- LA
- ▁WHY
- ▁MOTHER
- ▁ALWAYS
- ▁FAR
- FUL
- ▁WATER
- CE
- IVE
- UR
- ▁HEARD
- ▁SOMETHING
- ▁SEEMED
- I
- LO
- ▁BECAUSE
- OL
- ▁END
- ▁TOLD
- ▁CON
- ▁YES
- ▁GOING
- ▁GOT
- RA
- IR
- ▁WOMAN
- ▁GOD
- EST
- TED
- ▁FIND
- ▁KNEW
- ▁SOON
- ▁EACH
- ▁SIDE
- H
- TON
- MENT
- ▁OH
- NE
- Z
- LING
- ▁AGAINST
- TER
- ▁NAME
- ▁MISS
- ▁QUITE
- ▁WANT
- ▁YEARS
- ▁FEW
- ▁BETTER
- ENT
- ▁HALF
- ▁DONE
- ▁ALSO
- ▁BEGAN
- ▁HAVING
- ▁ENOUGH
- IS
- ▁LADY
- ▁WHOLE
- LESS
- ▁BOTH
- ▁SEEN
- ▁SET
- ▁WHITE
- ▁COURSE
- IES
- ▁VOICE
- ▁CALLED
- ▁D
- ▁EX
- ATE
- ▁TURNED
- ▁GAVE
- ▁C
- ▁POOR
- MAN
- UT
- NA
- ▁DEAR
- ISH
- ▁GIRL
- ▁MORNING
- ▁BETWEEN
- LED
- ▁NOR
- IA
- ▁AMONG
- MA
- ▁
- ▁SMALL
- ▁REST
- ▁WHOM
- ▁FELT
- ▁HANDS
- ▁MYSELF
- ▁HIGH
- ▁M
- ▁HOWEVER
- ▁HERSELF
- ▁P
- CO
- ▁STOOD
- ID
- ▁KIND
- ▁HUNDRED
- AS
- ▁ROUND
- ▁ALMOST
- TY
- ▁SINCE
- ▁G
- AM
- ▁LA
- SE
- ▁BOY
- ▁MA
- ▁PERHAPS
- ▁WORDS
- ATED
- ▁HO
- X
- ▁MO
- ▁SAT
- ▁REPLIED
- ▁FOUR
- ▁ANYTHING
- ▁TILL
- ▁UNTIL
- ▁BLACK
- TION
- ▁CRIED
- RU
- TE
- ▁FACT
- ▁HELP
- ▁NEXT
- ▁LOOKING
- ▁DOES
- ▁FRIEND
- ▁LAY
- ANCE
- ▁POWER
- ▁BROUGHT
- VER
- ▁FIRE
- ▁KEEP
- PO
- FF
- ▁COUNTRY
- ▁SEA
- ▁WORD
- ▁CAR
- ▁DAYS
- ▁TOGETHER
- ▁IMP
- ▁REASON
- KE
- ▁INDEED
- TING
- ▁MATTER
- ▁FULL
- ▁TEN
- TIC
- ▁LAND
- ▁RATHER
- ▁AIR
- ▁HOPE
- ▁DA
- ▁OPEN
- ▁FEET
- ▁EN
- ▁FIVE
- ▁POINT
- ▁CO
- OM
- ▁LARGE
- ▁B
- ▁CL
- ME
- ▁GONE
- ▁CHILD
- INE
- GG
- ▁BEST
- ▁DIS
- UM
- ▁HARD
- ▁LORD
- OUS
- ▁WIFE
- ▁SURE
- ▁FORM
- DE
- ▁DEATH
- ANT
- ▁NATURE
- ▁BA
- ▁CARE
- ▁BELIEVE
- PP
- ▁NEAR
- ▁RO
- ▁RED
- ▁WAR
- IE
- ▁SPEAK
- ▁FEAR
- ▁CASE
- ▁TAKEN
- ▁ALONG
- ▁CANNOT
- ▁HEAR
- ▁THEMSELVES
- CI
- ▁PRESENT
- AD
- ▁MASTER
- ▁SON
- ▁THUS
- ▁LI
- ▁LESS
- ▁SUN
- ▁TRUE
- IM
- IOUS
- ▁THOUSAND
- ▁MONEY
- ▁W
- ▁BEHIND
- ▁CHILDREN
- ▁DOCTOR
- AC
- ▁TWENTY
- ▁WISH
- ▁SOUND
- ▁WHOSE
- ▁LEAVE
- ▁ANSWERED
- ▁THOU
- ▁DUR
- ▁HA
- ▁CERTAIN
- ▁PO
- ▁PASSED
- GE
- TO
- ▁ARM
- ▁LO
- ▁STATE
- ▁ALONE
- TA
- ▁SHOW
- ▁NEED
- ▁LIVE
- ND
- ▁DEAD
- ENCE
- ▁STRONG
- ▁PRE
- ▁TI
- ▁GROUND
- SH
- TI
- ▁SHORT
- IAN
- UN
- ▁PRO
- ▁HORSE
- MI
- ▁PRINCE
- ARD
- ▁FELL
- ▁ORDER
- ▁CALL
- AT
- ▁GIVEN
- ▁DARK
- ▁THEREFORE
- ▁CLOSE
- ▁BODY
- ▁OTHERS
- ▁SENT
- ▁SECOND
- ▁OFTEN
- ▁CA
- ▁MANNER
- MO
- NI
- ▁BRING
- ▁QUESTION
- ▁HOUR
- ▁BO
- AGE
- ▁ST
- ▁TURN
- ▁TABLE
- ▁GENERAL
- ▁EARTH
- ▁BED
- ▁REALLY
- ▁SIX
- 'NO'
- IST
- ▁BECOME
- ▁USE
- ▁READ
- ▁SE
- ▁VI
- ▁COMING
- ▁EVERYTHING
- ▁EM
- ▁ABOVE
- ▁EVENING
- ▁BEAUTIFUL
- ▁FEEL
- ▁RAN
- ▁LEAST
- ▁LAW
- ▁ALREADY
- ▁MEAN
- ▁ROSE
- WARD
- ▁ITSELF
- ▁SOUL
- ▁SUDDENLY
- ▁AROUND
- RED
- ▁ANSWER
- ICAL
- ▁RA
- ▁WIND
- ▁FINE
- ▁WON
- ▁WHETHER
- ▁KNOWN
- BER
- NG
- ▁TA
- ▁CAPTAIN
- ▁EYE
- ▁PERSON
- ▁WOMEN
- ▁SORT
- ▁ASK
- ▁BROTHER
- ▁USED
- ▁HELD
- ▁BIG
- ▁RETURNED
- ▁STRANGE
- ▁BU
- ▁PER
- ▁FREE
- ▁EITHER
- ▁WITHIN
- ▁DOUBT
- ▁YEAR
- ▁CLEAR
- ▁SIGHT
- ▁GRA
- ▁LOST
- ▁KEPT
- ▁F
- PE
- ▁BAR
- ▁TOWN
- ▁SLEEP
- ARY
- ▁HAIR
- ▁FRIENDS
- ▁DREAM
- ▁FELLOW
- PER
- ▁DEEP
- QUE
- ▁BECAME
- ▁REAL
- ▁PAST
- ▁MAKING
- RING
- ▁COMP
- ▁ACT
- ▁BAD
- HO
- STER
- ▁YE
- ▁MEANS
- ▁RUN
- MEN
- ▁DAUGHTER
- ▁SENSE
- ▁CITY
- ▁SOMETIMES
- ▁TOWARDS
- ▁ROAD
- ▁SP
- ▁LU
- ▁READY
- ▁FOOT
- ▁COLD
- ▁SA
- ▁LETTER
- ▁ELSE
- ▁MAR
- ▁STA
- BE
- ▁TRUTH
- ▁LE
- BO
- ▁BUSINESS
- CHE
- ▁JOHN
- ▁SUBJECT
- ▁COURT
- ▁IDEA
- ILY
- ▁RIVER
- ATING
- ▁FAMILY
- HE
- ▁DIDN
- ▁GLAD
- ▁SEVERAL
- IAL
- ▁UNDERSTAND
- ▁SC
- ▁POSSIBLE
- ▁DIFFERENT
- ▁RETURN
- ▁ARMS
- ▁LOW
- ▁HOLD
- ▁TALK
- ▁RU
- ▁WINDOW
- ▁INTEREST
- ▁SISTER
- SON
- ▁SH
- ▁BLOOD
- ▁SAYS
- ▁CAP
- ▁DI
- ▁HUMAN
- ▁CAUSE
- NCE
- ▁THANK
- ▁LATE
- GO
- ▁CUT
- ▁ACROSS
- ▁STORY
- NT
- ▁COUNT
- ▁ABLE
- DY
- LEY
- ▁NUMBER
- ▁STAND
- ▁CHURCH
- ▁THY
- ▁SUPPOSE
- LES
- BLE
- OP
- ▁EFFECT
- BY
- ▁K
- ▁NA
- ▁SPOKE
- ▁MET
- ▁GREEN
- ▁HUSBAND
- ▁RESPECT
- ▁PA
- ▁FOLLOWED
- ▁REMEMBER
- ▁LONGER
- ▁AGE
- ▁TAKING
- ▁LINE
- ▁SEEM
- ▁HAPPY
- LAND
- EM
- ▁STAY
- ▁PLAY
- ▁COMMON
- ▁GA
- ▁BOOK
- ▁TIMES
- ▁OBJECT
- ▁SEVEN
- QUI
- DO
- UND
- ▁FL
- ▁PRETTY
- ▁FAIR
- WAY
- ▁WOOD
- ▁REACHED
- ▁APPEARED
- ▁SWEET
- ▁FALL
- BA
- ▁PASS
- ▁SIGN
- ▁TREE
- IONS
- ▁GARDEN
- ▁ILL
- ▁ART
- ▁REMAIN
- ▁OPENED
- ▁BRIGHT
- ▁STREET
- ▁TROUBLE
- ▁PAIN
- ▁CONTINUED
- ▁SCHOOL
- OUR
- ▁CARRIED
- ▁SAYING
- HA
- ▁CHANGE
- ▁FOLLOW
- ▁GOLD
- ▁SW
- ▁FEELING
- ▁COMMAND
- ▁BEAR
- ▁CERTAINLY
- ▁BLUE
- ▁NE
- CA
- ▁WILD
- ▁ACCOUNT
- ▁OUGHT
- UD
- ▁T
- ▁BREATH
- ▁WANTED
- ▁RI
- ▁HEAVEN
- ▁PURPOSE
- ▁CHARACTER
- ▁RICH
- ▁PE
- ▁DRESS
- OS
- FA
- ▁TH
- ▁ENGLISH
- ▁CHANCE
- ▁SHIP
- ▁VIEW
- ▁TOWARD
- AK
- ▁JOY
- ▁JA
- ▁HAR
- ▁NEITHER
- ▁FORCE
- ▁UNCLE
- DER
- ▁PLAN
- ▁PRINCESS
- DI
- ▁CHIEF
- ▁HAT
- ▁LIVED
- ▁AB
- ▁VISIT
- ▁MOR
- TEN
- ▁WALL
- UC
- ▁MINE
- ▁PLEASURE
- ▁SMILE
- ▁FRONT
- ▁HU
- ▁DEAL
- OW
- ▁FURTHER
- GED
- ▁TRIED
- DA
- VA
- ▁NONE
- ▁ENTERED
- ▁QUEEN
- ▁PAY
- ▁EL
- ▁EXCEPT
- ▁SHA
- ▁FORWARD
- ▁EIGHT
- ▁ADDED
- ▁PUBLIC
- ▁EIGHTEEN
- ▁STAR
- ▁HAPPENED
- ▁LED
- ▁WALKED
- ▁ALTHOUGH
- ▁LATER
- ▁SPIRIT
- ▁WALK
- ▁BIT
- ▁MEET
- LIN
- ▁FI
- LT
- ▁MOUTH
- ▁WAIT
- ▁HOURS
- ▁LIVING
- ▁YOURSELF
- ▁FAST
- ▁CHA
- ▁HALL
- ▁BEYOND
- ▁BOAT
- ▁SECRET
- ENS
- ▁CHAIR
- RN
- ▁RECEIVED
- ▁CAT
- RESS
- ▁DESIRE
- ▁GENTLEMAN
- UGH
- ▁LAID
- EVER
- ▁OCCASION
- ▁WONDER
- ▁GU
- ▁PARTY
- DEN
- ▁FISH
- ▁SEND
- ▁NEARLY
- ▁TRY
- CON
- ▁SEEMS
- RS
- ▁BELL
- ▁BRA
- ▁SILENCE
- IG
- ▁GUARD
- ▁DIE
- ▁DOING
- ▁TU
- ▁COR
- ▁EARLY
- ▁BANK
- ▁FIGURE
- IF
- ▁ENGLAND
- ▁MARY
- ▁AFRAID
- LER
- ▁FO
- ▁WATCH
- ▁FA
- ▁VA
- ▁GRE
- ▁AUNT
- PED
- ▁SERVICE
- ▁JE
- ▁PEN
- ▁MINUTES
- ▁PAN
- ▁TREES
- NED
- ▁GLASS
- ▁TONE
- ▁PLEASE
- ▁FORTH
- ▁CROSS
- ▁EXCLAIMED
- ▁DREW
- ▁EAT
- ▁AH
- ▁GRAVE
- ▁CUR
- PA
- URE
- CENT
- ▁MILES
- ▁SOFT
- ▁AGO
- ▁POSITION
- ▁WARM
- ▁LENGTH
- ▁NECESSARY
- ▁THINKING
- ▁PICTURE
- ▁PI
- SHIP
- IBLE
- ▁HEAVY
- ▁ATTENTION
- ▁DOG
- ABLY
- ▁STANDING
- ▁NATURAL
- ▁APPEAR
- OV
- ▁CAUGHT
- VO
- ISM
- ▁SPRING
- ▁EXPERIENCE
- ▁PAT
- OT
- ▁STOPPED
- ▁REGARD
- ▁HARDLY
- ▁SELF
- ▁STRENGTH
- ▁GREW
- ▁KNIGHT
- ▁OPINION
- ▁WIDE
- ▁INSTEAD
- ▁SOUTH
- ▁TRANS
- ▁CORNER
- ▁LEARN
- ▁ISLAND
- ▁MI
- ▁THIRD
- ▁STE
- ▁STRAIGHT
- ▁TEA
- ▁BOUND
- ▁SEEING
- ▁JU
- ▁DINNER
- ▁BEAUTY
- ▁PEACE
- AH
- ▁REP
- ▁SILENT
- ▁CRE
- ALLY
- RIC
- ▁STEP
- ▁VER
- ▁JO
- GER
- ▁SITTING
- ▁THIRTY
- ▁SAVE
- ENED
- ▁GLANCE
- ▁REACH
- ▁ACTION
- ▁SAL
- ▁SAD
- ▁STONE
- ITIES
- ▁FRENCH
- ▁STRUCK
- ▁PAPER
- ▁WHATEVER
- ▁SUB
- ▁DISTANCE
- ▁WRONG
- ▁KNOWLEDGE
- ▁SAFE
- ▁SNOW
- ▁MUSIC
- ▁FIFTY
- RON
- ▁ATTEMPT
- ▁GOVERNMENT
- TU
- ▁CROWD
- ▁BESIDES
- ▁LOVED
- ▁BOX
- ▁DIRECTION
- ▁TRAIN
- ▁NORTH
- ▁THICK
- ▁GETTING
- AV
- ▁FLOOR
- ▁COMPANY
- ▁BLOW
- ▁PLAIN
- TRO
- ▁BESIDE
- ▁ROCK
- ▁IMMEDIATELY
- FI
- ▁SHADOW
- ▁SIT
- ORS
- ILE
- ▁DRINK
- ▁SPOT
- ▁DANGER
- ▁AL
- ▁SAINT
- ▁SLOWLY
- ▁PALACE
- IER
- ▁RESULT
- ▁PETER
- ▁FOREST
- ▁BELONG
- ▁SU
- ▁PAR
- RIS
- ▁TEARS
- ▁APPEARANCE
- ▁GATE
- BU
- ITION
- ▁QUICKLY
- ▁QUIET
- ▁LONDON
- ▁START
- ▁BROWN
- TRA
- KIN
- ▁CONSIDER
- ▁BATTLE
- ▁ANNE
- ▁PIECE
- ▁DIED
- ▁SUCCESS
- ▁LIPS
- ▁FILLED
- ▁FORGET
- ▁POST
- IFIED
- ▁MARGARET
- ▁FOOD
- HAM
- ▁PLEASANT
- ▁FE
- ▁EXPRESSION
- ▁POCKET
- ▁FRESH
- ▁WEAR
- TRI
- ▁BROKEN
- ▁LAUGHED
- GING
- ▁FOLLOWING
- WN
- IP
- ▁TOUCH
- ▁YOUTH
- ATIVE
- ▁LEG
- ▁WEEK
- ▁REMAINED
- ▁EASY
- NER
- RK
- ▁ENTER
- ▁FIGHT
- ▁PLACED
- ▁TRAVEL
- ▁SIMPLE
- ▁GIRLS
- ▁WAITING
- ▁STOP
- ▁WAVE
- AU
- ▁WISE
- ▁CAMP
- TURE
- UB
- ▁VE
- ▁OFFICE
- ▁GRAND
- ▁FIT
- ▁JUDGE
- UP
- MENTS
- ▁QUICK
- HI
- ▁FLO
- RIES
- VAL
- ▁COMFORT
- ▁PARTICULAR
- ▁STARTED
- ▁SUIT
- ▁NI
- ▁PALE
- ▁IMPOSSIBLE
- ▁HOT
- ▁CONVERSATION
- ▁SCENE
- ▁BOYS
- ▁WIN
- ▁BRE
- ▁SOCIETY
- ▁OUTSIDE
- ▁WRITE
- ▁EFFORT
- ▁TALKING
- ▁FORTUNE
- ▁NINE
- ▁WA
- ▁SINGLE
- ▁RULE
- ▁PORT
- ▁WINTER
- ▁CAST
- ▁CRA
- ▁HAPPEN
- ▁CRO
- ▁SHUT
- NING
- ▁GUN
- ▁NOBLE
- ▁BEGIN
- ▁PATH
- ▁SKY
- ▁WONDERFUL
- ▁SUDDEN
- ▁ARMY
- ▁CHE
- ▁WORTH
- ▁MOUNTAIN
- ▁MIN
- AG
- ▁FLU
- ▁GRACE
- ▁CHAPTER
- ▁BELOW
- ▁RING
- ▁TURNING
- ▁IRON
- ▁TOP
- ▁AFTERNOON
- ORY
- ▁EVIL
- ▁TRUST
- ▁BOW
- ▁TRI
- ▁SAIL
- ▁CONTENT
- ▁HORSES
- ITE
- ▁SILVER
- AP
- ▁LAD
- ▁RUNNING
- ▁HILL
- ▁BEGINNING
- ▁MAD
- ▁HABIT
- GRA
- ▁CLOTHES
- ▁MORROW
- ▁CRY
- ▁FASHION
- ▁PRESENCE
- ▁Z
- FE
- ▁ARRIVED
- ▁QUARTER
- ▁PERFECT
- ▁WO
- ▁TRA
- ▁USUAL
- ▁NECK
- ▁MARRIED
- ▁SEAT
- ▁WI
- ▁GAR
- ▁SAND
- ▁SHORE
- ▁GIVING
- NY
- ▁PROBABLY
- ▁MINUTE
- ▁EXPECT
- ▁DU
- ▁SHOT
- ▁INSTANT
- ▁DEGREE
- ▁COLOR
- ▁WEST
- RT
- ▁MARCH
- ▁BIRD
- ▁SHOWED
- ▁GREATER
- ▁SERIOUS
- ▁CARRY
- ▁COVERED
- ▁FORMER
- ▁LOUD
- ▁MOVED
- ▁MASS
- ▁SEEK
- ▁CHO
- GEN
- ▁ROMAN
- IB
- ▁MOON
- ▁BOARD
- ▁STREAM
- ▁EASILY
- ▁WISHED
- ▁SEARCH
- ▁COULDN
- ▁MONTHS
- ▁SICK
- LIE
- ▁DUTY
- ▁TWELVE
- ▁FAINT
- ▁STRANGER
- ▁SURPRISE
- ▁KILL
- ▁LEAVING
- ▁JOURNEY
- ▁SCARCELY
- ▁RAISED
- ▁SPEAKING
- ▁TERRIBLE
- ▁TOM
- ▁FIELD
- ▁GAME
- ▁QUA
- ▁PROMISE
- ▁LIE
- ▁CONDITION
- ▁TRO
- ▁PERSONAL
- ▁TALL
- ▁STICK
- ▁THREW
- ▁MARRY
- ▁VAN
- ▁BURN
- ▁ACCORDING
- ▁RISE
- ▁ATTACK
- ▁SWORD
- ▁GUESS
- ▁THOUGHTS
- ▁THIN
- ▁THROW
- ▁CALM
- SIDE
- ▁VILLAGE
- ▁DEN
- ▁ANXIOUS
- ▁MER
- GI
- ▁EXPECTED
- ▁BALL
- ▁ESPECIALLY
- ▁CHARGE
- ▁MEASURE
- ISE
- ▁NICE
- ▁TRYING
- ▁ALLOW
- ▁SHARP
- ▁BREAD
- ▁HONOUR
- ▁HONOR
- ▁ENTIRELY
- ▁BILL
- ▁BRI
- ▁WRITTEN
- ▁AR
- ▁BROKE
- ▁KILLED
- ▁MARK
- ▁VEN
- ▁LADIES
- ▁LEARNED
- ▁FLOWERS
- PLE
- ▁FORTY
- ▁OFFER
- ▁HAPPINESS
- ▁PRAY
- ▁CLASS
- ▁FER
- ▁PRINCIPLE
- GU
- ▁BOOKS
- ▁SHAPE
- ▁SUMMER
- ▁JACK
- ▁DRAW
- ▁GOLDEN
- ▁DECIDED
- ▁LEAD
- ▁UNLESS
- ▁HARM
- ▁LISTEN
- HER
- ▁SHOOK
- ▁INFLUENCE
- ▁PERFECTLY
- ▁MARRIAGE
- ▁BROAD
- ▁ESCAPE
- ▁STATES
- ▁MIDDLE
- ▁PLANT
- ▁MIL
- ▁MOVEMENT
- ▁NOISE
- ▁ENEMY
- ▁HISTORY
- ▁BREAK
- ROUS
- ▁UNDERSTOOD
- ▁LATTER
- FER
- ▁COMES
- ▁MERELY
- ▁SIMPLY
- WI
- ▁IMAGINE
- ▁LOWER
- ▁CONDUCT
- ▁BORN
- WA
- ▁YARD
- ▁KA
- ▁CLOSED
- ▁NOTE
- GA
- ▁STRA
- RAN
- ▁EXIST
- EV
- ▁SPEECH
- ▁BITTER
- JO
- ▁MAKES
- ▁GRASS
- ▁REPLY
- ▁CHANGED
- ▁MON
- ▁LYING
- ▁DANCE
- ▁FINALLY
- ▁AMERICAN
- ▁ENJOY
- ▁CONTAIN
- ▁MEANT
- USE
- ▁OBSERVED
- THER
- ▁LAUGH
- ▁AFTERWARDS
- ▁BEAT
- ▁RACE
- ▁EQUAL
- ▁RAIN
- PS
- ▁STEPS
- ▁BENEATH
- ▁TAIL
- ▁TASTE
- IO
- EY
- ▁CHAR
- ▁GE
- GN
- TIN
- ▁GROW
- ▁TE
- IANS
- ▁MOVE
- ▁REPEATED
- ▁DRIVE
- TUR
- ▁SI
- CLOCK
- ▁BRAVE
- ▁MADAME
- ▁LOT
- ▁CASTLE
- ▁HI
- AND
- ▁FUTURE
- ▁RELATION
- ▁SORRY
- ▁HEALTH
- ▁DICK
- ▁R
- ▁BUILDING
- ▁EDGE
- ▁BLESS
- ▁SPITE
- WE
- ▁MIS
- ▁PRISONER
- ▁ALLOWED
- ▁PH
- ▁CATCH
- MER
- ETH
- ▁COAT
- ▁COMPLETE
- ▁WOULDN
- ▁CREATURE
- ▁YELLOW
- ▁IMPORTANT
- ▁ADD
- ▁PASSING
- ▁DARKNESS
- ▁CARRIAGE
- ▁MILL
- ▁FIFTEEN
- NCY
- ▁HUNG
- ▁OB
- ▁PLEASED
- ▁SPREAD
- ▁CURIOUS
- ▁WORSE
- ▁CIRCUMSTANCES
- ▁GI
- LAR
- ▁CAL
- ▁HY
- ▁MERE
- ▁JANE
- ▁EAST
- BI
- ▁CUP
- ▁BLIND
- ▁PASSION
- ▁DISCOVERED
- ▁NOTICE
- ▁REPORT
- ▁SPACE
- ▁PRESENTLY
- ▁SORROW
- ▁PACK
- ▁DIN
- CY
- ▁DRY
- ▁ANCIENT
- ▁DRESSED
- ▁COVER
- ▁VO
- ▁EXISTENCE
- ▁EXACTLY
- ▁BEAST
- ▁PROPER
- ▁DROPPED
- ▁CLEAN
- ▁COLOUR
- ▁HOST
- ▁CHAMBER
- ▁FAITH
- LET
- ▁DETERMINED
- ▁PRIEST
- ▁STORM
- ▁SKIN
- ▁DARE
- ▁PERSONS
- ▁PICK
- ▁NARROW
- ▁SUPPORT
- ▁PRIVATE
- ▁SMILED
- ▁COUSIN
- ▁DRAWING
- ▁ATTEND
- ▁COOK
- ▁PREVENT
- ▁VARIOUS
- ▁BLA
- ▁FIXED
- ▁WEAK
- THE
- ▁HOLE
- ▁BOTTOM
- ▁NOBODY
- ADE
- ▁LEGS
- ITCH
- ▁INDIVIDUAL
- ▁EARS
- LIKE
- ▁ADVANTAGE
- ▁FRANCE
- ▁BON
- ▁WINE
- ▁LIVES
- OD
- ▁WALLS
- ▁TIRED
- ▁SHOP
- ▁ANIMAL
- ▁CRU
- ▁WROTE
- ▁ROYAL
- ▁CONSIDERED
- ▁MORAL
- ▁COMPANION
- ▁LOSE
- ▁ISN
- ▁BAG
- ▁LAKE
- ▁INTER
- ▁COM
- ▁LETTERS
- ▁LUCK
- ▁EAR
- ▁GERMAN
- ▁PET
- ▁SAKE
- ▁DROP
- ▁PAID
- ▁BREAKFAST
- ▁LABOR
- ▁DESERT
- ▁DECLARED
- ▁HUM
- ▁STUDY
- ▁INSTANCE
- ONE
- ▁SOMEWHAT
- ▁CLOTH
- ▁SPECIAL
- ▁COLONEL
- ▁SONG
- ▁MAIN
- ▁VALUE
- ▁PROUD
- ▁EXPRESS
- ▁NATION
- ▁HANDSOME
- ▁CONFESS
- ▁PU
- ▁PASSAGE
- ▁PERIOD
- ▁CUSTOM
- ▁HURT
- ▁SHOULDER
- ▁CHRIST
- ZA
- ▁RECEIVE
- ▁DIFFICULT
- ▁DEPEND
- ▁MEETING
- ▁CHI
- ▁GEN
- LIGHT
- ▁BELIEVED
- ▁SOCIAL
- ▁DIFFICULTY
- ▁GREATEST
- ▁DRAWN
- ▁GRANT
- ▁BIRDS
- ▁ANGRY
- ▁HEAT
- UFF
- ▁DUE
- ▁PLACES
- ▁SIN
- ▁COURAGE
- ▁EVIDENTLY
- ▁GENTLE
- ▁CRUEL
- ▁GEORGE
- ▁GRI
- ▁SERVANT
- ▁U
- ▁PURE
- OOK
- ▁KNOWS
- ▁KNOWING
- LF
- ▁WRITING
- ▁REMEMBERED
- ▁CU
- ▁HOLDING
- ▁TENDER
- ▁QUI
- ▁BURST
- ▁SURELY
- IGN
- ▁VALLEY
- ▁FU
- ▁BUTTER
- ▁SPOKEN
- ▁STORE
- ▁DISC
- ▁CHRISTIAN
- ▁PARIS
- ▁HENRY
- ▁FINISHED
- ▁PROVE
- ▁FOOL
- ▁SOLDIERS
- ▁LANGUAGE
- ▁INSIDE
- ▁BAN
- ▁FALLEN
- ROW
- ▁MAL
- ▁BABY
- ▁SITUATION
- ▁WATCHED
- ANS
- ▁RUIN
- ▁GENTLEMEN
- ▁FRO
- ▁FANCY
- ▁ACCEPT
- ▁SEASON
- ▁OURSELVES
- ▁SAN
- ▁SPEED
- IZED
- ▁COOL
- ▁SERVE
- ▁VESSEL
- ▁WILLIAM
- ▁OBLIGED
- ▁GROUP
- FORM
- ▁GOES
- UOUS
- ▁LEAVES
- ▁PECULIAR
- ▁NEWS
- ▁VAIN
- ▁EVERYBODY
- ▁PIN
- UG
- ▁FORGOTTEN
- ▁FRA
- GAN
- ▁CAREFULLY
- ▁FLASH
- UCH
- ▁FUR
- ▁MURDER
- ▁DELIGHT
- ▁WAITED
- ▁RENDER
- ▁PROPERTY
- ▁NOTICED
- ▁ROLL
- ▁KNOCK
- ▁EARNEST
- KI
- ▁HONEST
- ▁PROMISED
- ▁BAL
- AW
- ▁WALKING
- ANG
- ▁SQUARE
- ▁QUIETLY
- ▁CLOUD
- WOOD
- ▁FORMED
- ▁HIGHER
- ▁BUILT
- ▁FATE
- ▁TEACH
- MY
- ▁FALSE
- ▁YORK
- ▁DUST
- ▁CLIMB
- ▁FOND
- ▁GROWN
- ▁DESCEND
- ▁RAG
- ▁FRUIT
- ▁GENERALLY
- ▁OFFERED
- ▁ER
- ▁NURSE
- POSE
- ▁SPENT
- ▁JOIN
- ▁STATION
- ▁MEANING
- ▁SMOKE
- HOOD
- ▁ROUGH
- JU
- ▁LIKELY
- ▁SURFACE
- ▁KE
- ▁MONTH
- ▁POSSESSION
- ▁TONGUE
- ▁DUKE
- ▁NOSE
- ▁LAUGHING
- ▁WEATHER
- ▁WHISPERED
- ▁SYSTEM
- ▁LAWS
- DDLE
- ▁TOUCHED
- ▁TRADE
- LD
- ▁SURPRISED
- RIN
- ▁ARCH
- ▁WEALTH
- FOR
- ▁TEMPER
- ▁FRANK
- ▁GAL
- ▁BARE
- ▁OPPORTUNITY
- ▁CLAIM
- ▁ANIMALS
- ▁REV
- ▁COST
- ▁WASH
- ZE
- ▁CORN
- ▁OPPOSITE
- ▁POLICE
- ▁IDEAS
- LON
- ▁KEY
- ▁READING
- ▁COLLECT
- CHED
- ▁H
- ▁CROWN
- ▁TAR
- ▁SWIFT
- ▁SHOULDERS
- ▁ICE
- ▁GRAY
- ▁SHARE
- ▁PREPARED
- ▁GRO
- ▁UND
- ▁TER
- ▁EMPTY
- CING
- ▁SMILING
- ▁AVOID
- ▁DIFFERENCE
- ▁EXPLAIN
- ▁POUR
- ▁ATTRACT
- ▁OPENING
- ▁WHEEL
- ▁MATERIAL
- ▁BREAST
- ▁SUFFERING
- ▁DISTINCT
- ▁BOOT
- ▁ROW
- ▁FINGERS
- HAN
- ▁ALTOGETHER
- ▁FAT
- ▁PAPA
- ▁BRAIN
- ▁ASLEEP
- ▁GREY
- ▁SUM
- ▁GAS
- ▁WINDOWS
- ▁ALIVE
- ▁PROCEED
- ▁FLOWER
- ▁LEAP
- ▁PUR
- ▁PIECES
- ▁ALTER
- ▁MEMORY
- IENT
- ▁FILL
- ▁CLO
- ▁THROWN
- ▁KINGDOM
- ▁RODE
- IUS
- ▁MAID
- ▁DIM
- ▁BAND
- ▁VIRTUE
- ▁DISH
- ▁GUEST
- ▁LOSS
- ▁CAUSED
- ▁MOTION
- ▁POT
- ▁MILLION
- ▁FAULT
- ▁LOVELY
- ▁HERO
- PPING
- ▁UNITED
- ▁SPI
- SOME
- BRA
- ▁MOUNTAINS
- ▁NU
- ▁SATISFIED
- ▁DOLLARS
- ▁LOVER
- ▁CONCEAL
- ▁VAST
- ▁PULL
- ▁HATH
- ▁RUSH
- ▁J
- ▁DESPAIR
- EX
- ▁HEIGHT
- ▁CE
- ▁BENT
- ▁PITY
- ▁RISING
- ATH
- ▁PRIDE
- ▁HURRY
- KA
- ▁SETTLED
- ▁JUSTICE
- ▁LIFTED
- PEN
- ▁SOLDIER
- ▁FINDING
- ▁REMARK
- ▁REGULAR
- ▁STRUGGLE
- ▁MACHINE
- ▁SING
- ▁HURRIED
- ▁SUFFICIENT
- ▁REPRESENT
- ▁DOUBLE
- ▁ALARM
- ▁SUPPER
- ▁DREADFUL
- ▁FORE
- ATOR
- ▁STOCK
- ▁TIN
- ▁EXAMPLE
- ▁ROOF
- ▁FLOW
- ▁SUPPOSED
- ▁PRESERV
- ▁L
- ▁LISTENED
- OC
- ▁STO
- ▁SECURE
- ▁FRIGHTENED
- ▁DISTURB
- ▁EMOTION
- ▁SERVANTS
- ▁YO
- ▁BUY
- ▁FORCED
- ▁KITCHEN
- ▁TERROR
- ▁STAIRS
- ▁SIXTY
- KER
- ▁ORDINARY
- ▁DIRECTLY
- ▁HEADS
- ▁METHOD
- ▁FORGIVE
- ▁AWFUL
- ▁REFLECT
- ▁GREATLY
- ▁TALKED
- ▁RIDE
- STONE
- ▁FAVOUR
- ▁WELCOME
- ▁SEIZED
- OU
- ▁CONTROL
- ▁ORDERED
- ▁ANGEL
- ▁USUALLY
- ▁POET
- ▁BOLD
- LINE
- ▁ADVENTURE
- ▁WATCHING
- ▁FOLK
- ▁MISTRESS
- IZE
- ▁GROWING
- ▁CAVE
- ▁EVIDENCE
- ▁FINGER
- ▁SEVENTEEN
- ▁MOVING
- EOUS
- ▁DOESN
- ▁COW
- ▁TYPE
- ▁BOIL
- ▁TALE
- ▁DELIVER
- ▁FARM
- ▁MONSIEUR
- ▁GATHERED
- ▁FEELINGS
- ▁RATE
- ▁REMARKED
- ▁PUTTING
- ▁MAT
- ▁CONTRARY
- ▁CRIME
- ▁PLA
- ▁COL
- ▁NEARER
- TES
- ▁CIVIL
- ▁SHAME
- ▁LOOSE
- ▁DISCOVER
- ▁FLAT
- ▁TWICE
- ▁FAIL
- VIS
- ▁UNC
- EA
- ▁EUROPE
- ▁PATIENT
- ▁UNTO
- ▁SUFFER
- ▁PAIR
- ▁TREASURE
- OSE
- ▁EAGER
- ▁FLY
- ▁N
- ▁VAL
- ▁DAN
- ▁SALT
- ▁BORE
- BBE
- ▁ARTHUR
- ▁AFFAIRS
- ▁SLOW
- ▁CONSIST
- ▁DEVIL
- LAN
- ▁AFFECTION
- ▁ENGAGED
- ▁KISS
- ▁YA
- ▁OFFICER
- IFICATION
- ▁LAMP
- ▁PARTS
- HEN
- ▁MILK
- ▁PROCESS
- ▁GIFT
- ▁PULLED
- ▁HID
- ▁RAY
- ▁EXCELLENT
- ▁IMPRESSION
- ▁AUTHORITY
- ▁PROVED
- ▁TELLING
- TTE
- ▁TOWER
- ▁CONSEQUENCE
- ▁FAVOR
- ▁FLEW
- ▁CHARLES
- ISTS
- ▁ADDRESS
- ▁FAMILIAR
- ▁LIMIT
- ▁CONFIDENCE
- ▁RARE
- ▁WEEKS
- ▁WOODS
- ▁INTENTION
- ▁DIRECT
- ▁PERFORM
- ▁SOLEMN
- ▁DISTANT
- ▁IMAGE
- ▁PRESIDENT
- ▁FIRM
- ▁INDIAN
- ▁RANK
- ▁LIKED
- ▁AGREE
- ▁HOUSES
- ▁WIL
- ▁MATTERS
- ▁PRISON
- ▁MODE
- ▁MAJOR
- ▁WORKING
- ▁SLIP
- ▁WEIGHT
- ▁AWARE
- ▁BUSY
- ▁LOOKS
- ▁WOUND
- ▁THOR
- ▁BATH
- ▁EXERCISE
- ▁SIMILAR
- ▁WORE
- ▁AMOUNT
- ▁QUESTIONS
- ▁VIOLENT
- ▁EXCUSE
- ▁ASIDE
- ▁TUR
- ▁DULL
- OF
- ▁EMPEROR
- ▁NEVERTHELESS
- ▁SHOUT
- ▁EXPLAINED
- ▁SIZE
- ▁ACCOMPLISH
- FORD
- CAN
- ▁MISTAKE
- ▁INSTANTLY
- ▁SMOOTH
- ▁STRIKE
- ▁BOB
- ISED
- ▁HORROR
- ▁SCIENCE
- ▁PROTEST
- ▁MANAGE
- ▁OBEY
- ▁NECESSITY
- ▁SPLENDID
- ▁PRESS
- ▁INTERESTING
- ▁RELIGION
- ▁UNKNOWN
- ▁FIERCE
- ▁DISAPPEARED
- ▁HOLY
- ▁HATE
- ▁PLAYED
- ▁LIN
- ▁NATURALLY
- ▁DROVE
- ▁LOUIS
- TIES
- ▁BRAND
- INESS
- RIE
- ▁SHOOT
- ▁CONSENT
- ▁SEATED
- ▁LINES
- GUE
- ▁AGREED
- ▁CIRCLE
- ▁STIR
- ▁STREETS
- ▁TASK
- ▁RID
- ▁PRODUCED
- ▁ACCIDENT
- ▁WITNESS
- ▁LIBERTY
- ▁DETAIL
- ▁MINISTER
- ▁POWERFUL
- ▁SAVAGE
- ▁SIXTEEN
- ▁PRETEND
- ▁COAST
- ▁SQU
- ▁UTTER
- ▁NAMED
- ▁CLEVER
- ▁ADMIT
- ▁COUPLE
- ▁WICKED
- ▁MESSAGE
- ▁TEMPLE
- ▁STONES
- ▁YESTERDAY
- ▁HILLS
- DAY
- ▁SLIGHT
- ▁DIAMOND
- ▁POSSIBLY
- ▁AFFAIR
- ▁ORIGINAL
- ▁HEARING
- ▁WORTHY
- ▁SELL
- NEY
- ICK
- ▁COTTAGE
- ▁SACRIFICE
- ▁PROGRESS
- ▁SHOCK
- ▁DESIGN
- ▁SOUGHT
- ▁PIT
- ▁SUNDAY
- ▁OTHERWISE
- ▁CABIN
- ▁PRAYER
- ▁DWELL
- ▁GAIN
- ▁BRIDGE
- ▁PARTICULARLY
- ▁YIELD
- ▁TREAT
- RIGHT
- ▁OAK
- ▁ROPE
- WIN
- ▁ORDERS
- ▁SUSPECT
- ▁EDWARD
- AB
- ▁ELEVEN
- ▁TEETH
- ▁OCCURRED
- DDING
- ▁AMERICA
- ▁FALLING
- ▁LION
- ▁DEPART
- ▁KEEPING
- ▁DEMAND
- ▁PAUSED
- ▁CEASED
- INA
- ▁FUN
- ▁CHEER
- ▁PARDON
- ▁NATIVE
- LUS
- LOW
- ▁DOGS
- ▁REQUIRED
- ILITY
- ▁ELECT
- ▁ENTERTAIN
- ITUDE
- ▁HUGE
- ▁CARRYING
- ▁BLU
- ▁INSIST
- ▁SATISFACTION
- ▁HUNT
- ▁COUNTENANCE
- ▁UPPER
- ▁MAIDEN
- ▁FAILED
- ▁JAMES
- ▁FOREIGN
- ▁GATHER
- ▁TEST
- BOARD
- ▁TERMS
- ▁SILK
- ▁BEG
- ▁BROTHERS
- ▁PAGE
- ▁KNEES
- ▁SHOWN
- ▁PROFESSOR
- ▁MIGHTY
- ▁DEFI
- ▁CHARM
- ▁REQUIRE
- ▁LOG
- MORE
- ▁PROOF
- ▁POSSESSED
- ▁SOFTLY
- ▁UNFORTUNATE
- ▁PRICE
- ▁SEVERE
- ▁SINGING
- ▁STAGE
- ▁FREEDOM
- ▁SHOUTED
- ▁FARTHER
- ▁MAJESTY
- ▁PREVIOUS
- ▁GUIDE
- ▁MATCH
- ▁CHEST
- ▁INTENDED
- ▁BI
- ▁EXCITEMENT
- ▁OFFICERS
- ▁SUR
- ▁SHAKE
- ▁SENTIMENT
- ▁GENTLY
- ▁SUCCEEDED
- ▁MENTION
- ▁LOCK
- ▁ACQUAINTANCE
- ▁IMAGINATION
- ▁PHYSICAL
- ▁LEADING
- ▁SLAVE
- ▁CART
- ▁POINTED
- ▁STEAM
- ▁SHADE
- ▁PIPE
- ▁BASE
- ▁INVENT
- ▁ALAS
- ▁WORKED
- ▁REGRET
- ▁BUR
- ▁FAITHFUL
- ▁MENTIONED
- ▁RECORD
- ▁COMPLAIN
- ▁SUPERIOR
- ▁BAY
- ▁PAL
- EMENT
- UE
- ▁SEVENTY
- ▁HOTEL
- ▁SHEEP
- ▁MEAL
- ▁ADVICE
- ▁HIDDEN
- ▁DEMANDED
- ▁CONSCIOUS
- ▁BROW
- ▁POSSESS
- ▁FOURTH
- ▁EVENTS
- ▁FRI
- ▁PRAISE
- ▁ADVANCED
- ▁RESOLVED
- ▁STUFF
- ▁CHEERFUL
- ▁BIRTH
- ▁GRIEF
- ▁AFFORD
- ▁FAIRY
- ▁WAKE
- ▁SIDES
- ▁SUBSTANCE
- ▁ARTICLE
- ▁LEVEL
- ▁MIST
- ▁JOINED
- ▁PRACTICAL
- ▁CLEARLY
- ▁TRACE
- ▁AWAKE
- ▁OBSERVE
- ▁BASKET
- ▁LACK
- VILLE
- ▁SPIRITS
- ▁EXCITED
- ▁ABANDON
- ▁SHINING
- ▁FULLY
- ▁CALLING
- ▁CONSIDERABLE
- ▁SPRANG
- ▁MILE
- ▁DOZEN
- ▁PEA
- ▁DANGEROUS
- ▁WIT
- ▁JEW
- ▁POUNDS
- ▁FOX
- ▁INFORMATION
- ▁LIES
- ▁DECK
- NNY
- ▁PAUL
- ▁STARS
- ▁ANGER
- ▁SETTLE
- ▁WILLING
- ▁ADAM
- ▁FACES
- ▁SMITH
- ▁IMPORTANCE
- ▁STRAIN
- WAR
- ▁SAM
- ▁FEATHER
- ▁SERVED
- ▁AUTHOR
- ▁PERCEIVED
- ▁FLAME
- ▁DIVINE
- ▁TRAIL
- ▁ANYBODY
- ▁SIGH
- ▁DELICATE
- KY
- ▁FOLD
- ▁HAVEN
- ▁DESIRED
- ▁CURIOSITY
- ▁PRACTICE
- ▁CONSIDERATION
- ▁ABSOLUTELY
- ▁CITIZEN
- ▁BOTTLE
- ▁INTERESTED
- ▁MEAT
- ▁OCCUPIED
- ▁CHOOSE
- ▁THROAT
- ETTE
- ▁CANDLE
- ▁DAWN
- ▁PROTECT
- ▁SENTENCE
- IED
- ▁ROCKS
- ▁PORTION
- ▁APPARENTLY
- ▁PRESENTED
- ▁TIGHT
- ▁ACTUALLY
- ▁DYING
- ▁HAM
- ▁DAILY
- ▁SUFFERED
- ▁POLITICAL
- ▁BODIES
- ▁MODERN
- ▁COMPLETELY
- ▁SOONER
- TAN
- ▁PROP
- ▁ADVANCE
- ▁REFUSED
- ▁FARMER
- ▁POLITE
- ▁THUNDER
- ▁BRIEF
- ▁ELSIE
- ▁SAILOR
- ▁SUGGESTED
- ▁PLATE
- ▁AID
- ▁FLESH
- ▁WEEP
- ▁BUCK
- ▁ANTI
- ▁OCEAN
- ▁SPEND
- WELL
- ▁ODD
- ▁GOVERNOR
- ▁ENTRANCE
- ▁SUSPICION
- ▁STEPPED
- ▁RAPIDLY
- ▁CHECK
- ▁HIDE
- ▁FLIGHT
- ▁CLUB
- ▁ENTIRE
- ▁INDIANS
- ASH
- ▁CAPITAL
- ▁MAMMA
- HAR
- ▁CORRECT
- ▁CRACK
- ▁SENSATION
- ▁WORST
- ▁PACE
- ▁MIDST
- ▁AUGUST
- ▁PROPORTION
- ▁INNOCENT
- LINESS
- ▁REGARDED
- ▁DRIVEN
- ORD
- ▁HASTE
- ▁EDUCATION
- ▁EMPLOY
- ▁TRULY
- ▁INSTRUMENT
- ▁MAG
- ▁FRAME
- ▁FOOLISH
- ▁TAUGHT
- ▁HANG
- ▁ARGUMENT
- ▁NINETEEN
- ▁ELDER
- ▁NAY
- ▁NEEDED
- ▁NEIGHBOR
- ▁INSTRUCT
- ▁PAPERS
- ▁REWARD
- ▁EQUALLY
- ▁FIELDS
- ▁DIG
- HIN
- ▁CONDITIONS
- JA
- ▁SPAR
- ▁REQUEST
- ▁WORN
- ▁REMARKABLE
- ▁LOAD
- ▁WORSHIP
- ▁PARK
- ▁KI
- ▁INTERRUPTED
- ▁SKILL
- ▁TERM
- LAC
- ▁CRITIC
- ▁DISTRESS
- ▁BELIEF
- ▁STERN
- IGHT
- ▁TRACK
- ▁HUNTING
- ▁JEWEL
- ▁GRADUALLY
- ▁GLOW
- ▁RUSHED
- ▁MENTAL
- ▁VISITOR
- ▁PICKED
- ▁BEHOLD
- ▁EXPRESSED
- ▁RUB
- ▁SKI
- ARTAGNAN
- ▁MOREOVER
- ▁OPERATION
- ▁CAREFUL
- ▁KEEN
- ▁ASSERT
- ▁WANDER
- ▁ENEMIES
- ▁MYSTERIOUS
- ▁DEPTH
- ▁PREFER
- ▁CROSSED
- ▁CHARMING
- ▁DREAD
- ▁FLOUR
- ▁ROBIN
- ▁TRE
- ▁RELIEF
- ▁INQUIRED
- ▁APPLE
- ▁HENCE
- ▁WINGS
- ▁CHOICE
- ▁JUD
- OO
- ▁SPECIES
- ▁DELIGHTED
- IUM
- ▁RAPID
- ▁APPEAL
- ▁FAMOUS
- ▁USEFUL
- ▁HELEN
- ▁NEWSPAPER
- ▁PLENTY
- ▁BEARING
- ▁NERVOUS
- ▁PARA
- ▁URGE
- ▁ROAR
- ▁WOUNDED
- ▁CHAIN
- ▁PRODUCE
- ▁REFLECTION
- ▁MERCHANT
- ▁QUARREL
- ▁GLORY
- ▁BEGUN
- ▁BARON
- CUS
- ▁QUEER
- ▁MIX
- ▁GAZE
- ▁WHISPER
- ▁BURIED
- ▁DIV
- ▁CARD
- ▁FREQUENTLY
- ▁TIP
- ▁KNEE
- ▁REGION
- ▁ROOT
- ▁LEST
- ▁JEALOUS
- CTOR
- ▁SAVED
- ▁ASKING
- ▁TRIP
- QUA
- ▁UNION
- HY
- ▁COMPANIONS
- ▁SHIPS
- ▁HALE
- ▁APPROACHED
- ▁HARRY
- ▁DRUNK
- ▁ARRIVAL
- ▁SLEPT
- ▁FURNISH
- HEAD
- ▁PIG
- ▁ABSENCE
- ▁PHIL
- ▁HEAP
- ▁SHOES
- ▁CONSCIOUSNESS
- ▁KINDLY
- ▁EVIDENT
- ▁SCAR
- ▁DETERMIN
- ▁GRASP
- ▁STEAL
- ▁OWE
- ▁KNIFE
- ▁PRECIOUS
- ▁ELEMENT
- ▁PROCEEDED
- ▁FEVER
- ▁LEADER
- ▁RISK
- ▁EASE
- ▁GRIM
- ▁MOUNT
- ▁MEANWHILE
- ▁CENTURY
- OON
- ▁JUDGMENT
- ▁AROSE
- ▁VISION
- ▁SPARE
- ▁EXTREME
- ▁CONSTANT
- ▁OBSERVATION
- ▁THRUST
- ▁DELAY
- ▁CENT
- ▁INCLUD
- ▁LIFT
- ▁ADMIRE
- ▁ISSUE
- ▁FRIENDSHIP
- ▁LESSON
- ▁PRINCIPAL
- ▁MOURN
- ▁ACCEPTED
- ▁BURNING
- ▁CAPABLE
- ▁EXTRAORDINARY
- ▁SANG
- ▁REMOVED
- ▁HOPED
- ▁HORN
- ▁ALICE
- ▁MUD
- ▁APARTMENT
- ▁FIGHTING
- ▁BLAME
- ▁TREMBLING
- ▁SOMEBODY
- ▁ANYONE
- ▁BRIDE
- ▁READER
- ▁ROB
- ▁EVERYWHERE
- ▁LABOUR
- ▁RECALL
- ▁BULL
- ▁HIT
- ▁COUNCIL
- ▁POPULAR
- ▁CHAP
- ▁TRIAL
- ▁DUN
- ▁WISHES
- ▁BRILLIANT
- ▁ASSURED
- ▁FORGOT
- ▁CONTINUE
- ▁ACKNOWLEDG
- ▁RETREAT
- ▁INCREASED
- ▁CONTEMPT
- ▁GRANDFATHER
- ▁SYMPATHY
- ▁GHOST
- ▁STRETCHED
- ▁CREATURES
- ▁CAB
- ▁HIND
- ▁PLAYING
- ▁MISERABLE
- ▁MEMBERS
- ▁KINDNESS
- ▁HIGHEST
- ▁PRIM
- ▁KISSED
- ▁DESERVE
- ▁HUT
- ▁BEGGED
- ▁EIGHTY
- ▁CLOSELY
- ▁WONDERED
- ▁MILITARY
- ▁REMIND
- ▁ACCORDINGLY
- ▁LARGER
- ▁MAINTAIN
- ▁ENGINE
- ▁MOTIVE
- ▁DESTROY
- ▁STRIP
- ▁HANS
- ▁AHEAD
- ▁INFINITE
- ▁PROMPT
- ▁INFORMED
- TTLE
- ▁PEER
- ▁PRESSED
- ▁TRAP
- ▁SOMEWHERE
- ▁BOUGHT
- ▁VISIBLE
- ▁ASHAMED
- ▁TEAR
- ▁NEIGHBOUR
- ▁CONSTITUTION
- ▁INTELLIGENCE
- ▁PROFESSION
- ▁HUNGRY
- RIDGE
- ▁SMELL
- ▁STORIES
- ▁LISTENING
- ▁APPROACH
- ▁STRING
- ▁EXPLANATION
- ▁IMMENSE
- ▁RELIGIOUS
- ▁THROUGHOUT
- ▁HOLLOW
- ▁AWAIT
- ▁FLYING
- ▁SCREAM
- ▁ACTIVE
- ▁RUM
- ▁PRODUCT
- ▁UNHAPPY
- ▁VAGUE
- ARIES
- ▁ELIZABETH
- ▁STUPID
- ▁DIGNITY
- ▁ISABEL
- GAR
- ▁BRO
- ▁PITCH
- ▁COMRADE
- ▁STIFF
- ▁RECKON
- ▁SOLD
- ▁SPARK
- ▁STRO
- ▁CRYING
- ▁MAGIC
- ▁REPEAT
- PORT
- ▁MARKED
- ▁COMFORTABLE
- ▁PROJECT
- ▁BECOMING
- ▁PARENTS
- ▁SHELTER
- ▁STOLE
- ▁HINT
- ▁NEST
- ▁TRICK
- ▁THOROUGHLY
- ▁HOSPITAL
- ▁WEAPON
- ▁ROME
- ▁STYLE
- ▁ADMITTED
- ▁SAFETY
- FIELD
- ▁UNDERSTANDING
- ▁TREMBLE
- ▁PRINT
- ▁SLAVES
- ▁WEARY
- ▁ARTIST
- ▁CREDIT
- BURG
- ▁CONCLUSION
- ▁SELDOM
- ▁UNUSUAL
- ▁CLOUDS
- ▁UNABLE
- ▁GAY
- ▁HANGING
- ▁SCR
- ▁BOWED
- ▁DAVID
- ▁VOL
- ▁PUSHED
- ▁ESCAPED
- MOND
- ▁WARN
- ▁BETRAY
- ▁EGGS
- ▁PLAINLY
- ▁EXHIBIT
- ▁DISPLAY
- ▁MEMBER
- ▁GRIN
- ▁PROSPECT
- ▁BRUSH
- ▁BID
- ▁SUCCESSFUL
- ▁EXTENT
- ▁PERSUADE
- ▁MID
- ▁MOOD
- ▁ARRANGED
- ▁UNIVERSAL
- ▁JIM
- ▁SIGNAL
- ▁WHILST
- ▁PHILIP
- ▁WOLF
- RATE
- ▁EAGERLY
- ▁BILLY
- ▁RETURNING
- ▁CONSCIENCE
- ▁FORTUNATE
- ▁FEMALE
- ▁GLEAM
- ▁HASTILY
- ▁PROVIDED
- ▁OBTAIN
- ▁INSTINCT
- ▁CONCERNED
- ▁CONCERNING
- ▁SOMEHOW
- ▁PINK
- ▁RAGE
- ▁ACCUSTOMED
- ▁UNCONSCIOUS
- ▁ADVISE
- ▁BRANCHES
- ▁TINY
- ▁REFUSE
- ▁BISHOP
- ▁SUPPLY
- ▁PEASANT
- ▁LAWYER
- ▁WASTE
- ▁CONNECTION
- ▁DEVELOP
- ▁CORRESPOND
- ▁PLUM
- ▁NODDED
- ▁SLIPPED
- ▁EU
- ▁CONSTANTLY
- CUM
- MMED
- ▁FAIRLY
- HOUSE
- ▁KIT
- ▁RANG
- ▁FEATURES
- ▁PAUSE
- ▁PAINFUL
- ▁JOE
- ▁WHENCE
- ▁LAUGHTER
- ▁COACH
- ▁CHRISTMAS
- ▁EATING
- ▁WHOLLY
- ▁APART
- ▁SUPER
- ▁REVOLUTION
- ▁LONELY
- ▁CHEEKS
- ▁THRONE
- ▁CREW
- ▁ATTAIN
- ▁ESTABLISHED
- TIME
- ▁DASH
- ▁FRIENDLY
- ▁OPERA
- ▁EARL
- ▁EXHAUST
- ▁CLIFF
- ▁REVEAL
- ▁ADOPT
- ▁CENTRE
- ▁MERRY
- ▁SYLVIA
- ▁IDEAL
- ▁MISFORTUNE
- ▁FEAST
- ▁ARAB
- ▁NUT
- ▁FETCH
- ▁FOUGHT
- ▁PILE
- ▁SETTING
- ▁SOURCE
- ▁PERSIST
- ▁MERCY
- ▁BARK
- ▁LUC
- ▁DEEPLY
- ▁COMPARE
- ▁ATTITUDE
- ▁ENDURE
- ▁DELIGHTFUL
- ▁BEARD
- ▁PATIENCE
- ▁LOCAL
- ▁UTTERED
- ▁VICTORY
- ▁TREATED
- ▁SEPARATE
- ▁WAG
- ▁DRAGG
- ▁TITLE
- ▁TROOPS
- ▁TRIUMPH
- ▁REAR
- ▁GAINED
- ▁SINK
- ▁DEFEND
- ▁TIED
- ▁FLED
- ▁DARED
- ▁INCREASE
- ▁POND
- ▁CONQUER
- ▁FOREHEAD
- ▁FAN
- ▁ANXIETY
- ▁ENCOUNTER
- ▁SEX
- ▁HALT
- ▁SANK
- ▁CHEEK
- ▁HUMBLE
- ▁WRITER
- ▁EMPLOYED
- ▁DISTINGUISHED
- ▁RAISE
- ▁WHIP
- ▁GIANT
- ▁RANGE
- ▁OBTAINED
- ▁FLAG
- ▁MAC
- ▁JUMPED
- ▁DISCOVERY
- ▁NATIONAL
- ▁COMMISSION
- ▁POSITIVE
- ▁LOVING
- ▁EXACT
- ▁MURMURED
- ▁GAZED
- ▁REFER
- ▁COLLEGE
- ▁ENCOURAGE
- ▁NOVEL
- ▁CLOCK
- ▁MORTAL
- ▁ROLLED
- ▁RAT
- IZING
- ▁GUILTY
- ▁VICTOR
- WORTH
- ▁PRA
- ▁APPROACHING
- ▁RELATIVE
- ▁ESTATE
- ▁UGLY
- ▁METAL
- ▁ROBERT
- ▁TENT
- ▁ADMIRATION
- ▁FOURTEEN
- ▁BARBAR
- ▁WITCH
- ELLA
- ▁CAKE
- ▁SHONE
- ▁MANAGED
- ▁VOLUME
- ▁GREEK
- ▁DANCING
- ▁WRETCHED
- ▁CONDEMN
- ▁MAGNIFICENT
- ▁CONSULT
- J
- ▁ORGAN
- ▁FLEET
- ▁ARRANGEMENT
- ▁INCIDENT
- ▁MISERY
- ▁ARROW
- ▁STROKE
- ▁ASSIST
- ▁BUILD
- ▁SUCCEED
- ▁DESPERATE
- ▁WIDOW
- UDE
- ▁MARKET
- ▁WISDOM
- ▁PRECISE
- ▁CURRENT
- ▁SPOIL
- ▁BADE
- ▁WOODEN
- ▁RESIST
- ▁OBVIOUS
- ▁SENSIBLE
- FALL
- ▁ADDRESSED
- ▁GIL
- ▁COUNSEL
- ▁PURCHASE
- ▁SELECT
- ▁USELESS
- ▁STARED
- ▁ARREST
- ▁POISON
- ▁FIN
- ▁SWALLOW
- ▁BLOCK
- ▁SLID
- ▁NINETY
- ▁SPORT
- ▁PROVIDE
- ▁ANNA
- ▁LAMB
- ▁INTERVAL
- ▁JUMP
- ▁DESCRIBED
- ▁STRIKING
- ▁PROVISION
- ▁PROPOSED
- ▁MELANCHOLY
- ▁WARRIOR
- ▁SUGGEST
- ▁DEPARTURE
- ▁BURDEN
- ▁LIMB
- ▁TROUBLED
- ▁MEADOW
- ▁SACRED
- ▁SOLID
- ▁TRU
- ▁LUCY
- ▁RECOVER
- ▁ENERGY
- ▁POWDER
- ▁RESUMED
- ▁INTENSE
- ▁BRITISH
- ▁STRAW
- ▁AGREEABLE
- ▁EVERYONE
- ▁CONCERN
- ▁VOYAGE
- ▁SOUTHERN
- ▁BOSOM
- ▁UTTERLY
- ▁FEED
- ▁ESSENTIAL
- ▁CONFINE
- ▁HOUSEHOLD
- ▁EXTREMELY
- ▁WONDERING
- ▁LIST
- ▁PINE
- PHA
- ▁EXPERIMENT
- ▁JOSEPH
- ▁MYSTERY
- ▁RESTORE
- ▁BLUSH
- FOLD
- ▁CHOSEN
- ▁INTELLECT
- ▁CURTAIN
- OLOGY
- ▁MOUNTED
- ▁LAP
- ▁EPI
- ▁PUNISH
- ▁WEDDING
- ▁RECOGNIZED
- ▁DRIFT
- ▁PREPARATION
- ▁RESOLUTION
- ▁OPPRESS
- ▁FIX
- ▁VICTIM
- OGRAPH
- ▁SUMMON
- ▁JULIA
- ▁FLOOD
- ▁WAL
- ULATION
- ▁SLIGHTLY
- ▁LODGE
- ▁WIRE
- ▁CONFUSION
- ▁UNEXPECTED
- ▁CONCEIVE
- ▁PRIZE
- ▁JESUS
- ▁ADDITION
- ▁RUDE
- ▁FATAL
- ▁CARELESS
- ▁PATCH
- ▁KO
- ▁CATHERINE
- ▁PARLIAMENT
- ▁PROFOUND
- ▁ALOUD
- ▁RELIEVE
- ▁PUSH
- ABILITY
- ▁ACCOMPANIED
- ▁SOVEREIGN
- ▁SINGULAR
- ▁ECHO
- ▁COMPOSED
- ▁SHAKING
- ATORY
- ▁ASSISTANCE
- ▁TEACHER
- ▁HORRIBLE
- ▁STRICT
- ▁VERSE
- ▁PUNISHMENT
- ▁GOWN
- ▁MISTAKEN
- ▁VARI
- ▁SWEPT
- ▁GESTURE
- ▁BUSH
- ▁STEEL
- ▁AFFECTED
- ▁DIRECTED
- ▁SURROUNDED
- ▁ABSURD
- ▁SUGAR
- ▁SCRAP
- ▁IMMEDIATE
- ▁SADDLE
- ▁TY
- ▁ARISE
- ▁SIGHED
- ▁EXCHANGE
- ▁IMPATIENT
- ▁SNAP
- ▁EMBRACE
- ▁DISEASE
- ▁PROFIT
- ▁RIDING
- ▁RECOVERED
- ▁GOVERN
- ▁STRETCH
- ▁CONVINCED
- ▁LEANING
- ▁DOMESTIC
- ▁COMPLEX
- ▁MANIFEST
- ▁INDULGE
- ▁GENIUS
- ▁AGENT
- ▁VEIL
- ▁DESCRIPTION
- ▁INCLINED
- ▁DECEIVE
- ▁DARLING
- ▁REIGN
- HU
- ▁ENORMOUS
- ▁RESTRAIN
- ▁DUTIES
- BURY
- TTERED
- ▁POLE
- ▁ENABLE
- ▁EXCEPTION
- ▁INTIMATE
- ▁COUNTESS
- ▁TRIBE
- ▁HANDKERCHIEF
- ▁MIDNIGHT
- ▁PROBLEM
- ▁TRAMP
- ▁OIL
- CAST
- ▁CRUSH
- ▁DISCUSS
- ▁RAM
- ▁TROT
- ▁UNRE
- ▁WHIRL
- ▁LOCKED
- ▁HORIZON
- ▁OFFICIAL
- ▁SCHEME
- ▁DROWN
- ▁PIERRE
- ▁PERMITTED
- ▁CONNECTED
- ▁ASSURE
- ▁COCK
- ▁UTMOST
- ▁DEVOTED
- ▁RELI
- ▁SUFFICIENTLY
- ▁INTELLECTUAL
- ▁CARPET
- ▁OBJECTION
- ▁AFTERWARD
- ▁REALITY
- ▁NEGRO
- ▁RETAIN
- ▁ASCEND
- ▁CEASE
- ▁KATE
- ▁MARVEL
- KO
- ▁BOND
- MOST
- ▁COAL
- GATE
- ▁IGNORANT
- ▁BREAKING
- ▁TWIN
- ▁ASTONISHMENT
- ▁COFFEE
- ▁JAR
- ▁CITIES
- ▁ORIGIN
- ▁EXECUT
- ▁FINAL
- ▁INHABITANTS
- ▁STABLE
- ▁CHIN
- ▁PARTIES
- ▁PLUNGE
- ▁GENEROUS
- ▁DESCRIBE
- ▁ANNOUNCED
- ▁MERIT
- ▁REVERE
- ▁ERE
- ACIOUS
- ZI
- ▁DISAPPOINT
- ▁SUGGESTION
- ▁DOUBTLESS
- ▁TRUNK
- ▁STAMP
- ▁JOB
- ▁APPOINTED
- ▁DIVIDED
- ▁ACQUAINTED
- CHI
- ▁ABSOLUTE
- ▁FEARFUL
- ▁PRIVILEGE
- ▁CRAFT
- ▁STEEP
- ▁HUNTER
- ▁FORBID
- ▁MODEST
- ▁ENDEAVOUR
- ▁SWEEP
- ▁BEHELD
- ▁ABSORB
- ▁CONSTRUCT
- ▁EMPIRE
- ▁EXPEDITION
- ▁ERECT
- ▁OFFEND
- ▁INTEND
- ▁PERMIT
- ▁DESTROYED
- ▁CONTRACT
- ▁THIRST
- ▁WAGON
- ▁EVA
- ▁GLOOM
- ▁ATMOSPHERE
- ▁RESERVE
- ▁VOTE
- ▁GER
- ▁NONSENSE
- ▁PREVAIL
- ▁QUALITY
- ▁CLASP
- ▁CONCLUDED
- ▁RAP
- ▁KATY
- ▁ETERNAL
- ▁MUTTERED
- ▁NEGLECT
- ▁SQUIRE
- ▁CREEP
- LOCK
- ▁ELECTRIC
- ▁HAY
- ▁EXPENSE
- ▁SCORN
- ▁RETIRED
- ▁STOUT
- ▁MURMUR
- ▁SHARPLY
- ▁DISTRICT
- ▁LEAF
- ▁FAILURE
- WICK
- ▁JEAN
- ▁NUMEROUS
- ▁INFANT
- ▁REALIZED
- ▁TRAVELLER
- ▁HUNGER
- ▁JUNE
- ▁MUN
- ▁RECOMMEND
- ▁CREP
- ZZLE
- ▁RICHARD
- WORK
- ▁MONTE
- ▁PREACH
- ▁PALM
- AVI
- ▁ANYWHERE
- ▁DISPOSITION
- ▁MIRROR
- ▁VENTURE
- ▁POUND
- ▁CIGAR
- ▁INVITED
- ▁BENCH
- ▁PROTECTION
- ▁BENEFIT
- ▁THOMAS
- ▁CLERK
- ▁REPROACH
- ▁UNIFORM
- ▁GENERATION
- ▁SEAL
- ▁COMPASS
- ▁WARNING
- ▁EXTENDED
- ▁DIFFICULTIES
- ▁MAYBE
- ▁GROAN
- ▁AFFECT
- ▁COMB
- ▁EARN
- ▁WESTERN
- ▁IDLE
- ▁SCORE
- ▁TAP
- ▁ASTONISHED
- ▁INTRODUCED
- ▁LEISURE
- ▁LIEUTENANT
- ▁VIOLENCE
- ▁FIRMLY
- ▁MONSTER
- ▁UR
- ▁PROPERLY
- ▁TWIST
- ▁PIRATE
- ▁ROBBER
- ▁BATTER
- ▁WEPT
- ▁LEANED
- ▁FOG
- ▁ORNAMENT
- ▁ANDREW
- ▁BUSHES
- ▁REPUBLIC
- ▁CONFIDENT
- ▁LEAN
- ▁DART
- ▁STOOP
- ▁CURL
- ▁COUNTER
- ▁NORTHERN
- ▁PEARL
- ▁NEAREST
- ▁FRANCIS
- ▁WANDERING
- ▁FREQUENT
- ▁STARTLED
- ▁STATEMENT
- ▁OCCUR
- ▁BLOOM
- ▁NERVE
- ▁INSPECT
- ▁INDUCE
- ▁FLATTER
- ▁DATE
- ▁AMBITION
- ▁SLOPE
- ▁MALE
- ▁MADAM
- ▁MONK
- ▁RENT
- ▁CONFIRM
- ▁INVESTIGAT
- ▁RABBIT
- ▁REGIMENT
- ▁SUBMIT
- ▁SPELL
- ▁FURIOUS
- ▁RAIL
- ▁BESTOW
- ▁RALPH
- ▁SCATTERED
- ▁COMPELLED
- ▁THREAD
- ▁CHILL
- ▁DENY
- ▁PRONOUNC
- ▁MANKIND
- ▁CATTLE
- ▁EXECUTION
- ▁REBEL
- ▁SUPREME
- ▁VALUABLE
- ▁LIKEWISE
- ▁CONVEY
- ▁TIDE
- ▁GLOOMY
- ▁COIN
- ▁ACTUAL
- ▁TAX
- ▁PROVINCE
- ▁GRATEFUL
- ▁SPIRITUAL
- ▁VANISHED
- ▁DIANA
- ▁HAUNT
- ▁DRAGON
- ▁CRAWL
- ▁CHINA
- ▁GRATITUDE
- ▁NEAT
- ▁FINISH
- ▁INTENT
- ▁FRIGHT
- ▁EMBARRASS
- ▁THIRTEEN
- ▁RUTH
- ▁SLIGHTEST
- ▁DEVELOPMENT
- ▁INTERVIEW
- ▁SPECTACLE
- ▁BROOK
- VIE
- ▁WEAKNESS
- ▁AUDIENCE
- ▁CONSEQUENTLY
- ▁ABROAD
- ▁ASPECT
- ▁PAINTED
- ▁RELEASE
- ▁INSULT
- ▁SOOTH
- ▁DISAPPOINTMENT
- ▁EMERG
- ▁BRIG
- ▁ESTEEM
- ▁INVITATION
- ▁PASSENGER
- ▁PUBLISH
- ▁PIANO
- ▁IRISH
- ▁DESK
- ▁BEATEN
- ▁FIFTH
- ▁IMPULSE
- ▁SWEAR
- ▁EATEN
- ▁PURPLE
- ▁COMMITTED
- ▁COUNTRIES
- ▁PERCEIVE
- ISON
- ▁CELEBRAT
- ▁GRANDMOTHER
- ▁SHUDDER
- ▁SUNSHINE
- ▁SPANISH
- ▁HITHERTO
- ▁MARILLA
- ▁SNAKE
- ▁MOCK
- ▁INTERFERE
- ▁WALTER
- ▁AMID
- ▁MARBLE
- ▁MISSION
- TERIOR
- ▁DRIVING
- ▁FURNITURE
- ▁STEADY
- ▁CIRCUMSTANCE
- ▁INTERPRET
- ▁ENCHANT
- ▁ERROR
- ▁CONVICTION
- ▁HELPLESS
- ▁MEDICINE
- ▁QUALITIES
- ▁ITALIAN
- ▁HASTENED
- ▁OCCASIONALLY
- ▁PURSUED
- ▁HESITATED
- ▁INDEPENDENT
- ▁OLIVER
- ▁LINGER
- UX
- ▁EXAMINED
- ▁REPENT
- ▁PHYSICIAN
- ▁CHASE
- ▁BELOVED
- ▁ATTACHED
- ▁FLORENCE
- ▁HONEY
- ▁MOUSE
- ▁CRIES
- ▁BAKE
- ▁POEM
- ▁DESTRUCTION
- ▁FULFIL
- ▁MESSENGER
- ▁TRISTRAM
- ▁FANCIED
- ▁EXCESS
- ▁CURSE
- ▁CHU
- ▁QUANTITY
- ▁THORNTON
- ▁CREATED
- ▁CONTINUALLY
- ▁LIGHTNING
- ▁BORNE
- ▁TOTAL
- ▁DISPOSED
- ▁RIFLE
- ▁POLLY
- ▁GOAT
- ▁BACKWARD
- ▁VIRGINIA
- ▁KICK
- ▁PERIL
- ▁QUO
- ▁GLORIOUS
- ▁MULTITUDE
- ▁LEATHER
- ▁ABSENT
- ▁DEMON
- ▁DEBT
- ▁TORTURE
- ▁ACCORD
- ▁MATE
- ▁CATHOLIC
- ▁PILL
- ▁LIBRARY
- ▁PURSUIT
- ▁SHIRT
- ▁DEAREST
- ▁COLLAR
- ▁BEACH
- ▁ROBE
- ▁DECLARE
- ▁BRANCH
- ▁TEMPT
- ▁STEADILY
- ▁DISGUST
- ▁SILLY
- ▁ARRIVE
- ▁DRANK
- ▁LEVI
- ▁COMMUNICAT
- ▁RACHEL
- ▁WASHINGTON
- ▁RESIGN
- ▁MEANTIME
- ▁LACE
- ▁ENGAGEMENT
- ▁QUIVER
- ▁SEPARATED
- ▁DISCUSSION
- ▁VENTURED
- ▁SURROUNDING
- ▁POLISH
- ▁NAIL
- ▁SWELL
- ▁JOKE
- ▁LINCOLN
- ▁STUDENT
- ▁GLITTER
- ▁RUSSIAN
- ▁READILY
- ▁CHRIS
- ▁POVERTY
- ▁DISGRACE
- ▁CHEESE
- ▁HEAVILY
- ▁SCALE
- ▁STAFF
- ▁ENTREAT
- ▁FAREWELL
- ▁LUNCH
- ▁PEEP
- ▁MULE
- ▁SOMEONE
- ▁DISAPPEAR
- ▁DECISION
- ▁PISTOL
- ▁PUN
- ▁SPUR
- ▁ASSUMED
- ▁EXTEND
- ▁ENTHUSIASM
- ▁DEFINITE
- ▁UNDERTAKE
- ▁COMMITTEE
- ▁SIMON
- ▁FENCE
- ▁APPLIED
- ▁RELATED
- ▁VICE
- ▁UNPLEASANT
- ▁PROBABLE
- ▁PROCURE
- ▁FROWN
- ▁CLOAK
- ▁HUMANITY
- ▁FAMILIES
- ▁PHILOSOPHER
- ▁DWARF
- ▁OVERCOME
- ▁DEFEAT
- ▁FASTENED
- ▁MARSH
- ▁CLASSES
- ▁TOMB
- ▁GRACIOUS
- ▁REMOTE
- ▁CELL
- ▁SHRIEK
- ▁RESCUE
- ▁POOL
- ▁ORGANIZ
- ▁CHOSE
- ▁CUTTING
- ▁COWARD
- ▁BORDER
- ▁DIRTY
- ▁MONKEY
- ▁HOOK
- ▁CHUCK
- ▁EMILY
- ▁JEST
- ▁PLAC
- ▁WEIGH
- ▁ASSOCIATE
- ▁GLIMPSE
- ▁STUCK
- ▁BOLT
- ▁MURDERER
- ▁PONY
- ▁DISTINGUISH
- ▁INSTITUTION
- ▁CUNNING
- ▁COMPLIMENT
- ▁APPETITE
- ▁REPUTATION
- ▁FEEBLE
- ▁KIN
- ▁SERIES
- ▁GRACEFUL
- ▁PLATFORM
- ▁BREEZE
- ▁PHRASE
- ▁CLAY
- MONT
- ▁RATTL
- ▁OPPOSITION
- ▁LANE
- ▁BOAST
- ▁GROWTH
- ▁INCLINATION
- ▁BEHAVE
- ▁SUSAN
- ▁DISTINCTION
- ▁DISLIKE
- ▁NICHOLAS
- ▁SATISFY
- ▁DRAMA
- ▁ELBOW
- ▁GAZING
- ▁CONSUM
- ▁SPIN
- ▁OATH
- ▁CHANNEL
- ▁CHARACTERISTIC
- ▁SPEAR
- ▁SLAIN
- ▁SAUCE
- ▁FROG
- ▁CONCEPTION
- ▁TIMID
- ▁ZEAL
- ▁APPARENT
- SHIRE
- ▁CENTER
- ▁VARIETY
- ▁DUSK
- ▁APT
- ▁COLUMN
- ▁REVENGE
- ▁RIVAL
- ▁IMITAT
- ▁PASSIONATE
- ▁SELFISH
- ▁NORMAN
- ▁REPAIR
- ▁THRILL
- ▁TREATMENT
- ▁ROSA
- ▁MARTIN
- ▁INDIFFERENT
- ▁THITHER
- ▁GALLANT
- ▁PEPPER
- ▁RECOLLECT
- ▁VINE
- ▁SCARCE
- ▁SHIELD
- ▁MINGLED
- CLOSE
- ▁HARSH
- ▁BRICK
- ▁HUMOR
- ▁MISCHIEF
- ▁TREMENDOUS
- ▁FUNCTION
- ▁SMART
- ▁SULTAN
- ▁DISMISS
- ▁THREATENED
- ▁CHEAP
- ▁FLOCK
- ▁ENDEAVOR
- ▁WHISK
- ▁ITALY
- ▁WAIST
- ▁FLUTTER
- ▁SMOKING
- ▁MONARCH
- ▁AFRICA
- ▁ACCUSE
- ▁HERBERT
- ▁REFRESH
- ▁REJOICE
- ▁PILLOW
- ▁EXPECTATION
- ▁POETRY
- ▁HOPELESS
- ▁PERISH
- ▁PHILOSOPHY
- ▁WHISTLE
- ▁BERNARD
- ▁LAMENT
- ▁IMPROVE
- ▁SUP
- ▁PERPLEX
- ▁FOUNTAIN
- ▁LEAGUE
- ▁DESPISE
- ▁IGNORANCE
- ▁REFERENCE
- ▁DUCK
- ▁GROVE
- ▁PURSE
- ▁PARTNER
- ▁PROPHET
- ▁SHIVER
- ▁NEIGHBOURHOOD
- ▁REPRESENTATIVE
- SAIL
- ▁WIP
- ▁ACQUIRED
- ▁CHIMNEY
- ▁DOCTRINE
- ▁MAXIM
- ▁ANGLE
- ▁MAJORITY
- ▁AUTUMN
- ▁CONFUSED
- ▁CRISTO
- ▁ACHIEVE
- ▁DISGUISE
- ▁REDUCED
- ▁EARLIER
- ▁THEATRE
- ▁DECIDE
- MINATED
- OLOGICAL
- ▁OCCUPATION
- ▁VIGOROUS
- ▁CONTINENT
- ▁DECLINE
- ▁COMMUNITY
- ▁MOTIONLESS
- ▁HATRED
- ▁COMMUNICATION
- ▁BOWL
- ▁COMMENT
- ▁APPROVE
- ▁CEREMONY
- ▁CRIMINAL
- ▁SCIENTIFIC
- ▁DUCHESS
- ▁VIVID
- ▁SHIFT
- ▁AVAIL
- ▁DAMP
- ▁JOHNSON
- ▁SLENDER
- ▁CONTRAST
- ▁AMUSEMENT
- ▁PLOT
- ▁LYN
- ▁ASSOCIATION
- ▁SNATCH
- ▁UNCERTAIN
- ▁PRESSURE
- ▁PERCH
- ▁APPLY
- ▁PLANET
- ▁NOTWITHSTANDING
- ▁SWUNG
- ▁STIRRED
- ▁ATTENDANT
- ▁ENJOYMENT
- ▁WORRY
- ▁ALBERT
- ▁NAKED
- ▁TALENT
- ▁MARIAN
- ▁REFORM
- ▁DELIBERATE
- ▁INTELLIGENT
- ▁SENSITIVE
- ▁YONDER
- ▁PUPIL
- ▁FRIGHTFUL
- ▁DOUBTFUL
- ▁STANDARD
- ▁MAGISTRATE
- ▁SHEPHERD
- ▁STOMACH
- ▁DEPOSIT
- ▁RENEW
- ▁HEDGE
- ▁FRANCS
- ▁POSSIBILITY
- ▁RESEMBLE
- ▁FATIGUE
- ▁PORTRAIT
- ▁FAVORITE
- ▁CREAM
- ▁BURG
- ▁SECRETARY
- ▁DIVERS
- ▁ACTIVITY
- ▁SPECULAT
- ▁HUMOUR
- ▁FITTED
- ▁EXTERNAL
- ▁CETERA
- ▁WRAPPED
- ▁WHIT
- ▁FRED
- ▁EXAMINATION
- ▁LODGING
- ▁OWING
- ▁JAW
- ▁CROW
- ▁BALANCE
- ▁PUFF
- ▁TENDERNESS
- ▁PORTHOS
- ▁ANCHOR
- ▁INTERRUPT
- ▁NECESSARILY
- ▁PERPETUAL
- ▁AGONY
- ▁POPE
- ▁SCHOLAR
- ▁SCOTLAND
- ▁SUPPRESS
- ▁WRATH
- ▁WRECK
- ▁EXCEED
- ▁PERFECTION
- ▁INDIA
- ▁TRADITION
- ▁SECTION
- ▁EASTERN
- ▁DOORWAY
- ▁WIVES
- ▁CONVENTION
- ▁ANNOUNC
- ▁EGYPT
- ▁CONTRADICT
- ▁SCRATCH
- ▁CENTRAL
- ▁GLOVE
- ▁WAX
- ▁PREPARE
- ▁ACCOMPANY
- ▁INCREASING
- ▁LIBERAL
- ▁RAISING
- ▁ORANGE
- ▁SHOE
- ▁ATTRIBUTE
- ▁LITERATURE
- ▁PUZZLED
- ▁WITHDRAW
- ▁WHITHER
- ▁HAWK
- ▁MOONLIGHT
- ▁EXAMINE
- ▁HAPPILY
- ▁PRECEDE
- ▁DETECTIVE
- ▁INCHES
- ▁SOLITARY
- ▁DUTCH
- ▁NAPOLEON
- ▁UNEASY
- ▁CARDINAL
- ▁BLEW
- ▁FOWL
- ▁DECORAT
- ▁CHILDHOOD
- ▁TORMENT
- ▁LOSING
- ▁PERMISSION
- ▁BLANK
- ▁UPSTAIRS
- ▁CAPACITY
- ▁TRIFLE
- ▁FOLLY
- ▁RECOGNIZE
- ▁REMOVE
- ▁VENGEANCE
- ▁ENTERPRISE
- ▁BEDROOM
- ▁ANYHOW
- ▁INQUIRY
- ▁ASHES
- ▁DRAG
- ▁HUSH
- ▁AWKWARD
- ▁SATURDAY
- ▁GENUINE
- ▁SURVIV
- ▁SKIRT
- ▁AFFECTIONATE
- ▁TANG
- ▁MUTUAL
- ▁DISPUTE
- ▁EAGLE
- ▁INCOME
- ▁BIND
- ▁FAME
- ▁IMPROVEMENT
- ROVING
- ▁DIFFER
- ▁AWOKE
- ▁SLEEVE
- ▁SOLITUDE
- ▁FAVOURITE
- JI
- ▁DETECT
- ▁COMPREHEND
- ▁PREPARING
- ▁SERPENT
- ▁SUMMIT
- ▁KNOT
- ▁KNIT
- ▁COPY
- ▁STOPPING
- ▁FADED
- ▁HIDEOUS
- ▁JULIE
- STEAD
- ▁SHINE
- ▁CONFLICT
- ▁PROPOSITION
- ▁REFUGE
- ▁GALLERY
- ▁BUNDLE
- ▁AXE
- ▁SLAVERY
- ▁MASK
- ▁ALYOSHA
- ▁LADDER
- ▁DEPARTMENT
- ▁DISCHARGE
- ▁DEPRESS
- ▁GALLOP
- ▁SCARLET
- ▁KITTY
- ▁RECEIVING
- ▁SURRENDER
- ▁SUSTAIN
- ▁TWILIGHT
- ▁CONGRESS
- ▁IRELAND
- ▁FUNNY
- ▁LEND
- ▁CONSTITUTE
- ▁FUNERAL
- ▁CRYSTAL
- ▁SPAIN
- ▁EXCEEDINGLY
- ▁DAMN
- ▁COMMUN
- ▁CIVILIZATION
- ▁PREJUDICE
- ▁PORCH
- ▁ASSISTANT
- ▁INDUSTRY
- ▁TUMBLE
- ▁DEFENCE
- ▁HITHER
- ▁SMOT
- ▁COLONI
- ▁AMAZEMENT
- ▁MARGUERITE
- ▁MIRACLE
- ▁INHERIT
- ▁BEGGAR
- ▁ENVELOPE
- ▁INDIGNATION
- ▁NATASHA
- ▁PROPOSAL
- ▁FRAGMENT
- ▁ROUSED
- ▁ROAST
- ENCIES
- ▁COMMENCED
- ▁RESOURCE
- ▁POPULATION
- ▁QUOTH
- ▁PURSUE
- ▁EDUCAT
- ▁AFFLICT
- ▁CONTACT
- ▁CRIMSON
- ▁DIVISION
- ▁DISORDER
- ▁COPPER
- ▁SOLICIT
- ▁MODERATE
- ▁DRUM
- ▁SWIM
- ▁SALUTE
- ▁ASSUME
- ▁MUSCLE
- ▁OVERWHELM
- ▁SHAKESPEARE
- ▁STRUGGLING
- ▁TRANQUIL
- ▁CHICKEN
- ▁TREAD
- ▁CLAW
- ▁BIBLE
- ▁RIDGE
- ▁THREAT
- ▁VELVET
- ▁EXPOSED
- ▁IDIOT
- ▁BARREL
- ▁PENNY
- ▁TEMPTATION
- ▁DANGLARS
- ▁CENTURIES
- ▁DISTRIBUT
- ▁REJECT
- ▁RETORTED
- ▁CONCENTRAT
- ▁CORDIAL
- ▁MOTOR
- ▁CANNON
- KEEP
- ▁WRETCH
- ▁ASSURANCE
- ▁THIEF
- ▁SURVEY
- ▁VITAL
- ▁RAILWAY
- ▁JACKSON
- ▁CRASH
- ▁GROWL
- ▁COMBAT
- ▁RECOLLECTION
- ▁SECURITY
- ▁JACOB
- ▁CLUTCH
- ▁BLANKET
- ▁NANCY
- ▁CELLAR
- ▁CONVENIENT
- ▁INDIGNANT
- ▁COARSE
- ▁WORM
- ▁SCREEN
- ▁TRANSPORT
- ▁BULLET
- ▁APPRECIATE
- ▁DEVOTION
- ▁INVISIBLE
- ▁DRIED
- ▁MIXTURE
- ▁CANDID
- ▁PERFORMANCE
- ▁RIPE
- ▁EXQUISITE
- ▁BARGAIN
- ▁TOBACCO
- ▁LOYAL
- ▁MOULD
- ▁ATTENTIVE
- ▁DOROTHY
- ▁BRUTE
- ▁ESTABLISHMENT
- ▁ABILITY
- ▁INHABIT
- ▁OBSCURE
- ▁BORROW
- ▁ESSENCE
- ▁DISMAY
- ▁FLEE
- ▁BLADE
- ▁PLUCK
- ▁COFFIN
- ▁SUNSET
- ▁STEPHEN
- ▁ECONOMIC
- ▁HOLIDAY
- ▁MECHANICAL
- ▁COTTON
- ▁AWAKENED
- ▁SEIZE
- ▁RIDICULOUS
- ▁SANCHO
- ▁HESITATION
- ▁CORPSE
- ▁SAVING
- HOLD
- FOOT
- ▁ELDEST
- ▁DESPITE
- ▁EDITH
- ▁CHERISH
- ▁RESISTANCE
- ▁WILSON
- ▁ARGUE
- ▁INQUIRE
- ▁APPREHENSION
- ▁AVENUE
- ▁DRAKE
- ▁PROPOSE
- HURST
- ▁INFERIOR
- ▁STAIRCASE
- ▁WHEREFORE
- ▁CARLYLE
- ▁COUCH
- ▁ROUTE
- ▁POLITICS
- ▁TOMORROW
- ▁THRONG
- ▁NAUGHT
- ▁SUNLIGHT
- ▁INDIFFERENCE
- ▁OBEDIENCE
- ▁RECEPTION
- ▁VEGETABLE
- ▁IMPERFECT
- ▁RESIDENCE
- ▁TURKEY
- ▁VIOLET
- ▁SARAH
- ▁ALTAR
- ▁GRIEVE
- ▁JERK
- ▁ENSU
- ▁MAGICIAN
- ▁BLOSSOM
- ▁LANTERN
- ▁RESOLUTE
- ▁THOUGHTFULLY
- ▁FORTNIGHT
- ▁TRUMPET
- ▁VALJEAN
- ▁UNWILLING
- ▁LECTURE
- ▁WHEREUPON
- ▁HOLLAND
- ▁CHANGING
- ▁CREEK
- ▁SLICE
- ▁NORMAL
- ▁ANNIE
- ▁ACCENT
- ▁FREDERICK
- ▁DISAGREEABLE
- ▁RUBBED
- ▁DUMB
- ▁ESTABLISH
- ▁IMPORT
- ▁AFFIRM
- ▁MATTHEW
- ▁BRISK
- ▁CONVERT
- ▁BENDING
- ▁IVAN
- ▁MADEMOISELLE
- ▁MICHAEL
- ▁EASIER
- ▁JONES
- ▁FACING
- ▁EXCELLENCY
- ▁LITERARY
- ▁GOSSIP
- ▁DEVOUR
- ▁STAGGER
- ▁PENCIL
- ▁AVERAGE
- ▁HAMMER
- ▁TRIUMPHANT
- ▁PREFERRED
- ▁APPLICATION
- ▁OCCUPY
- ▁AUTHORITIES
- BURN
- ▁ASCERTAIN
- ▁CORRIDOR
- ▁DELICIOUS
- ▁PRACTISE
- ▁UNIVERSE
- ▁SHILLING
- ▁CONTEST
- ▁ASHORE
- ▁COMMIT
- ▁ADMINISTRATION
- ▁STUDIED
- ▁RIGID
- ▁ADORN
- ▁ELSEWHERE
- ▁INNOCENCE
- ▁JOURNAL
- ▁LANDSCAPE
- ▁TELEGRAPH
- ▁ANGRILY
- ▁CAMPAIGN
- ▁UNJUST
- ▁CHALLENGE
- ▁TORRENT
- ▁RELATE
- ▁ASSEMBLED
- ▁IMPRESSED
- ▁CANOE
- ▁CONCLUD
- ▁QUIXOTE
- ▁SATISFACTORY
- ▁NIECE
- ▁DEAF
- ▁RAFT
- ▁JIMMY
- ▁GLID
- ▁REGULAT
- ▁CHATTER
- ▁GLACIER
- ▁ENVY
- ▁STATUE
- ▁BOSTON
- ▁RICHMOND
- ▁DENIED
- ▁FANNY
- ▁SOLOMON
- ▁VULGAR
- ▁STALK
- ▁REPLACE
- ▁SPOON
- ▁BASIN
- ▁FEATURE
- ▁CONVICT
- ▁ARCHITECT
- ▁ADMIRAL
- ▁RIBBON
- ▁PERMANENT
- ▁APRIL
- ▁JOLLY
- ▁NEIGHBORHOOD
- ▁IMPART
- BOROUGH
- CAMP
- ▁HORRID
- ▁IMMORTAL
- ▁PRUDENCE
- ▁SPANIARD
- ▁SUPPOSING
- ▁TELEPHONE
- ▁TEMPERATURE
- ▁PENETRATE
- ▁OYSTER
- ▁APPOINTMENT
- ▁EGYPTIAN
- ▁DWELT
- ▁NEPHEW
- ▁RAILROAD
- ▁SEPTEMBER
- ▁DEVICE
- ▁WHEAT
- ▁GILBERT
- ▁ELEGANT
- ▁ADVERTISE
- ▁RATIONAL
- ▁TURTLE
- ▁BROOD
- ▁ASSEMBLY
- ▁CULTIVATE
- ▁EDITOR
- ▁SPECIMEN
- ▁UNDOUBTEDLY
- ▁WHALE
- ▁DROPPING
- ▁BALLOON
- ▁MEDICAL
- COMB
- ▁COMPOSITION
- ▁FOOTSTEPS
- ▁LAUNCELOT
- ▁DISCOURSE
- ▁ERRAND
- ▁CONVERSE
- ▁ADVANCING
- ▁DOWNSTAIRS
- ▁TUMULT
- ▁CORRUPT
- ▁SUFFICE
- ▁ANGUISH
- ▁SHAGGY
- ▁RETIRE
- ▁TIMBER
- ▁BLAZE
- ▁ABSTRACT
- ▁EMBROIDER
- ▁PHOTOGRAPH
- ▁PROSPERITY
- ▁TERRIBLY
- ▁TERRITORY
- ▁THRESHOLD
- ▁PAVEMENT
- ▁INJURED
- ▁LIMP
- ▁AGITATION
- ▁RASCAL
- ▁PRESUME
- ▁OBSERVING
- ▁OBSTACLE
- ▁SIMPLICITY
- ▁SLUMBER
- ▁SUPPLIED
- ▁COMBINATION
- ▁DRAIN
- ▁WILDERNESS
- ▁BELIEVING
- ▁VILLAIN
- ▁RECKLESS
- ▁INJURY
- ▁CLAPP
- ▁FRIDAY
- ▁HERCULES
- ▁KENNEDY
- ▁SYMPTOM
- ▁SLEDGE
- ▁CEILING
- ▁LEMON
- ▁PLAGUE
- ▁MONDAY
- ▁CANVAS
- ▁IMPATIENCE
- ▁UNCOMFORTABLE
- ▁ACCESS
- ▁FROZEN
- ▁SENATOR
- ▁FRANZ
- ▁SWIMMING
- ▁BARRIER
- ▁ADJUST
- ▁COMPARISON
- ▁PROCLAIM
- ▁WRINKL
- ▁OVERLOOK
- ▁MITYA
- ▁GUILT
- ▁PERCEPTION
- ▁PRECAUTION
- ▁SPECTATOR
- ▁SURPRISING
- ▁DISTRACT
- ▁DISDAIN
- ▁BONNET
- ▁MAGNET
- ▁PROFESS
- ▁CONFOUND
- ▁NARRATIVE
- ▁STRUCTURE
- ▁SKETCH
- ▁ULTIMATE
- ▁GLOBE
- ▁INSECT
- FICIENCY
- ▁ORCHARD
- ▁AMIABLE
- ▁DESCENT
- ▁INDEPENDENCE
- ▁MANUFACTURE
- ▁SPRINKLE
- ▁NIGHTINGALE
- ▁CUSHION
- ▁EMINENT
- ▁SCOTT
- ▁ARRAY
- ▁COSETTE
- ▁WAVING
- ▁EXTRACT
- ▁IRREGULAR
- ▁PERSECUT
- ▁DERIVED
- ▁WITHDREW
- ▁CAUTION
- ▁SUSPICIOUS
- ▁MEMORIES
- ▁NOWHERE
- ▁SUBTLE
- ▁THOROUGH
- Q
- ▁APPROPRIATE
- ▁SLAUGHTER
- ▁YOURSELVES
- ▁THUMB
- ▁TWAS
- ▁ABODE
- ▁BIDDING
- ▁CONSPICUOUS
- ▁REBECCA
- ▁SERGEANT
- ▁APRON
- ▁ANTICIPATE
- ▁DISCIPLINE
- ▁GLANCING
- ▁PILGRIM
- ▁SULLEN
- ▁CONTRIBUTE
- ▁PRAIRIE
- ▁CARVED
- ▁COMMERCE
- ▁EXCLAMATION
- ▁MUSCULAR
- ▁NOVEMBER
- ▁PHENOMENA
- ▁SYMBOL
- ▁UMBRELLA
- ▁DIMINISH
- ▁PARLOUR
- ▁THREATENING
- ▁STUMP
- ▁EXTENSIVE
- ▁PLEASING
- ▁REMEMBRANCE
- ▁COMBINED
- ▁SHERIFF
- ▁SHAFT
- ▁LAURA
- ▁INTERCOURSE
- ▁STRICKEN
- ▁SUPPLIES
- ▁LANDLORD
- ▁SHRINK
- ▁PRICK
- ▁CAESAR
- ▁DRUG
- ▁BEWILDERED
- ▁NAUTILUS
- ▁BRUTAL
- ▁COMMERCIAL
- ▁MAGGIE
- ▁SPHERE
- ▁VIRGIN
- ▁BRETHREN
- ▁DESTINY
- ▁POLICY
- ▁TERRIFIED
- ▁HOUSEKEEPER
- ▁CRAZY
- ▁ARDENT
- ▁DISCERN
- ▁WRAP
- ▁MARQUIS
- ▁RUSSIA
- MOUTH
- ▁BRITAIN
- ▁HARBOUR
- ▁CONCERT
- ▁DONKEY
- ▁DAMAGE
- ▁SLIM
- ABOUT
- ▁LUXURY
- ▁MONSTROUS
- ▁TENDENCY
- ▁PARADISE
- ▁CULTURE
- ▁JULIUS
- ▁RAOUL
- ▁REMEDY
- ▁DECAY
- ▁SCOLD
- ▁SPLIT
- ▁ASSAULT
- ▁DECEMBER
- ▁MOSCOW
- ▁EXPLORE
- ▁TROUSERS
- ▁WRIST
- PIECE
- ▁MUSKET
- ▁VALENTINE
- ▁TYRANT
- ▁ABRAHAM
- ▁MEDIUM
- ▁ARTIFICIAL
- ▁FACULTY
- ▁OBLIGATION
- ▁RESEMBLANCE
- ▁INQUIRIES
- ▁DETAIN
- ▁SWARM
- ▁PLEDGE
- ▁ADMIRABLE
- ▁DEFECT
- ▁SUPERINTEND
- ▁PATRIOT
- ▁CLUNG
- ▁DISMAL
- ▁RECIT
- ▁IGNOR
- ▁AMELIA
- ▁JUSTIFY
- ▁ELEPHANT
- ▁ESTIMATE
- ▁KNELT
- ▁SERVING
- ▁WHIM
- ▁SHRILL
- ▁STUDIO
- ▁TEXT
- ▁ALEXANDER
- ▁WROUGHT
- ▁ABUNDANT
- ▁SITUATED
- ▁REGAIN
- ▁FIERY
- ▁SNEER
- ▁SWEAT
- ▁GLARE
- ▁NIGH
- ▁ESCORT
- ▁INEVITABLE
- ▁PSMITH
- ▁RELUCTANT
- ▁PRECEDING
- ▁RESORT
- ▁OUTRAGE
- ▁AMBASSADOR
- ▁CONSOLATION
- ▁RECOGNITION
- ▁REMORSE
- ▁BEHALF
- ▁FORMIDABLE
- ▁GRAVITY
- ▁DIVIDE
- ▁CONFRONT
- ▁GIGANTIC
- ▁OCTOBER
- ▁FLANK
- ▁SLEW
- ▁CLARA
- ▁FILM
- ▁BULK
- ▁POMP
- ▁ELEANOR
- ▁EMPHASIS
- ▁JAPANESE
- ▁CAVALRY
- ▁EXCLUSIVE
- ▁PERFUME
- ▁BRONZE
- ▁FEDERAL
- ▁LIQUID
- ▁RUBBING
- ▁OVEN
- DOLPH
- ▁CONVULS
- ▁DEPRIVED
- ▁RESPONSIBILITY
- ▁SIGNIFICANT
- ▁WAISTCOAT
- ▁CLUSTER
- ▁MARTHA
- ▁REVERSE
- ▁ATTORNEY
- ▁DROOP
- ▁SKILFUL
- ▁HABITUAL
- ▁PUMP
- ▁INTERVEN
- ▁OWL
- ▁CONJECTURE
- ▁FANTASTIC
- ▁RESPONSIBLE
- ▁DESTINED
- ▁DOCUMENT
- ▁THEREUPON
- ▁GODDESS
- ▁PACIFIC
- ▁WARRANT
- ▁COSTUME
- ▁BRIDLE
- ▁CALIFORNIA
- ▁DEMOCRATIC
- ▁EUSTACE
- ▁SQUIRREL
- ▁UNCOMMON
- ▁MARVELLOUS
- ▁PLOUGH
- ▁TRAGEDY
- ▁VAULT
- ▁HESITATE
- ▁REFRAIN
- ▁ADMIRING
- ▁CORPORAL
- ▁ENTITLED
- ▁SHREWD
- ▁SQUEEZ
- ▁ACCURATE
- ▁TEMPEST
- ▁MONUMENT
- ▁SIEGE
- ▁CHINESE
- ▁RAVEN
- ▁LOUNG
- ▁ASSASSIN
- ▁INFLICT
- ▁AGITATED
- ▁DESIRABLE
- ▁EARLIEST
- ▁LAUNCH
- ▁PILOT
- ▁PULSE
- ▁MUTE
- LEIGH
- ▁LIQUOR
- ▁SCARECROW
- ▁SKULL
- ▁DESOLATE
- ▁SUBLIME
- ▁SERENE
- ▁RECESS
- ▁WAKING
- ▁CHARLOTTE
- ▁CIRCULAR
- ▁INJUSTICE
- ▁PINOCCHIO
- ▁PRISCILLA
- ▁THYSELF
- ▁OCCURRENCE
- ▁CASUAL
- ▁FRANTIC
- ▁LEGEND
- ▁FERTIL
- ▁BACKGROUND
- ▁DELICACY
- ▁ESTRALLA
- ▁MANUSCRIPT
- ▁RESPONSE
- ▁UNIVERSITY
- ▁WOLVES
- ▁SCANDAL
- ▁STUMBLE
- ▁HOARSE
- ▁BODILY
- ▁CONVENT
- ▁EXAMINING
- ▁INCAPABLE
- ▁PERCEIVING
- ▁PHILADELPHIA
- ▁SUBSEQUENT
- ▁THIEVES
- ▁ACCUMULAT
- ▁DAMSEL
- ▁SCOTCH
- ▁UNDERNEATH
- ▁NOBILITY
- ▁SMASH
- ▁REVOLT
- ▁ENGAGE
- ▁CATHEDRAL
- ▁CHAMPION
- ▁DESPATCH
- ▁ETERNITY
- ▁JANUARY
- ▁PLEADED
- ▁PROBABILITY
- ▁JIMMIE
- ▁PARALLEL
- ▁FISHERMAN
- ▁JERRY
- ▁SWORE
- ▁DRAUGHT
- ▁OPPONENT
- ▁PRIMITIVE
- ▁SIGNIFICANCE
- ▁SUBSTANTIAL
- ▁AMAZED
- ▁DUNBAR
- ▁COMMEND
- ▁CONTEMPLATE
- ▁TESTIMONY
- ▁IMPERIAL
- ▁ADAPT
- ▁JUICE
- ▁CALAMIT
- CULAR
- ▁CHATEAU
- ▁PHOENIX
- ▁PRUDENT
- ▁SOLUTION
- ▁VILLEFORT
- ▁REACTION
- ▁RELAX
- ▁YU
- ▁PROHIBIT
- ▁DISTRUST
- ▁PLUNDER
- ▁WELFARE
- ▁NAVIGAT
- ▁PARLOR
- ▁LAZY
- ▁DETACH
- OMETER
- ▁PRIV
- ▁DISCOURAGE
- ▁OBSTINATE
- ▁REJOICING
- ▁SERMON
- ▁VEHICLE
- ▁FANCIES
- ▁ENLIGHTEN
- ▁ACUTE
- ▁ILLUSION
- ▁ANTHEA
- ▁MARTIAN
- ▁EXCITE
- ▁GENEROSITY
- OLOGIST
- ▁AMAZING
- ▁UNWORTHY
- ▁INTERNAL
- ▁INCENSE
- ▁VIBRAT
- ▁ADHERE
- ROACH
- ▁FEBRUARY
- ▁MEXICAN
- ▁POTATOES
- ▁INCESSANT
- ▁INTERPOSED
- ▁PARCEL
- ▁VEXED
- ▁PROMOTE
- MIDST
- ▁ARISTOCRAT
- ▁CYRIL
- ▁EMBARK
- ▁ABUNDANCE
- ▁LITERALLY
- ▁SURGEON
- ▁TERRACE
- ▁ATLANTIC
- ▁MARTYR
- ▁SPECK
- ▁SENATE
- ▁LOAF
- ▁ADMINISTER
- ▁APPREHEND
- ▁SUBDUED
- ▁TEMPORARY
- ▁DOMINION
- ▁ELABORATE
- ▁DIGNIFIED
- ▁ELIZA
- ▁SPLASH
- ▁CONSEIL
- ▁DEXTER
- ▁UNSEEN
- ▁TRAGIC
- VOCATION
- ▁GRATIFY
- ▁BACHELOR
- ▁DEFENSE
- ▁EXCURSION
- ▁FACULTIES
- ▁PROPRIETOR
- ▁SYMPATHETIC
- ▁UNNECESSARY
- ▁RADIANT
- ▁VACANT
- ▁OUNCE
- ▁SCREW
- ▁PHENOMENON
- ▁PROMINENT
- ▁WORRIED
- ▁STUDIES
- ▁CLIMATE
- ▁KEITH
- ▁ARAMIS
- ▁BLISS
- ▁CONTINUAL
- ▁SURPASS
- ▁HEBREW
- ▁IDENTITY
- ▁PROVOKE
- ▁TEMPERAMENT
- ▁CHARIOT
- ▁HARBOR
- ▁NINTH
- ▁PRIOR
- ▁DESIROUS
- ▁JERUSALEM
- ▁UNDERTAKING
- ▁EDISON
- ▁MIRTH
- ▁SCOUT
- ▁APPARATUS
- ▁ILLUSTRATION
- ▁INTELLIGIBLE
- ▁INVARIABLY
- ▁PIERCED
- ▁REVIEW
- ▁FLICKER
- ▁HAZARD
- ▁REVELATION
- ▁DIXON
- ▁EXCITING
- ▁GOSPEL
- ▁CONSTANCE
- ▁OVERTAKE
- ▁GUINEA
- ▁ALADDIN
- ▁CHICAGO
- ▁TULLIVER
- ▁HAMILTON
- ▁GARRISON
- ▁DISCIPLE
- ▁INTENSITY
- ▁TRAITOR
- ▁CHANCELLOR
- ▁PROVERB
- ▁DAGGER
- ▁FORESEE
- ▁CONFIDE
- ▁GLIMMER
- ▁CHAUVELIN
- ▁ILLUSTRATE
- ▁VOLUNTEER
- ▁JUNGLE
- ▁STREAK
- ▁SUNRISE
- ▁DISSOLV
- ▁QUEST
- ▁AWHILE
- ▁FELICITY
- ▁LEGISLATURE
- ▁LEONORA
- ▁MAGAZINE
- ▁PITIFUL
- ▁COLONY
- ▁SHAWL
- ▁ARRIVING
- ▁FUNDAMENTAL
- ▁CARPENTER
- ▁OVERFLOW
- ▁EXPAND
- ▁HARVEST
- ▁FEMININE
- ▁INNUMERABLE
- ▁SCRAMBLE
- ▁TWENTIETH
- ▁TRIFLING
- ▁GHASTL
- ▁CONQUEST
- ▁DANIEL
- ▁FACILIT
- ▁FORSAKE
- ▁BEHAVIOUR
- ▁GORGEOUS
- ▁PRODUCING
- ▁HAPPIER
- ▁PROMISING
- ▁RAINBOW
- ▁INSTINCTIVELY
- ▁DECREE
- ▁EYEBROWS
- ▁IRRESISTIBLE
- ▁PHARAOH
- ▁SCROOGE
- ▁UNNATURAL
- ▁CRUMBS
- ▁REFINED
- ▁DREARY
- ▁TRENCH
- ▁CONVINCE
- ▁FRINGE
- ▁EXTREMITY
- ▁INTIMACY
- ▁SCOUNDREL
- ▁SUFFRAGE
- ▁UNEASINESS
- ▁BARRICADE
- ▁CIRCULAT
- ▁SAMUEL
- ▁BRUCE
- ▁DARCY
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wavlm_large
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/simpleoier_librispeech_asr_train_asr_conformer7_wavlm_large_raw_en_bpe5000_sp
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/simpleoier\_librispeech\_asr\_train\_asr\_conformer7\_wavlm\_large\_raw\_en\_bpe5000\_sp'
This model was trained by simpleoier using librispeech recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Tue Jan 4 20:52:48 EST 2022'
* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.5a1'
* pytorch version: 'pytorch 1.8.1'
* Git hash: ''
+ Commit date: ''
asr\_train\_asr\_conformer7\_wavlm\_large\_raw\_en\_bpe5000\_sp
---------------------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/simpleoier\\_librispeech\\_asr\\_train\\_asr\\_conformer7\\_wavlm\\_large\\_raw\\_en\\_bpe5000\\_sp'\n\n\nThis model was trained by simpleoier using librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Jan 4 20:52:48 EST 2022'\n* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.5a1'\n* pytorch version: 'pytorch 1.8.1'\n* Git hash: ''\n\t+ Commit date: ''\n\n\nasr\\_train\\_asr\\_conformer7\\_wavlm\\_large\\_raw\\_en\\_bpe5000\\_sp\n---------------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/simpleoier\\_librispeech\\_asr\\_train\\_asr\\_conformer7\\_wavlm\\_large\\_raw\\_en\\_bpe5000\\_sp'\n\n\nThis model was trained by simpleoier using librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Jan 4 20:52:48 EST 2022'\n* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.5a1'\n* pytorch version: 'pytorch 1.8.1'\n* Git hash: ''\n\t+ Commit date: ''\n\n\nasr\\_train\\_asr\\_conformer7\\_wavlm\\_large\\_raw\\_en\\_bpe5000\\_sp\n---------------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `su_openslr36`
♻️ Imported from https://zenodo.org/record/5090135/
This model was trained by su_openslr36 using su_openslr36/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
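Until the official snippet above is filled in, the following is a minimal usage sketch of how an ESPnet2 ASR model like this one is typically loaded for inference; the `Speech2Text.from_pretrained` helper (from `espnet2` / `espnet_model_zoo`) and the 16 kHz mono `sample.wav` file are assumptions, not part of the original card.
```python
# Hypothetical usage sketch (not an official demo): assumes the
# espnet_model_zoo / espnet2 inference API and a 16 kHz mono WAV file.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# The model tag is assumed to resolve to this repository on the Hub.
speech2text = Speech2Text.from_pretrained("espnet/su_openslr36")

speech, rate = soundfile.read("sample.wav")  # 16 kHz mono audio
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]
print(text)
```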
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "su", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["su_openslr36"]}
|
espnet/su_openslr36
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"su",
"dataset:su_openslr36",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"su"
] |
TAGS
#espnet #audio #automatic-speech-recognition #su #dataset-su_openslr36 #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR pretrained model
### 'su_openslr36'
️ Imported from URL
This model was trained by su_openslr36 using su_openslr36/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR pretrained model",
"### 'su_openslr36'\n️ Imported from URL\n\nThis model was trained by su_openslr36 using su_openslr36/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #su #dataset-su_openslr36 #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR pretrained model",
"### 'su_openslr36'\n️ Imported from URL\n\nThis model was trained by su_openslr36 using su_openslr36/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/sujay_catslu_map`
This model was trained by Sujay S Kumar using catslu recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout e31965d55993766461f0964216a0bb9aea3cfb7a
pip install -e .
cd egs2/catslu/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/sujay_catslu_map
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Oct 3 12:53:16 EDT 2021`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a3`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `b41391336042a4876e30d9fe5c66afb4e4be404c`
- Commit date: `Wed Sep 22 10:02:03 2021 -0400`
## asr_train_asr_smaller_aishell_xlsr_raw_zh_word
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/test|1577|11441|46.1|30.1|23.7|2.5|56.4|81.3|
|inference_asr_model_valid.acc.ave_5best/valid|921|6438|49.4|29.2|21.4|2.7|53.4|79.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_5best/test|1577|45924|74.4|13.0|12.5|3.2|28.8|81.3|
|inference_asr_model_valid.acc.ave_5best/valid|921|26110|77.0|11.9|11.1|2.7|25.7|79.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_smaller_aishell_xlsr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp_train_asr_smaller_aishell_xlsr/asr_train_asr_smaller_aishell_xlsr_raw_zh_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - train
- loss
- min
- - valid
- loss
- min
- - train
- acc
- max
- - valid
- acc
- max
keep_nbest_models: 5
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/train/speech_shape
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/train/text_shape.word
valid_shape_file:
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/valid/speech_shape
- exp_train_asr_smaller_aishell_xlsr/asr_stats_raw_zh_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 2500
token_list:
- <blank>
- <unk>
- 航
- 导
- inform_操作_none
- inform_终点名称_none
- 去
- none_none_none
- 我
- 到
- inform_poi名称_none
- unknown
- 要
- 市
- side
- 一
- 个
- 路
- 区
- 第
- 大
- 县
- 你
- inform_序列号_none
- 小
- 城
- 站
- 家
- 南
- 中
- 山
- 州
- 好
- 镇
- 场
- 的
- 院
- 西
- 店
- 东
- 车
- 阳
- 学
- 北
- 园
- dialect
- 安
- 新
- 海
- 回
- 公
- 医
- 二
- 不
- 三
- 广
- 天
- 村
- 有
- 闭
- 开
- 酒
- 下
- 江
- 消
- 人
- 帮
- 金
- 是
- 取
- 花
- 近
- 政
- 民
- 口
- 十
- 里
- 河
- 府
- 请
- 关
- 国
- 了
- 华
- 那
- 高
- robot
- 出
- 平
- 湖
- 在
- 省
- 定
- 号
- 门
- 想
- 街
- 四
- 道
- 水
- 龙
- 京
- 啊
- 地
- 行
- 么
- 五
- 都
- 桥
- 上
- 给
- 明
- 业
- 哪
- 附
- 八
- 宁
- 心
- 长
- 馆
- 百
- 这
- 汽
- 机
- 工
- 庄
- 方
- 商
- 司
- 石
- 确
- 兴
- 火
- 走
- 乡
- 万
- 通
- 加
- 银
- 青
- 发
- 校
- 速
- 交
- 退
- 德
- 际
- 电
- 楼
- 宾
- 找
- 苑
- 和
- 嗯
- 油
- 林
- 乐
- 景
- 打
- 达
- 来
- 七
- 川
- inform_请求类型_none
- 最
- noise
- 兰
- 湾
- 台
- 所
- 保
- 什
- 福
- 建
- 说
- 就
- 沙
- 页
- 宝
- 子
- 厂
- 科
- 尔
- 光
- inform_页码_none
- 六
- 费
- 环
- 成
- 昌
- 吗
- 汉
- 白
- 黄
- 限
- 局
- 泉
- 怎
- 云
- 武
- 源
- 吃
- 前
- 点
- 收
- 物
- 滨
- 溪
- 马
- 贵
- 务
- 世
- 岛
- 没
- 生
- 常
- 理
- 会
- 们
- 重
- 浦
- 名
- 合
- 运
- 顺
- 美
- 儿
- 头
- 乌
- 设
- 厦
- 化
- 郑
- 时
- inform_poi目标_none
- 现
- 农
- 港
- 泰
- 停
- 宜
- 昆
- 九
- 对
- 管
- 看
- 界
- 张
- 庆
- 文
- 博
- 嘉
- 零
- 苏
- 能
- 面
- 客
- 红
- 搜
- 远
- 古
- 津
- 始
- 王
- 呃
- 用
- 瑞
- 后
- 雅
- 带
- 流
- 木
- 之
- 汇
- 夏
- 他
- 还
- 清
- 临
- 服
- 渡
- 日
- 幺
- 济
- 田
- 锦
- 吉
- 呀
- 利
- 神
- 饭
- 香
- 太
- 双
- 永
- 图
- 洲
- 集
- 特
- 吧
- request_位置_none
- 技
- 把
- 寺
- 爱
- 丰
- 春
- 盛
- 罗
- 队
- 也
- 亚
- 线
- 玉
- 哦
- 贸
- 果
- 连
- 正
- 结
- 与
- 米
- 鲁
- 警
- 信
- 捷
- 样
- 温
- 岭
- 丽
- 育
- 凤
- 位
- 听
- 动
- 可
- 原
- 年
- 经
- 纪
- 齐
- 索
- inform_对象_none
- 义
- 多
- 叫
- 况
- 气
- 老
- 派
- 池
- 曲
- 营
- 返
- 置
- 品
- 程
- 同
- 辉
- 批
- 音
- 康
- 威
- 幼
- 斯
- 库
- 拉
- 星
- 团
- 风
- 岗
- 话
- 放
- 泽
- 晋
- 部
- 知
- 外
- 塔
- 沈
- 奇
- 卫
- 月
- 庭
- 眼
- 总
- 梅
- 房
- 千
- 哈
- 自
- 字
- 呢
- 豪
- 直
- 盘
- 屯
- 超
- 祥
- 佳
- 恒
- 过
- 以
- 两
- 蓝
- 修
- 入
- 松
- 铁
- 职
- 珠
- 凯
- 快
- 丹
- 体
- 书
- 游
- 转
- 莱
- 寨
- 克
- 当
- 李
- 钱
- s
- 货
- 惠
- 格
- 岳
- 淮
- 束
- 社
- 莞
- 森
- 堵
- 内
- 蒙
- 分
- 柏
- 富
- 碧
- 凰
- 陵
- 桐
- 边
- 坡
- 胶
- 得
- 力
- 滚
- 喀
- 旗
- 料
- 歌
- 块
- 滩
- 查
- 虹
- 续
- 为
- 驾
- 许
- 峰
- 问
- 真
- 视
- 选
- 接
- 语
- 洪
- 众
- 全
- 徽
- 鄂
- 实
- 未
- 杭
- 尚
- 胜
- 塘
- 产
- 鱼
- 叉
- 岸
- 洛
- 随
- 哎
- 配
- 丁
- 继
- 迪
- 牛
- 坪
- 无
- 深
- 圳
- 韩
- 法
- 灵
- 迁
- 间
- 逼
- 步
- 咸
- 期
- 菜
- 紫
- 邢
- 赣
- 横
- 播
- 鼎
- 进
- 止
- 铜
- 便
- 鸡
- 巴
- 仁
- 财
- 佛
- 桂
- 官
- 英
- 绵
- 奥
- 矿
- 波
- 治
- 元
- 首
- 钟
- 计
- 飞
- 坊
- 阿
- 代
- 周
- 朝
- 固
- 错
- 向
- 潭
- 隆
- 装
- 纳
- 伊
- 将
- 军
- 师
- 途
- 影
- 怀
- 择
- 药
- 术
- 手
- 于
- 离
- 族
- 莲
- 布
- 呼
- 峡
- 迈
- 委
- 叮
- 咚
- 阴
- 宏
- 郡
- 健
- 本
- 洋
- 再
- 支
- 划
- 郊
- 绿
- 妈
- 旅
- 堰
- 肥
- 玛
- 左
- 网
- inform_途经点名称_none
- 拜
- 材
- inform_终点修饰_none
- 辽
- 煤
- 谢
- 则
- 土
- 草
- 埠
- 伦
- 堂
- 卡
- 肉
- 底
- 灯
- 树
- 寻
- 掉
- 展
- 庙
- 赵
- 余
- 见
- 望
- 故
- 事
- 相
- 杨
- inform_终点目标_none
- 馨
- 税
- 属
- 资
- 井
- 艺
- 越
- 微
- 包
- 阜
- 记
- 窗
- 维
- 甲
- 鑫
- 休
- 啥
- 锡
- 渝
- 岩
- 彩
- 少
- 处
- 往
- 从
- 封
- 联
- 觉
- 验
- 容
- 萨
- 普
- 弄
- 干
- 强
- 鲜
- 柳
- 衡
- 规
- request_路况_none
- 靖
- 沃
- 板
- 防
- 约
- 球
- 居
- 至
- 坝
- 翠
- 持
- 具
- 烟
- 榆
- 枫
- 照
- 意
- 目
- t
- 凌
- 邦
- 报
- 码
- 轻
- 欣
- 复
- 买
- 玻
- 璃
- 住
- 恩
- 女
- 嘴
- 级
- 振
- 邵
- 浴
- 茂
- 黔
- 您
- 比
- 显
- 渭
- 钢
- 妇
- 易
- 党
- 版
- 介
- 姐
- 才
- 览
- k
- 崇
- 桃
- 厅
- 虎
- 皮
- 仪
- 赤
- 寓
- 洞
- 绍
- 饰
- 很
- 病
- 度
- 胡
- 像
- 邮
- 又
- 充
- 贤
- 御
- 然
- 潍
- 基
- 启
- 聊
- 驶
- inform_路线偏好_none
- 澄
- 几
- 等
- 塑
- 监
- 办
- 沧
- 亭
- 观
- 螺
- 领
- 秀
- 咋
- 坨
- 奎
- 优
- 半
- 贡
- 唐
- 写
- 今
- 慢
- 傻
- 反
- 次
- 甘
- 肃
- 它
- 泗
- 贺
- 拍
- 咱
- 留
- ktv
- 察
- 顶
- 啦
- 别
- 润
- 谷
- 仙
- 慧
- 朱
- 靠
- 座
- 锅
- 麦
- 雁
- 羊
- 共
- 邓
- 荣
- 食
- 陕
- 邑
- 右
- 铺
- 梁
- 宣
- 幸
- 哥
- 士
- 员
- 招
- 番
- 徐
- 检
- 巷
- 私
- 堡
- 跟
- 器
- 峪
- 立
- 氏
- 教
- 圣
- 购
- 印
- 黑
- 完
- 条
- 唉
- 燕
- 屿
- 闸
- 茶
- 任
- 种
- 蛋
- 荆
- 岔
- inform_value_none
- 黎
- 奉
- 准
- 熟
- 薛
- 朔
- 范
- 械
- 菲
- 雪
- 腾
- 备
- 琼
- 尹
- 垣
- 吴
- 示
- 嫖
- 宫
- 冲
- 毛
- 绘
- 菏
- 嘞
- 浙
- 遵
- 各
- 饶
- 嗷
- 简
- 施
- 俱
- 岚
- 豆
- 栋
- 险
- 岘
- 滇
- 叶
- 卓
- 荔
- 刘
- 滕
- 系
- 统
- e
- 做
- 巡
- 坐
- 研
- 究
- 盐
- 冀
- 象
- 斗
- 娄
- 先
- 陆
- deny_操作_none
- 户
- 额
- 价
- 更
- 拆
- 溧
- 量
- 帝
- 断
- 态
- 智
- 蜀
- 庐
- 舟
- 摄
- 泡
- 洗
- 历
- 咖
- 啡
- 湘
- 甸
- 泾
- 卖
- 朗
- 芜
- 棠
- 凉
- 嵩
- 焦
- 让
- 夫
- 吐
- 童
- 薇
- 旺
- 浩
- 息
- 裕
- 禄
- 睡
- 狮
- 质
- 樱
- 递
- 鸣
- 句
- 韶
- 色
- 典
- 厉
- 测
- 应
- 尉
- 汤
- 己
- 宸
- 漳
- 证
- 沟
- 巩
- 扬
- 笨
- 旁
- 湟
- 主
- 浪
- 殡
- request_前方路况_none
- 竹
- 列
- 季
- 唱
- 冠
- 泥
- 懂
- 秋
- 君
- 祁
- 声
- 拥
- 曹
- 嘛
- 静
- 嗨
- 起
- 刚
- 墨
- 宿
- 络
- 襄
- 葫
- 芦
- 漫
- 峨
- 需
- 眉
- 瓦
- 如
- 根
- 域
- 式
- 何
- 鞍
- 饺
- 票
- 冶
- 喷
- 映
- 组
- 昭
- 延
- 萌
- 角
- 解
- 玲
- 蟹
- 晃
- 瀑
- 纽
- 逸
- 些
- 猪
- 蹄
- 亲
- 野
- 蒋
- 喂
- 荷
- 窝
- 锁
- 试
- 桑
- 沥
- 非
- 制
- 督
- 贝
- 址
- 识
- 侬
- 烧
- 翡
- 堤
- 伟
- 驼
- 昊
- 牌
- 陶
- 室
- 轩
- 鹰
- 钉
- 空
- 着
- 蛳
- 已
- 砖
- 姓
- 顿
- 麓
- 亿
- 售
- 功
- 淄
- 澳
- 斜
- 击
- 活
- 缴
- 输
- 雍
- 鄄
- 降
- 革
- 恢
- 卸
- 承
- 箬
- 澧
- 栈
- 疗
- 传
- 媒
- 血
- 战
- 舞
- 姨
- 婆
- 辆
- 蚌
- 鹅
- 剧
- 湛
- 亳
- b
- 敦
- 煌
- 迎
- 味
- 数
- 妞
- 嫂
- 厚
- hi
- 邹
- 摁
- 榄
- 梨
- 亮
- 纺
- 婚
- 培
- 训
- inform_起点名称_none
- 护
- 霍
- 升
- 考
- m
- 呗
- 摩
- 送
- 段
- 悦
- 餐
- 早
- 议
- 互
- 助
- 抚
- 慈
- 按
- 调
- 杰
- 份
- 兵
- 粥
- 邻
- 墅
- 鬃
- 泳
- 朋
- 良
- 缘
- 鼓
- 赛
- 枝
- 藏
- 鸿
- 冷
- 匀
- 征
- 欢
- 闯
- 汝
- 讲
- 肤
- 响
- 浮
- 录
- 冰
- 圆
- 算
- 思
- 储
- 蓄
- 苗
- 聚
- 湿
- 肇
- 阆
- 拿
- 沣
- 渔
- 铝
- 植
- 托
- 盟
- 宇
- 但
- 渠
- 告
- 丘
- 拓
- 陇
- 鹤
- 操
- 珙
- deny_poi名称_none
- 询
- 攀
- 寿
- 副
- 或
- 假
- 焰
- 夜
- 妓
- 而
- 漆
- 濮
- 胥
- 密
- 志
- 苹
- 彭
- 陪
- 添
- 满
- 章
- 骨
- 栖
- 呦
- 善
- 乖
- 姑
- 爷
- 鸟
- 璧
- 专
- 洧
- 依
- 仔
- 晨
- 沂
- 券
- 晓
- 压
- 涨
- 闻
- 男
- 诊
- 融
- 怡
- 蓬
- 廊
- 殖
- 益
- 必
- 靓
- 蒲
- beyond
- i
- love
- you
- 旋
- 尖
- 驿
- 貂
- 蝉
- 足
- 迹
- 翰
- 杏
- 牡
- 帅
- 雨
- 呈
- 迷
- 哟
- 召
- 娼
- 辛
- 顾
- 殷
- 闵
- 潮
- 脑
- 彗
- 枣
- 杆
- 洁
- 画
- 片
- 认
- 灰
- 鞋
- 宠
- 劫
- 潘
- 烤
- 破
- 隶
- 搞
- 忠
- 仕
- 郴
- 梧
- 酌
- 涵
- 醍
- 候
- 俩
- 馈
- 磨
- 骤
- 翔
- 莘
- 希
- 娅
- 剑
- 权
- 壹
- 冕
- 蛟
- 拨
- 诶
- 盖
- 楠
- 只
- 编
- 虾
- 尽
- 尧
- 晚
- 珍
- 因
- 捆
- 绑
- 端
- 盱
- 眙
- 贩
- 卷
- 养
- 陂
- 晟
- 巧
- 椿
- 毕
- 沭
- 供
- 秒
- 眠
- 状
- 璟
- 受
- 伤
- 萍
- 奔
- 效
- 禽
- 玫
- 瑰
- request_剩余距离_none
- 序
- 鹃
- 齿
- 厕
- 厨
- 忻
- 埔
- 茅
- 芳
- 雕
- 刻
- 蜜
- 筝
- g
- 橄
- 畜
- 牧
- 仑
- 臣
- 溆
- 纱
- 卉
- 群
- 痛
- 疼
- 仟
- 赶
- 紧
- 闫
- 嘶
- 潼
- 烽
- 勾
- 驰
- 麻
- 烦
- 遍
- 樟
- 浜
- 极
- 酷
- 晶
- 穿
- 芽
- 害
- 钓
- 棍
- 核
- 橙
- 琴
- 滋
- 柯
- 箐
- 株
- 陌
- 坤
- 炳
- 槐
- 协
- 湄
- 滏
- 旦
- 策
- 虞
- 陈
- 情
- 潞
- 藁
- 豹
- 若
- 垃
- 圾
- 舰
- 造
- 珥
- 董
- 泼
- 乾
- 瑶
- 龚
- 撤
- 钛
- 责
- 吶
- 喜
- 隔
- 碗
- 倒
- 椰
- 冬
- 伯
- 乳
- 隐
- 尼
- 境
- 圩
- 卧
- 抱
- 使
- 玩
- 饮
- 峤
- 炉
- 终
- 霸
- 晴
- 糕
- 疫
- 弥
- 萧
- 围
- 邬
- 贞
- 逊
- 祠
- 泛
- 逯
- 侯
- 距
- 织
- 谋
- 嵋
- 楚
- 瑜
- 妹
- 误
- 念
- 镜
- 粮
- 涮
- 值
- 鹿
- 捞
- 沅
- 移
- 涉
- 模
- 饿
- 佩
- 汀
- 朐
- 魔
- 细
- 者
- 暖
- 汕
- 谛
- 棣
- 敖
- 此
- 背
- 鲅
- 圈
- 逻
- 绕
- 锋
- 班
- 珲
- 汾
- 著
- 参
- 且
- 摇
- 宕
- 缅
- 柔
- 脂
- 肪
- 变
- 谱
- 积
- 礼
- 凡
- 落
- 羽
- 歇
- 仰
- 聋
- 雷
- 磊
- 繁
- 吭
- 皇
- 晖
- 粤
- 腊
- 习
- 题
- 绅
- 畔
- 啤
- 弋
- 匹
- 订
- 单
- ok
- 灶
- 描
- 婺
- 沿
- 莉
- 弘
- 茵
- 换
- 屏
- 瞎
- 较
- 岁
- 湫
- 塞
- 疏
- 勒
- 涟
- 巫
- 违
- 戈
- 吾
- 脏
- 葛
- 轮
- 胎
- 霞
- 鹭
- 废
- 稍
- 谨
- 慎
- 淡
- 注
- 每
- 既
- 删
- 喝
- 付
- 诸
- 暨
- 戴
- 綦
- 伍
- 诚
- 坦
- 兜
- 残
- 韵
- 喽
- 廖
- 麒
- 麟
- n
- 感
- 籍
- 难
- 死
- 笑
- 哭
- 孩
- 频
- 舍
- 溶
- 垸
- 淀
- 奸
- 改
- 藤
- 狭
- 隧
- 翁
- 陀
- 扎
- 肯
- 揭
- 壁
- 件
- 刷
- 牙
- 节
- 恋
- 淹
- 桦
- 幢
- 棉
- 俺
- 屎
- 彬
- 牟
- 亩
- 傣
- 裴
- 翼
- 辰
- 剪
- 挡
- 凹
- 投
- 碣
- 妆
- 荡
- 驻
- 颍
- 狐
- 享
- 恐
- 汶
- 寅
- 仍
- 睿
- 搁
- 尊
- 泊
- 仲
- 午
- 枞
- 仓
- 卞
- 瀚
- 佰
- 暮
- 拐
- 崔
- 榭
- 棵
- 孕
- 潜
- 俏
- 葡
- 萄
- 采
- 摘
- 癜
- 屑
- 芙
- 蓉
- 咏
- 忙
- 漂
- 父
- 母
- 差
- 彻
- 魏
- 绥
- 闲
- 遥
- 棕
- 榈
- 壶
- 疆
- 苍
- 磁
- 辅
- 泸
- 淅
- a
- 呐
- 燃
- 沱
- 禺
- 宛
- 友
- 俊
- 筑
- 贾
- 宋
- 梯
- 吨
- inform_poi修饰_none
- 础
- 碑
- request_剩余路程_none
- 创
- 孙
- 枢
- 翟
- 浑
- 糖
- 舜
- 橱
- 柜
- 浠
- 莒
- 乔
- 幕
- 磅
- 嘿
- 曼
- 昔
- 衣
- 铭
- 浏
- 喆
- 垦
- 墓
- 戍
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: wav2vec2_xlsr
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 4
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.10.3a3
distributed: false
```
</details>
## LM config
<details><summary>expand</summary>
```
NONE
```
</details>
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["catslu"]}
|
espnet/sujay_catslu_map
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:catslu",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#espnet #audio #automatic-speech-recognition #zh #dataset-catslu #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/sujay\_catslu\_map'
This model was trained by Sujay S Kumar using catslu recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Sun Oct 3 12:53:16 EDT 2021'
* python version: '3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.3a3'
* pytorch version: 'pytorch 1.8.1+cu102'
* Git hash: 'b41391336042a4876e30d9fe5c66afb4e4be404c'
+ Commit date: 'Wed Sep 22 10:02:03 2021 -0400'
asr\_train\_asr\_smaller\_aishell\_xlsr\_raw\_zh\_word
------------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
LM config
---------
expand
|
[
"### 'espnet/sujay\\_catslu\\_map'\n\n\nThis model was trained by Sujay S Kumar using catslu recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Sun Oct 3 12:53:16 EDT 2021'\n* python version: '3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.3a3'\n* pytorch version: 'pytorch 1.8.1+cu102'\n* Git hash: 'b41391336042a4876e30d9fe5c66afb4e4be404c'\n\t+ Commit date: 'Wed Sep 22 10:02:03 2021 -0400'\n\n\nasr\\_train\\_asr\\_smaller\\_aishell\\_xlsr\\_raw\\_zh\\_word\n------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand\n\nLM config\n---------\n\n\nexpand"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #zh #dataset-catslu #license-cc-by-4.0 #region-us \n",
"### 'espnet/sujay\\_catslu\\_map'\n\n\nThis model was trained by Sujay S Kumar using catslu recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Sun Oct 3 12:53:16 EDT 2021'\n* python version: '3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.3a3'\n* pytorch version: 'pytorch 1.8.1+cu102'\n* Git hash: 'b41391336042a4876e30d9fe5c66afb4e4be404c'\n\t+ Commit date: 'Wed Sep 22 10:02:03 2021 -0400'\n\n\nasr\\_train\\_asr\\_smaller\\_aishell\\_xlsr\\_raw\\_zh\\_word\n------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand\n\nLM config\n---------\n\n\nexpand"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `https://zenodo.org/record/5845307/files/asr_conformer_ar_valid.acc.ave.zip?download=1`
♻️ Imported from https://zenodo.org/record/5845307/files/asr_conformer_ar_valid.acc.ave.zip?download=1
This model was trained by vectominist using seame/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": ["en", "zh", "multilingual"], "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["seame"]}
|
espnet/vectominist_seame_asr_conformer_bpe5626
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"zh",
"multilingual",
"dataset:seame",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en",
"zh",
"multilingual"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #zh #multilingual #dataset-seame #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR pretrained model
### 'URL
️ Imported from URL
This model was trained by vectominist using seame/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR pretrained model",
"### 'URL\n️ Imported from URL\n\nThis model was trained by vectominist using seame/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #zh #multilingual #dataset-seame #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR pretrained model",
"### 'URL\n️ Imported from URL\n\nThis model was trained by vectominist using seame/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
# ESPnet2 ASR pretrained model
## `Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch, fs=16k, lang=en`
This model was trained by Takashi Maekaku using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Python API
```text
See https://github.com/espnet/espnet_model_zoo
```
### Evaluate in the recipe
```python
# coming soon
```
### Results
```bash
# RESULTS
## Environments
- date: `Fri Aug 6 11:44:39 JST 2021`
- python version: `3.7.9 (default, Apr 23 2021, 13:48:31) [GCC 5.5.0 20171010]`
- espnet version: `espnet 0.9.9`
- pytorch version: `pytorch 1.7.0`
- Git hash: `0f7558a716ab830d0c29da8785840124f358d47b`
- Commit date: `Tue Jun 8 15:33:49 2021 -0400`
## asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|54402|98.5|1.3|0.2|0.2|1.7|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|50948|96.8|2.8|0.4|0.3|3.4|33.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|52576|98.4|1.4|0.2|0.2|1.8|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|52343|96.8|2.8|0.4|0.4|3.6|36.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|288456|99.6|0.2|0.2|0.2|0.6|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|265951|98.8|0.6|0.6|0.3|1.5|33.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|281530|99.6|0.2|0.2|0.2|0.6|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|272758|98.9|0.5|0.5|0.4|1.4|36.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|68010|98.2|1.3|0.5|0.4|2.2|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|63110|96.0|2.8|1.2|0.6|4.6|33.7|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|65818|98.1|1.3|0.6|0.4|2.3|22.1|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|65101|96.0|2.7|1.3|0.6|4.6|36.0|
```
### Training config
See full config in [`config.yaml`](./exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp/config.yaml)
```yaml
config: conf/tuning/train_asr_conformer7_hubert_960hr_large.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp
ngpu: 3
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 3
local_rank: 3
dist_master_addr: localhost
dist_master_port: 33643
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"], "inference": false}
|
espnet/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #license-cc-by-4.0 #region-us
|
# ESPnet2 ASR pretrained model
## 'Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch, fs=16k, lang=en'
This model was trained by Takashi Maekaku using librispeech recipe in espnet.
### Python API
### Evaluate in the recipe
### Results
### Training config
See full config in 'URL'
|
[
"# ESPnet2 ASR pretrained model",
"## 'Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch, fs=16k, lang=en'\n\nThis model was trained by Takashi Maekaku using librispeech recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #license-cc-by-4.0 #region-us \n",
"# ESPnet2 ASR pretrained model",
"## 'Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp_26epoch, fs=16k, lang=en'\n\nThis model was trained by Takashi Maekaku using librispeech recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
automatic-speech-recognition
|
espnet
|
# ESPnet2 ASR pretrained model
## `Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp_25epoch, fs=16k, lang=en`
This model was trained by Takashi Maekaku using librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Python API
```text
See https://github.com/espnet/espnet_model_zoo
```
### Evaluate in the recipe
```python
# coming soon
```
### Results
```bash
# RESULTS
## Environments
- date: `Sat Jul 3 23:10:19 JST 2021`
- python version: `3.7.9 (default, Apr 23 2021, 13:48:31) [GCC 5.5.0 20171010]`
- espnet version: `espnet 0.9.9`
- pytorch version: `pytorch 1.7.0`
- Git hash: `0f7558a716ab830d0c29da8785840124f358d47b`
- Commit date: `Tue Jun 8 15:33:49 2021 -0400`
## asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|54402|98.3|1.6|0.2|0.2|1.9|24.9|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|50948|95.1|4.3|0.6|0.4|5.4|42.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|52576|98.1|1.7|0.2|0.2|2.2|26.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|52343|95.3|4.1|0.6|0.5|5.2|45.8|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|288456|99.5|0.2|0.2|0.2|0.6|24.9|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|265951|98.1|1.0|0.9|0.5|2.4|42.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|281530|99.5|0.2|0.3|0.2|0.7|26.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|272758|98.3|0.8|0.9|0.5|2.3|45.8|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_clean|2703|68010|97.8|1.6|0.6|0.4|2.6|24.9|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/dev_other|2864|63110|94.1|4.3|1.6|1.1|7.0|42.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_clean|2620|65818|97.6|1.6|0.8|0.4|2.8|26.8|
|decode_asr_lm_lm_train_lm_transformer2_en_bpe5000_17epoch_asr_model_valid.acc.best/test_other|2939|65101|94.3|4.0|1.8|1.0|6.7|45.8|
```
### Training config
See full config in [`config.yaml`](./exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp/config.yaml)
```yaml
config: conf/tuning/train_asr_conformer7_hubert_960hr_large.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer7_hubert_960hr_large_raw_en_bpe5000_sp
ngpu: 3
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 3
local_rank: 3
dist_master_addr: localhost
dist_master_port: 33643
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"], "inference": false}
|
espnet/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_larg-truncated-5b94d9
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #license-cc-by-4.0 #region-us
|
# ESPnet2 ASR pretrained model
## 'Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp_25epoch, fs=16k, lang=en'
This model was trained by Takashi Maekaku using librispeech recipe in espnet.
### Python API
### Evaluate in the recipe
### Results
### Training config
See full config in 'URL'
|
[
"# ESPnet2 ASR pretrained model",
"## 'Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp_25epoch, fs=16k, lang=en'\n\nThis model was trained by Takashi Maekaku using librispeech recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #license-cc-by-4.0 #region-us \n",
"# ESPnet2 ASR pretrained model",
"## 'Xuankai Chang/xuankai_chang_librispeech_asr_train_asr_conformer7_wav2vec2_960hr_large_raw_en_bpe5000_sp_25epoch, fs=16k, lang=en'\n\nThis model was trained by Takashi Maekaku using librispeech recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
audio-to-audio
|
espnet
|
# ESPnet2 ENH pretrained model
## `neillu23/dns_ins20_enh_train_enh_blstm_tf_raw_valid.loss.best, fs=16k, lang=en`
♻️ Imported from <https://zenodo.org/record/4923697#.YOAOIpozZH4>.
This model was trained by neillu23 using dns_ins20 recipe in [espnet](https://github.com/espnet/espnet/).
### Python API
```text
See https://github.com/espnet/espnet_model_zoo
```
### Evaluate in the recipe
```python
# coming soon
```
### Results
```bash
# RESULTS
## Environments
- date: `Wed Jun 9 09:49:34 CST 2021`
- python version: `3.8.10 (default, May 19 2021, 18:05:58) [GCC 7.3.0]`
- espnet version: `espnet 0.9.9`
- pytorch version: `pytorch 1.4.0`
- Git hash: `c1dfefb98bf59f654e0907b9681668eaca8ddfcc`
- Commit date: `Tue Jun 8 17:23:26 2021 +0800`
## enh_train_enh_blstm_tf_raw
config: ./conf/tuning/train_enh_blstm_tf.yaml
|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_cv_synthetic|0.98|23.87|23.87|0.00|
|enhanced_tt_synthetic_no_reverb|0.96|15.94|15.94|0.00|
|enhanced_tt_synthetic_with_reverb|0.84|11.86|11.86|0.00|
```
### Training config
See full config in [`config.yaml`](./exp/enh_train_enh_blstm_tf_raw/config.yaml)
```yaml
config: ./conf/tuning/train_enh_blstm_tf.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/enh_train_enh_blstm_tf_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45398
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "audio-source-separation", "audio-to-audio"], "datasets": ["dns_ins20"], "inference": false}
|
espnet/yen-ju-lu-dns_ins20_enh_train_enh_blstm_tf_raw_valid.loss.best
| null |
[
"espnet",
"audio",
"audio-source-separation",
"audio-to-audio",
"en",
"dataset:dns_ins20",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#espnet #audio #audio-source-separation #audio-to-audio #en #dataset-dns_ins20 #license-cc-by-4.0 #region-us
|
# ESPnet2 ENH pretrained model
## 'neillu23/dns_ins20_enh_train_enh_blstm_tf_raw_valid.URL, fs=16k, lang=en'
️ Imported from <URL
This model was trained by neillu23 using dns_ins20 recipe in espnet.
### Python API
### Evaluate in the recipe
### Results
### Training config
See full config in 'URL'
|
[
"# ESPnet2 ENH pretrained model",
"## 'neillu23/dns_ins20_enh_train_enh_blstm_tf_raw_valid.URL, fs=16k, lang=en'\n\n️ Imported from <URL\n\nThis model was trained by neillu23 using dns_ins20 recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
[
"TAGS\n#espnet #audio #audio-source-separation #audio-to-audio #en #dataset-dns_ins20 #license-cc-by-4.0 #region-us \n",
"# ESPnet2 ENH pretrained model",
"## 'neillu23/dns_ins20_enh_train_enh_blstm_tf_raw_valid.URL, fs=16k, lang=en'\n\n️ Imported from <URL\n\nThis model was trained by neillu23 using dns_ins20 recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
text-generation
| null |
# Bot Edan
|
{"tags": ["conversational"]}
|
estehpanas/pascalbot
| null |
[
"conversational",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#conversational #region-us
|
# Bot Edan
|
[
"# Bot Edan"
] |
[
"TAGS\n#conversational #region-us \n",
"# Bot Edan"
] |
question-answering
|
transformers
|
# camembert-base-squadFR-fquad-piaf
## Description
Question-answering French model, using base [CamemBERT](https://camembert-model.fr/) fine-tuned on a combo of three French Q&A datasets:
1. [PIAFv1.1](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)
2. [FQuADv1.0](https://fquad.illuin.tech/)
3. [SQuAD-FR (SQuAD automatically translated to French)](https://github.com/Alikabbadj/French-SQuAD)
## Training hyperparameters
```shell
python run_squad.py \
--model_type camembert \
--model_name_or_path camembert-base \
--do_train --do_eval \
--train_file data/SQuAD+fquad+piaf.json \
--predict_file data/fquad_valid.json \
--per_gpu_train_batch_size 12 \
--learning_rate 3e-5 \
--num_train_epochs 4 \
--max_seq_length 384 \
--doc_stride 128 \
--save_steps 10000
```
## Evaluation results
### FQuAD v1.0 Evaluation
```shell
{"f1": 79.81, "exact_match": 55.14}
```
### SQuAD-FR Evaluation
```shell
{"f1": 80.61, "exact_match": 59.54}
```
## Usage
```python
from transformers import pipeline
nlp = pipeline('question-answering', model='etalab-ia/camembert-base-squadFR-fquad-piaf', tokenizer='etalab-ia/camembert-base-squadFR-fquad-piaf')
nlp({
'question': "Qui est Claude Monet?",
'context': "Claude Monet, né le 14 novembre 1840 à Paris et mort le 5 décembre 1926 à Giverny, est un peintre français et l’un des fondateurs de l'impressionnisme."
})
```
## Acknowledgments
This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224).
## Citations
### PIAF
```
@inproceedings{KeraronLBAMSSS20,
author = {Rachel Keraron and
Guillaume Lancrenon and
Mathilde Bras and
Fr{\'{e}}d{\'{e}}ric Allary and
Gilles Moyse and
Thomas Scialom and
Edmundo{-}Pavel Soriano{-}Morales and
Jacopo Staiano},
title = {Project {PIAF:} Building a Native French Question-Answering Dataset},
booktitle = {{LREC}},
pages = {5481--5490},
publisher = {European Language Resources Association},
year = {2020}
}
```
### FQuAD
```
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
### SQuAD-FR
```
@MISC{kabbadj2018,
author = "Kabbadj, Ali",
title = "Something new in French Text Mining and Information Extraction (Universal Chatbot): Largest Q&A French training dataset (110 000+) ",
editor = "linkedin.com",
month = "November",
year = "2018",
url = "\url{https://www.linkedin.com/pulse/something-new-french-text-mining-information-chatbot-largest-kabbadj/}",
note = "[Online; posted 11-November-2018]",
}
```
### CamemBERT
HF model card : [https://huggingface.co/camembert-base](https://huggingface.co/camembert-base)
```
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
|
{"language": "fr", "datasets": ["piaf", "FQuAD", "SQuAD-FR"], "widget": [{"text": "Comment s'appelle le portail open data du gouvernement ?", "context": "Etalab est une administration publique fran\u00e7aise qui fait notamment office de Chief Data Officer de l'\u00c9tat et coordonne la conception et la mise en \u0153uvre de sa strat\u00e9gie dans le domaine de la donn\u00e9e (ouverture et partage des donn\u00e9es publiques ou open data, exploitation des donn\u00e9es et intelligence artificielle...). Ainsi, Etalab d\u00e9veloppe et maintient le portail des donn\u00e9es ouvertes du gouvernement fran\u00e7ais data.gouv.fr. Etalab promeut \u00e9galement une plus grande ouverture l'administration sur la soci\u00e9t\u00e9 (gouvernement ouvert) : transparence de l'action publique, innovation ouverte, participation citoyenne... elle promeut l\u2019innovation, l\u2019exp\u00e9rimentation, les m\u00e9thodes de travail ouvertes, agiles et it\u00e9ratives, ainsi que les synergies avec la soci\u00e9t\u00e9 civile pour d\u00e9cloisonner l\u2019administration et favoriser l\u2019adoption des meilleures pratiques professionnelles dans le domaine du num\u00e9rique. \u00c0 ce titre elle \u00e9tudie notamment l\u2019opportunit\u00e9 de recourir \u00e0 des technologies en voie de maturation issues du monde de la recherche. Cette entit\u00e9 charg\u00e9e de l'innovation au sein de l'administration doit contribuer \u00e0 l'am\u00e9lioration du service public gr\u00e2ce au num\u00e9rique. Elle est rattach\u00e9e \u00e0 la Direction interminist\u00e9rielle du num\u00e9rique, dont les missions et l\u2019organisation ont \u00e9t\u00e9 fix\u00e9es par le d\u00e9cret du 30 octobre 2019.\u2009 Dirig\u00e9 par Laure Lucchesi depuis 2016, elle rassemble une \u00e9quipe pluridisciplinaire d'une trentaine de personnes."}]}
|
AgentPublic/camembert-base-squadFR-fquad-piaf
| null |
[
"transformers",
"pytorch",
"tf",
"safetensors",
"camembert",
"question-answering",
"fr",
"dataset:piaf",
"dataset:FQuAD",
"dataset:SQuAD-FR",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#transformers #pytorch #tf #safetensors #camembert #question-answering #fr #dataset-piaf #dataset-FQuAD #dataset-SQuAD-FR #endpoints_compatible #region-us
|
# camembert-base-squadFR-fquad-piaf
## Description
Question-answering French model, using base CamemBERT fine-tuned on a combo of three French Q&A datasets:
1. PIAFv1.1
2. FQuADv1.0
3. SQuAD-FR (SQuAD automatically translated to French)
## Training hyperparameters
## Evaluation results
### FQuAD v1.0 Evaluation
### SQuAD-FR Evaluation
## Usage
## Acknowledgments
This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224).
s
### PIAF
### FQuAD
### SQuAD-FR
### CamemBERT
HF model card : URL
|
[
"# camembert-base-squadFR-fquad-piaf",
"## Description\n\nQuestion-answering French model, using base CamemBERT fine-tuned on a combo of three French Q&A datasets:\n\n1. PIAFv1.1\n2. FQuADv1.0\n3. SQuAD-FR (SQuAD automatically translated to French)",
"## Training hyperparameters",
"## Evaluation results",
"### FQuAD v1.0 Evaluation",
"### SQuAD-FR Evaluation",
"## Usage",
"## Acknowledgments\n\nThis work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224). \n\ns",
"### PIAF",
"### FQuAD",
"### SQuAD-FR",
"### CamemBERT\nHF model card : URL"
] |
[
"TAGS\n#transformers #pytorch #tf #safetensors #camembert #question-answering #fr #dataset-piaf #dataset-FQuAD #dataset-SQuAD-FR #endpoints_compatible #region-us \n",
"# camembert-base-squadFR-fquad-piaf",
"## Description\n\nQuestion-answering French model, using base CamemBERT fine-tuned on a combo of three French Q&A datasets:\n\n1. PIAFv1.1\n2. FQuADv1.0\n3. SQuAD-FR (SQuAD automatically translated to French)",
"## Training hyperparameters",
"## Evaluation results",
"### FQuAD v1.0 Evaluation",
"### SQuAD-FR Evaluation",
"## Usage",
"## Acknowledgments\n\nThis work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224). \n\ns",
"### PIAF",
"### FQuAD",
"### SQuAD-FR",
"### CamemBERT\nHF model card : URL"
] |
null |
transformers
|
# dpr-ctx_encoder-fr_qa-camembert
## Description
French [DPR model](https://arxiv.org/abs/2004.04906) using [CamemBERT](https://arxiv.org/abs/1911.03894) as base and then fine-tuned on a combo of three French Q&A datasets
## Data
### French Q&A
We use a combination of three French Q&A datasets:
1. [PIAFv1.1](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)
2. [FQuADv1.0](https://fquad.illuin.tech/)
3. [SQuAD-FR (SQuAD automatically translated to French)](https://github.com/Alikabbadj/French-SQuAD)
### Training
We are using 90 562 random questions for `train` and 22 391 for `dev`. No question in `train` exists in `dev`. For each question, we have a single `positive_context` (the paragraph where the answer to this question is found) and around 30 `hard_negtive_contexts`. Hard negative contexts are found by querying an ES instance (via bm25 retrieval) and getting the top-k candidates **that do not contain the answer**.
The files are over [here](https://drive.google.com/file/d/1W5Jm3sqqWlsWsx2sFpA39Ewn33PaLQ7U/view?usp=sharing).
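As a concrete illustration of the hard-negative mining step described above, the sketch below queries a BM25 index and drops candidates that contain the gold answer; the index name, field name, candidate count, and client version (elasticsearch-py 8.x) are assumptions, not the exact code used to produce the released files.
```python
# Illustrative sketch of the BM25 hard-negative mining described above.
# Index name, field name and candidate count are assumptions (elasticsearch-py 8.x API).
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def mine_hard_negatives(question: str, answer: str, k: int = 30):
    """Return up to k BM25 candidates that do NOT contain the gold answer."""
    res = es.search(
        index="passages",                     # assumed index of context paragraphs
        query={"match": {"text": question}},  # plain BM25 match on passage text
        size=k,
    )
    candidates = [hit["_source"]["text"] for hit in res["hits"]["hits"]]
    return [p for p in candidates if answer.lower() not in p.lower()]
```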
### Evaluation
We use FQuADv1.0 and French-SQuAD evaluation sets.
## Training Script
We use the official [Facebook DPR implementation](https://github.com/facebookresearch/DPR) with a slight modification: by default, the code can work with Roberta models; still, we changed a single line to make it easier to work with Camembert. This modification can be found [over here](https://github.com/psorianom/DPR).
### Hyperparameters
```shell
python -m torch.distributed.launch --nproc_per_node=8 train_dense_encoder.py \
--max_grad_norm 2.0 \
--encoder_model_type fairseq_roberta \
--pretrained_file data/camembert-base \
--seed 12345 \
--sequence_length 256 \
--warmup_steps 1237 \
--batch_size 16 \
--do_lower_case \
--train_file ./data/DPR_FR_train.json \
--dev_file ./data/DPR_FR_dev.json \
--output_dir ./output/ \
--learning_rate 2e-05 \
--num_train_epochs 35 \
--dev_batch_size 16 \
--val_av_rank_start_epoch 30 \
--pretrained_model_cfg ./data/camembert-base/
```
###
## Evaluation results
We obtain the following evaluation by using FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use [haystack's evaluation script](https://github.com/deepset-ai/haystack/blob/db4151bbc026f27c6d709fefef1088cd3f1e18b9/tutorials/Tutorial5_Evaluation.py) (**we report Retrieval results only**).
### DPR
#### FQuAD v1.0 Evaluation
```shell
For 2764 out of 3184 questions (86.81%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.87
Retriever Mean Avg Precision: 0.57
```
#### SQuAD-FR Evaluation
```shell
For 8945 out of 10018 questions (89.29%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.89
Retriever Mean Avg Precision: 0.63
```
### BM25
For reference, BM25 gets the results shown below. As in the original paper, regarding SQuAD-like datasets, the results of DPR are consistently superseded by BM25.
#### FQuAD v1.0 Evaluation
```shell
For 2966 out of 3184 questions (93.15%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.74
```
#### SQuAD-FR Evaluation
```shell
For 9353 out of 10018 questions (93.36%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.77
```
## Usage
The results reported here are obtained with the `haystack` library. To obtain similar embeddings using exclusively the HF `transformers` library, you can do the following:
```python
from transformers import AutoTokenizer, AutoModel
query = "Salut, mon chien est-il mignon ?"
tokenizer = AutoTokenizer.from_pretrained("etalab-ia/dpr-ctx_encoder-fr_qa-camembert", do_lower_case=True)
input_ids = tokenizer(query, return_tensors='pt')["input_ids"]
model = AutoModel.from_pretrained("etalab-ia/dpr-ctx_encoder-fr_qa-camembert", return_dict=True)
embeddings = model.forward(input_ids).pooler_output
print(embeddings)
```
And with `haystack`, we use it as a retriever:
```
retriever = DensePassageRetriever(
document_store=document_store,
query_embedding_model="etalab-ia/dpr-question_encoder-fr_qa-camembert",
passage_embedding_model="etalab-ia/dpr-ctx_encoder-fr_qa-camembert",
model_version=dpr_model_tag,
infer_tokenizer_classes=True,
)
```
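To make the retrieval step itself concrete, here is a minimal sketch of DPR-style scoring with the two encoders referenced above: the question and passage embeddings are compared with a dot product. It reuses the `pooler_output` convention of the `transformers` snippet above; the example question and passages are illustrative only.
```python
# Minimal sketch of DPR-style scoring: dot product between question and passage embeddings.
# Assumes both encoders expose pooler_output, as in the snippet above.
import torch
from transformers import AutoModel, AutoTokenizer

q_name = "etalab-ia/dpr-question_encoder-fr_qa-camembert"
p_name = "etalab-ia/dpr-ctx_encoder-fr_qa-camembert"
q_tok, q_enc = AutoTokenizer.from_pretrained(q_name), AutoModel.from_pretrained(q_name)
p_tok, p_enc = AutoTokenizer.from_pretrained(p_name), AutoModel.from_pretrained(p_name)

question = "Qui est Claude Monet ?"
passages = [
    "Claude Monet est un peintre français, l'un des fondateurs de l'impressionnisme.",
    "Le Louvre est un musée situé à Paris.",
]

with torch.no_grad():
    q_emb = q_enc(**q_tok(question, return_tensors="pt")).pooler_output
    p_emb = p_enc(**p_tok(passages, return_tensors="pt", padding=True, truncation=True)).pooler_output

scores = q_emb @ p_emb.T  # higher dot product = more relevant passage
print(scores)
```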
## Acknowledgments
This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224).
## Citations
### Datasets
#### PIAF
```
@inproceedings{KeraronLBAMSSS20,
author = {Rachel Keraron and
Guillaume Lancrenon and
Mathilde Bras and
Fr{\'{e}}d{\'{e}}ric Allary and
Gilles Moyse and
Thomas Scialom and
Edmundo{-}Pavel Soriano{-}Morales and
Jacopo Staiano},
title = {Project {PIAF:} Building a Native French Question-Answering Dataset},
booktitle = {{LREC}},
pages = {5481--5490},
publisher = {European Language Resources Association},
year = {2020}
}
```
#### FQuAD
```
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
#### SQuAD-FR
```
@MISC{kabbadj2018,
author = "Kabbadj, Ali",
title = "Something new in French Text Mining and Information Extraction (Universal Chatbot): Largest Q&A French training dataset (110 000+) ",
editor = "linkedin.com",
month = "November",
year = "2018",
url = "\url{https://www.linkedin.com/pulse/something-new-french-text-mining-information-chatbot-largest-kabbadj/}",
note = "[Online; posted 11-November-2018]",
}
```
### Models
#### CamemBERT
HF model card : [https://huggingface.co/camembert-base](https://huggingface.co/camembert-base)
```
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
#### DPR
```
@misc{karpukhin2020dense,
title={Dense Passage Retrieval for Open-Domain Question Answering},
author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
year={2020},
eprint={2004.04906},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "fr", "datasets": ["piaf", "FQuAD", "SQuAD-FR"]}
|
AgentPublic/dpr-ctx_encoder-fr_qa-camembert
| null |
[
"transformers",
"pytorch",
"camembert",
"fr",
"dataset:piaf",
"dataset:FQuAD",
"dataset:SQuAD-FR",
"arxiv:2004.04906",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2004.04906",
"1911.03894"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #camembert #fr #dataset-piaf #dataset-FQuAD #dataset-SQuAD-FR #arxiv-2004.04906 #arxiv-1911.03894 #endpoints_compatible #region-us
|
# dpr-ctx_encoder-fr_qa-camembert
## Description
French DPR model using CamemBERT as base and then fine-tuned on a combo of three French Q&A
## Data
### French Q&A
We use a combination of three French Q&A datasets:
1. PIAFv1.1
2. FQuADv1.0
3. SQuAD-FR (SQuAD automatically translated to French)
### Training
We are using 90 562 random questions for 'train' and 22 391 for 'dev'. No question in 'train' exists in 'dev'. For each question, we have a single 'positive_context' (the paragraph where the answer to this question is found) and around 30 'hard_negtive_contexts'. Hard negative contexts are found by querying an ES instance (via bm25 retrieval) and getting the top-k candidates that do not contain the answer.
The files are over here.
### Evaluation
We use FQuADv1.0 and French-SQuAD evaluation sets.
## Training Script
We use the official Facebook DPR implementation with a slight modification: by default, the code can work with RoBERTa models, but we changed a single line to make it easier to work with CamemBERT. This modification can be found over here.
### Hyperparameters
###
## Evaluation results
We obtain the following evaluation by using FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use haystack's evaluation script (we report Retrieval results only).
### DPR
#### FQuAD v1.0 Evaluation
#### SQuAD-FR Evaluation
### BM25
For reference, BM25 gets the results shown below. As in the original paper, on SQuAD-like datasets the results of DPR are consistently surpassed by BM25.
#### FQuAD v1.0 Evaluation
#### SQuAD-FR Evaluation
## Usage
The results reported here are obtained with the 'haystack' library. To get to similar embeddings using exclusively HF 'transformers' library, you can do the following:
And with 'haystack', we use it as a retriever:
## Acknowledgments
This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224).
s
### Datasets
#### PIAF
#### FQuAD
#### SQuAD-FR
### Models
#### CamemBERT
HF model card : URL
#### DPR
|
[
"# dpr-ctx_encoder-fr_qa-camembert",
"## Description\n\nFrench DPR model using CamemBERT as base and then fine-tuned on a combo of three French Q&A",
"## Data",
"### French Q&A \nWe use a combination of three French Q&A datasets: \n\n1. PIAFv1.1\n2. FQuADv1.0\n3. SQuAD-FR (SQuAD automatically translated to French)",
"### Training\n\n\nWe are using 90 562 random questions for 'train' and 22 391 for 'dev'. No question in 'train' exists in 'dev'. For each question, we have a single 'positive_context' (the paragraph where the answer to this question is found) and around 30 'hard_negtive_contexts'. Hard negative contexts are found by querying an ES instance (via bm25 retrieval) and getting the top-k candidates that do not contain the answer. \n\nThe files are over here.",
"### Evaluation\n\n\nWe use FQuADv1.0 and French-SQuAD evaluation sets.",
"## Training Script\nWe use the official Facebook DPR implentation with a slight modification: by default, the code can work with Roberta models, still we changed a single line to make it easier to work with Camembert. This modification can be found over here.",
"### Hyperparameters",
"###",
"## Evaluation results\nWe obtain the following evaluation by using FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use haystack's evaluation script (we report Retrieval results only).",
"### DPR",
"#### FQuAD v1.0 Evaluation",
"#### SQuAD-FR Evaluation",
"### BM25\n\n\nFor reference, BM25 gets the results shown below. As in the original paper, regarding SQuAD-like datasets, the results of DPR are consistently superseeded by BM25.",
"#### FQuAD v1.0 Evaluation",
"#### SQuAD-FR Evaluation",
"## Usage\n\nThe results reported here are obtained with the 'haystack' library. To get to similar embeddings using exclusively HF 'transformers' library, you can do the following:\n\n\n\nAnd with 'haystack', we use it as a retriever:",
"## Acknowledgments\n\nThis work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224). \n\n\ns",
"### Datasets",
"#### PIAF",
"#### FQuAD",
"#### SQuAD-FR",
"### Models",
"#### CamemBERT\nHF model card : URL",
"#### DPR"
] |
[
"TAGS\n#transformers #pytorch #camembert #fr #dataset-piaf #dataset-FQuAD #dataset-SQuAD-FR #arxiv-2004.04906 #arxiv-1911.03894 #endpoints_compatible #region-us \n",
"# dpr-ctx_encoder-fr_qa-camembert",
"## Description\n\nFrench DPR model using CamemBERT as base and then fine-tuned on a combo of three French Q&A",
"## Data",
"### French Q&A \nWe use a combination of three French Q&A datasets: \n\n1. PIAFv1.1\n2. FQuADv1.0\n3. SQuAD-FR (SQuAD automatically translated to French)",
"### Training\n\n\nWe are using 90 562 random questions for 'train' and 22 391 for 'dev'. No question in 'train' exists in 'dev'. For each question, we have a single 'positive_context' (the paragraph where the answer to this question is found) and around 30 'hard_negtive_contexts'. Hard negative contexts are found by querying an ES instance (via bm25 retrieval) and getting the top-k candidates that do not contain the answer. \n\nThe files are over here.",
"### Evaluation\n\n\nWe use FQuADv1.0 and French-SQuAD evaluation sets.",
"## Training Script\nWe use the official Facebook DPR implentation with a slight modification: by default, the code can work with Roberta models, still we changed a single line to make it easier to work with Camembert. This modification can be found over here.",
"### Hyperparameters",
"###",
"## Evaluation results\nWe obtain the following evaluation by using FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use haystack's evaluation script (we report Retrieval results only).",
"### DPR",
"#### FQuAD v1.0 Evaluation",
"#### SQuAD-FR Evaluation",
"### BM25\n\n\nFor reference, BM25 gets the results shown below. As in the original paper, regarding SQuAD-like datasets, the results of DPR are consistently superseeded by BM25.",
"#### FQuAD v1.0 Evaluation",
"#### SQuAD-FR Evaluation",
"## Usage\n\nThe results reported here are obtained with the 'haystack' library. To get to similar embeddings using exclusively HF 'transformers' library, you can do the following:\n\n\n\nAnd with 'haystack', we use it as a retriever:",
"## Acknowledgments\n\nThis work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224). \n\n\ns",
"### Datasets",
"#### PIAF",
"#### FQuAD",
"#### SQuAD-FR",
"### Models",
"#### CamemBERT\nHF model card : URL",
"#### DPR"
] |
feature-extraction
|
transformers
|
# dpr-question_encoder-fr_qa-camembert
## Description
French [DPR model](https://arxiv.org/abs/2004.04906) using [CamemBERT](https://arxiv.org/abs/1911.03894) as its base, then fine-tuned on a combination of three French Q&A datasets.
## Data
### French Q&A
We use a combination of three French Q&A datasets:
1. [PIAFv1.1](https://www.data.gouv.fr/en/datasets/piaf-le-dataset-francophone-de-questions-reponses/)
2. [FQuADv1.0](https://fquad.illuin.tech/)
3. [SQuAD-FR (SQuAD automatically translated to French)](https://github.com/Alikabbadj/French-SQuAD)
### Training
We are using 90 562 random questions for `train` and 22 391 for `dev`. No question in `train` exists in `dev`. For each question, we have a single `positive_context` (the paragraph where the answer to this question is found) and around 30 `hard_negative_contexts`. Hard negative contexts are found by querying an ES instance (via BM25 retrieval) and taking the top-k candidates **that do not contain the answer**.
The files are over [here](https://drive.google.com/file/d/1W5Jm3sqqWlsWsx2sFpA39Ewn33PaLQ7U/view?usp=sharing).
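To make the format concrete, one training record has roughly the following shape (a hand-written illustration using the field names mentioned above; see the linked files and the Facebook DPR repository for the exact schema):
```python
# Illustrative only: one question with its gold passage and BM25-mined hard negatives.
train_example = {
    "question": "Quelle est la capitale de la France ?",   # hypothetical question
    "positive_context": {
        "title": "Paris",
        "text": "Paris est la capitale de la France ...",  # paragraph containing the answer
    },
    "hard_negative_contexts": [
        # ~30 top BM25 candidates that do NOT contain the answer
        {"title": "Lyon", "text": "Lyon est une commune française ..."},
    ],
}
```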
### Evaluation
We use FQuADv1.0 and French-SQuAD evaluation sets.
## Training Script
We use the official [Facebook DPR implementation](https://github.com/facebookresearch/DPR) with a slight modification: by default, the code can work with RoBERTa models, but we changed a single line to make it easier to work with CamemBERT. This modification can be found [over here](https://github.com/psorianom/DPR).
### Hyperparameters
```shell
python -m torch.distributed.launch --nproc_per_node=8 train_dense_encoder.py \
--max_grad_norm 2.0 --encoder_model_type hf_bert --pretrained_file data/bert-base-multilingual-uncased \
--seed 12345 --sequence_length 256 --warmup_steps 1237 --batch_size 16 --do_lower_case \
--train_file DPR_FR_train.json \
--dev_file ./data/100_hard_neg_ctxs/DPR_FR_dev.json \
--output_dir ./output/bert --learning_rate 2e-05 --num_train_epochs 35 \
--dev_batch_size 16 --val_av_rank_start_epoch 25 \
--pretrained_model_cfg ./data/bert-base-multilingual-uncased
```
###
## Evaluation results
We obtain the following evaluation by using FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use [haystack's evaluation script](https://github.com/deepset-ai/haystack/blob/db4151bbc026f27c6d709fefef1088cd3f1e18b9/tutorials/Tutorial5_Evaluation.py) (**we report Retrieval results only**).
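To make the reported figures concrete, the sketch below shows roughly how retriever recall@20 and mean average precision can be computed from ranked candidate lists. This is a simplification for illustration, not haystack's actual evaluation script.
```python
def retriever_metrics(results, k=20):
    """results[i][j] is True when the j-th ranked passage for question i contains the answer."""
    found, average_precisions = 0, []
    for ranked in results:
        ranked = ranked[:k]
        if any(ranked):
            found += 1
        hits, precisions = 0, []
        for rank, is_relevant in enumerate(ranked, start=1):
            if is_relevant:
                hits += 1
                precisions.append(hits / rank)
        average_precisions.append(sum(precisions) / hits if hits else 0.0)
    return found / len(results), sum(average_precisions) / len(results)

# Two toy questions: answer retrieved at rank 1 for the first, never retrieved for the second.
recall, mean_ap = retriever_metrics([[True] + [False] * 19, [False] * 20])
print(recall, mean_ap)  # 0.5 0.5
```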
### DPR
#### FQuAD v1.0 Evaluation
```shell
For 2764 out of 3184 questions (86.81%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.87
Retriever Mean Avg Precision: 0.57
```
#### SQuAD-FR Evaluation
```shell
For 8945 out of 10018 questions (89.29%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.89
Retriever Mean Avg Precision: 0.63
```
### BM25
For reference, BM25 gets the results shown below. As in the original paper, on SQuAD-like datasets the results of DPR are consistently surpassed by BM25.
#### FQuAD v1.0 Evaluation
```shell
For 2966 out of 3184 questions (93.15%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.74
```
#### SQuAD-FR Evaluation
```shell
For 9353 out of 10018 questions (93.36%), the answer was in the top-20 candidate passages selected by the retriever.
Retriever Recall: 0.93
Retriever Mean Avg Precision: 0.77
```
## Usage
The results reported here are obtained with the `haystack` library. To get similar embeddings using exclusively the HF `transformers` library, you can do the following:
```python
from transformers import AutoTokenizer, AutoModel
query = "Salut, mon chien est-il mignon ?"
tokenizer = AutoTokenizer.from_pretrained("etalab-ia/dpr-question_encoder-fr_qa-camembert", do_lower_case=True)
input_ids = tokenizer(query, return_tensors='pt')["input_ids"]
model = AutoModel.from_pretrained("etalab-ia/dpr-question_encoder-fr_qa-camembert", return_dict=True)
embeddings = model.forward(input_ids).pooler_output
print(embeddings)
```
And with `haystack`, we use it as a retriever:
```
retriever = DensePassageRetriever(
document_store=document_store,
query_embedding_model="etalab-ia/dpr-question_encoder-fr_qa-camembert",
passage_embedding_model="etalab-ia/dpr-ctx_encoder-fr_qa-camembert",
model_version=dpr_model_tag,
infer_tokenizer_classes=True,
)
```
## Acknowledgments
This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224).
## Citations
### Datasets
#### PIAF
```
@inproceedings{KeraronLBAMSSS20,
author = {Rachel Keraron and
Guillaume Lancrenon and
Mathilde Bras and
Fr{\'{e}}d{\'{e}}ric Allary and
Gilles Moyse and
Thomas Scialom and
Edmundo{-}Pavel Soriano{-}Morales and
Jacopo Staiano},
title = {Project {PIAF:} Building a Native French Question-Answering Dataset},
booktitle = {{LREC}},
pages = {5481--5490},
publisher = {European Language Resources Association},
year = {2020}
}
```
#### FQuAD
```
@article{dHoffschmidt2020FQuADFQ,
title={FQuAD: French Question Answering Dataset},
author={Martin d'Hoffschmidt and Maxime Vidal and Wacim Belblidia and Tom Brendl'e and Quentin Heinrich},
journal={ArXiv},
year={2020},
volume={abs/2002.06071}
}
```
#### SQuAD-FR
```
@MISC{kabbadj2018,
author = "Kabbadj, Ali",
title = "Something new in French Text Mining and Information Extraction (Universal Chatbot): Largest Q&A French training dataset (110 000+) ",
editor = "linkedin.com",
month = "November",
year = "2018",
url = "\url{https://www.linkedin.com/pulse/something-new-french-text-mining-information-chatbot-largest-kabbadj/}",
note = "[Online; posted 11-November-2018]",
}
```
### Models
#### CamemBERT
HF model card : [https://huggingface.co/camembert-base](https://huggingface.co/camembert-base)
```
@inproceedings{martin2020camembert,
title={CamemBERT: a Tasty French Language Model},
author={Martin, Louis and Muller, Benjamin and Su{\'a}rez, Pedro Javier Ortiz and Dupont, Yoann and Romary, Laurent and de la Clergerie, {\'E}ric Villemonte and Seddah, Djam{\'e} and Sagot, Beno{\^\i}t},
booktitle={Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics},
year={2020}
}
```
#### DPR
```
@misc{karpukhin2020dense,
title={Dense Passage Retrieval for Open-Domain Question Answering},
author={Vladimir Karpukhin and Barlas Oğuz and Sewon Min and Patrick Lewis and Ledell Wu and Sergey Edunov and Danqi Chen and Wen-tau Yih},
year={2020},
eprint={2004.04906},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "fr", "datasets": ["piaf", "FQuAD", "SQuAD-FR"]}
|
AgentPublic/dpr-question_encoder-fr_qa-camembert
| null |
[
"transformers",
"pytorch",
"camembert",
"feature-extraction",
"fr",
"dataset:piaf",
"dataset:FQuAD",
"dataset:SQuAD-FR",
"arxiv:2004.04906",
"arxiv:1911.03894",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2004.04906",
"1911.03894"
] |
[
"fr"
] |
TAGS
#transformers #pytorch #camembert #feature-extraction #fr #dataset-piaf #dataset-FQuAD #dataset-SQuAD-FR #arxiv-2004.04906 #arxiv-1911.03894 #endpoints_compatible #region-us
|
# dpr-question_encoder-fr_qa-camembert
## Description
French DPR model using CamemBERT as base and then fine-tuned on a combo of three French Q&A
## Data
### French Q&A
We use a combination of three French Q&A datasets:
1. PIAFv1.1
2. FQuADv1.0
3. SQuAD-FR (SQuAD automatically translated to French)
### Training
We are using 90 562 random questions for 'train' and 22 391 for 'dev'. No question in 'train' exists in 'dev'. For each question, we have a single 'positive_context' (the paragraph where the answer to this question is found) and around 30 'hard_negtive_contexts'. Hard negative contexts are found by querying an ES instance (via bm25 retrieval) and getting the top-k candidates that do not contain the answer.
The files are over here.
### Evaluation
We use FQuADv1.0 and French-SQuAD evaluation sets.
## Training Script
We use the official Facebook DPR implementation with a slight modification: by default, the code can work with RoBERTa models, but we changed a single line to make it easier to work with CamemBERT. This modification can be found over here.
### Hyperparameters
###
## Evaluation results
We obtain the following evaluation by using FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use haystack's evaluation script (we report Retrieval results only).
### DPR
#### FQuAD v1.0 Evaluation
#### SQuAD-FR Evaluation
### BM25
For reference, BM25 gets the results shown below. As in the original paper, on SQuAD-like datasets the results of DPR are consistently surpassed by BM25.
#### FQuAD v1.0 Evaluation
#### SQuAD-FR Evaluation
## Usage
The results reported here are obtained with the 'haystack' library. To get to similar embeddings using exclusively HF 'transformers' library, you can do the following:
And with 'haystack', we use it as a retriever:
## Acknowledgments
This work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224).
s
### Datasets
#### PIAF
#### FQuAD
#### SQuAD-FR
### Models
#### CamemBERT
HF model card : URL
#### DPR
|
[
"# dpr-question_encoder-fr_qa-camembert",
"## Description\n\nFrench DPR model using CamemBERT as base and then fine-tuned on a combo of three French Q&A",
"## Data",
"### French Q&A \nWe use a combination of three French Q&A datasets: \n\n1. PIAFv1.1\n2. FQuADv1.0\n3. SQuAD-FR (SQuAD automatically translated to French)",
"### Training\n\n\nWe are using 90 562 random questions for 'train' and 22 391 for 'dev'. No question in 'train' exists in 'dev'. For each question, we have a single 'positive_context' (the paragraph where the answer to this question is found) and around 30 'hard_negtive_contexts'. Hard negative contexts are found by querying an ES instance (via bm25 retrieval) and getting the top-k candidates that do not contain the answer. \n\nThe files are over here.",
"### Evaluation\n\n\nWe use FQuADv1.0 and French-SQuAD evaluation sets.",
"## Training Script\nWe use the official Facebook DPR implentation with a slight modification: by default, the code can work with Roberta models, still we changed a single line to make it easier to work with Camembert. This modification can be found over here.",
"### Hyperparameters",
"###",
"## Evaluation results\nWe obtain the following evaluation by using FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use haystack's evaluation script (we report Retrieval results only).",
"### DPR",
"#### FQuAD v1.0 Evaluation",
"#### SQuAD-FR Evaluation",
"### BM25\n\n\nFor reference, BM25 gets the results shown below. As in the original paper, regarding SQuAD-like datasets, the results of DPR are consistently superseeded by BM25.",
"#### FQuAD v1.0 Evaluation",
"#### SQuAD-FR Evaluation",
"## Usage\n\nThe results reported here are obtained with the 'haystack' library. To get to similar embeddings using exclusively HF 'transformers' library, you can do the following:\n\n\n\nAnd with 'haystack', we use it as a retriever:",
"## Acknowledgments\n\nThis work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224). \n\n\ns",
"### Datasets",
"#### PIAF",
"#### FQuAD",
"#### SQuAD-FR",
"### Models",
"#### CamemBERT\nHF model card : URL",
"#### DPR"
] |
[
"TAGS\n#transformers #pytorch #camembert #feature-extraction #fr #dataset-piaf #dataset-FQuAD #dataset-SQuAD-FR #arxiv-2004.04906 #arxiv-1911.03894 #endpoints_compatible #region-us \n",
"# dpr-question_encoder-fr_qa-camembert",
"## Description\n\nFrench DPR model using CamemBERT as base and then fine-tuned on a combo of three French Q&A",
"## Data",
"### French Q&A \nWe use a combination of three French Q&A datasets: \n\n1. PIAFv1.1\n2. FQuADv1.0\n3. SQuAD-FR (SQuAD automatically translated to French)",
"### Training\n\n\nWe are using 90 562 random questions for 'train' and 22 391 for 'dev'. No question in 'train' exists in 'dev'. For each question, we have a single 'positive_context' (the paragraph where the answer to this question is found) and around 30 'hard_negtive_contexts'. Hard negative contexts are found by querying an ES instance (via bm25 retrieval) and getting the top-k candidates that do not contain the answer. \n\nThe files are over here.",
"### Evaluation\n\n\nWe use FQuADv1.0 and French-SQuAD evaluation sets.",
"## Training Script\nWe use the official Facebook DPR implentation with a slight modification: by default, the code can work with Roberta models, still we changed a single line to make it easier to work with Camembert. This modification can be found over here.",
"### Hyperparameters",
"###",
"## Evaluation results\nWe obtain the following evaluation by using FQuAD and SQuAD-FR evaluation (or validation) sets. To obtain these results, we use haystack's evaluation script (we report Retrieval results only).",
"### DPR",
"#### FQuAD v1.0 Evaluation",
"#### SQuAD-FR Evaluation",
"### BM25\n\n\nFor reference, BM25 gets the results shown below. As in the original paper, regarding SQuAD-like datasets, the results of DPR are consistently superseeded by BM25.",
"#### FQuAD v1.0 Evaluation",
"#### SQuAD-FR Evaluation",
"## Usage\n\nThe results reported here are obtained with the 'haystack' library. To get to similar embeddings using exclusively HF 'transformers' library, you can do the following:\n\n\n\nAnd with 'haystack', we use it as a retriever:",
"## Acknowledgments\n\nThis work was performed using HPC resources from GENCI–IDRIS (Grant 2020-AD011011224). \n\n\ns",
"### Datasets",
"#### PIAF",
"#### FQuAD",
"#### SQuAD-FR",
"### Models",
"#### CamemBERT\nHF model card : URL",
"#### DPR"
] |
text-classification
|
transformers
|
# Guwen CLS
A Classical Chinese Text Classifier.
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
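A minimal usage sketch with the standard `transformers` text-classification pipeline (the example sentence is the widget example above; the returned label names depend on the model's config):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="ethanyt/guwen-cls")
print(classifier("子曰:“弟子入则孝,出则悌,谨而信,泛爱众,而亲仁。行有馀力,则以学文。”"))
```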
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch", "text classificatio"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "text-classification", "widget": [{"text": "\u5b50\u66f0\uff1a\u201c\u5f1f\u5b50\u5165\u5219\u5b5d\uff0c\u51fa\u5219\u608c\uff0c\u8c28\u800c\u4fe1\uff0c\u6cdb\u7231\u4f17\uff0c\u800c\u4eb2\u4ec1\u3002\u884c\u6709\u9980\u529b\uff0c\u5219\u4ee5\u5b66\u6587\u3002\u201d"}]}
|
ethanyt/guwen-cls
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"text classificatio",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #roberta #text-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #text classificatio #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Guwen CLS
A Classical Chinese Text Classifier.
See also:
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
|
[
"# Guwen CLS\n\nA Classical Chinese Text Classifier.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #text classificatio #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Guwen CLS\n\nA Classical Chinese Text Classifier.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
token-classification
|
transformers
|
# Guwen NER
A Classical Chinese Named Entity Recognizer.
Note: there are some problems with decoding when using the default sequence classification model. Use the CRF model to achieve the best results. For CRF-related code, please refer to
[Guwen Models](https://github.com/ethan-yt/guwen-models).
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
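For quick experiments without the CRF head, the plain token-classification pipeline can still be used as a sketch (as noted above, decoding quality is better with the CRF model from Guwen Models):
```python
from transformers import pipeline

ner = pipeline("token-classification", model="ethanyt/guwen-ner", aggregation_strategy="simple")
# Sentence taken from the widget example above.
for entity in ner("及秦始皇,灭先代典籍,焚书坑儒。"):
    print(entity["entity_group"], entity["word"], round(float(entity["score"]), 3))
```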
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "token-classification", "widget": [{"text": "\u53ca\u79e6\u59cb\u7687\uff0c\u706d\u5148\u4ee3\u5178\u7c4d\uff0c\u711a\u4e66\u5751\u5112\uff0c\u5929\u4e0b\u5b66\u58eb\u9003\u96be\u89e3\u6563\uff0c\u6211\u5148\u4eba\u7528\u85cf\u5176\u5bb6\u4e66\u4e8e\u5c4b\u58c1\u3002\u6c49\u5ba4\u9f99\u5174\uff0c\u5f00\u8bbe\u5b66\u6821\uff0c\u65c1\u6c42\u5112\u96c5\uff0c\u4ee5\u9610\u5927\u7337\u3002\u6d4e\u5357\u4f0f\u751f\uff0c\u5e74\u8fc7\u4e5d\u5341\uff0c\u5931\u5176\u672c\u7ecf\uff0c\u53e3\u4ee5\u4f20\u6388\uff0c\u88c1\u4e8c\u5341\u9980\u7bc7\uff0c\u4ee5\u5176\u4e0a\u53e4\u4e4b\u4e66\uff0c\u8c13\u4e4b\u5c1a\u4e66\u3002\u767e\u7bc7\u4e4b\u4e49\uff0c\u4e16\u83ab\u5f97\u95fb\u3002"}]}
|
ethanyt/guwen-ner
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"token-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #roberta #token-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Guwen NER
A Classical Chinese Named Entity Recognizer.
Note: there are some problems with decoding when using the default sequence classification model. Use the CRF model to achieve the best results. For CRF-related code, please refer to
Guwen Models.
See also:
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
|
[
"# Guwen NER\n\nA Classical Chinese Named Entity Recognizer.\n\nNote: There are some problems with decoding using the default sequence classification model. Use the CRF model to achieve the best results. CRF related code please refer to\nGuwen Models.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
[
"TAGS\n#transformers #pytorch #jax #roberta #token-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Guwen NER\n\nA Classical Chinese Named Entity Recognizer.\n\nNote: There are some problems with decoding using the default sequence classification model. Use the CRF model to achieve the best results. CRF related code please refer to\nGuwen Models.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
token-classification
|
transformers
|
# Guwen Punc
A Classical Chinese Punctuation Marker.
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
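A rough usage sketch: the model tags each character with a punctuation-related label, from which punctuated text can be reconstructed. The label set lives in the model's config, so printing the raw predictions is the safest starting point:
```python
from transformers import pipeline

punc = pipeline("token-classification", model="ethanyt/guwen-punc")
text = "及秦始皇灭先代典籍焚书坑儒天下学士逃难解散"  # unpunctuated text from the widget example
for pred in punc(text):
    print(pred["word"], pred["entity"], round(float(pred["score"]), 3))
```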
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch", "punctuation marker"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "token-classification", "widget": [{"text": "\u53ca\u79e6\u59cb\u7687\u706d\u5148\u4ee3\u5178\u7c4d\u711a\u4e66\u5751\u5112\u5929\u4e0b\u5b66\u58eb\u9003\u96be\u89e3\u6563\u6211\u5148\u4eba\u7528\u85cf\u5176\u5bb6\u4e66\u4e8e\u5c4b\u58c1\u6c49\u5ba4\u9f99\u5174\u5f00\u8bbe\u5b66\u6821\u65c1\u6c42\u5112\u96c5\u4ee5\u9610\u5927\u7337\u6d4e\u5357\u4f0f\u751f\u5e74\u8fc7\u4e5d\u5341\u5931\u5176\u672c\u7ecf\u53e3\u4ee5\u4f20\u6388\u88c1\u4e8c\u5341\u9980\u7bc7\u4ee5\u5176\u4e0a\u53e4\u4e4b\u4e66\u8c13\u4e4b\u5c1a\u4e66\u767e\u7bc7\u4e4b\u4e49\u4e16\u83ab\u5f97\u95fb"}]}
|
ethanyt/guwen-punc
| null |
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"punctuation marker",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #roberta #token-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #punctuation marker #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Guwen Punc
A Classical Chinese Punctuation Marker.
See also:
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
|
[
"# Guwen Punc\n\nA Classical Chinese Punctuation Marker.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
[
"TAGS\n#transformers #pytorch #roberta #token-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #punctuation marker #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Guwen Punc\n\nA Classical Chinese Punctuation Marker.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
token-classification
|
transformers
|
# Guwen Quote
A Classical Chinese Quotation Detector.
Note: there are some problems with decoding when using the default sequence classification model. Use the CRF model to achieve the best results. For CRF-related code, please refer to
[Guwen Models](https://github.com/ethan-yt/guwen-models).
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch", "quotation detection"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "token-classification", "widget": [{"text": "\u5b50\u66f0\u5b66\u800c\u65f6\u4e60\u4e4b\u4e0d\u4ea6\u8bf4\u4e4e\u6709\u670b\u81ea\u8fdc\u65b9\u6765\u4e0d\u4ea6\u4e50\u4e4e\u4eba\u4e0d\u77e5\u800c\u4e0d\u6120\u4e0d\u4ea6\u541b\u5b50\u4e4e\u6709\u5b50\u66f0\u5176\u4e3a\u4eba\u4e5f\u5b5d\u5f1f\u800c\u597d\u72af\u4e0a\u8005\u9c9c\u77e3\u4e0d\u597d\u72af\u4e0a\u800c\u597d\u4f5c\u4e71\u8005\u672a\u4e4b\u6709\u4e5f\u541b\u5b50\u52a1\u672c\u672c\u7acb\u800c\u9053\u751f\u5b5d\u5f1f\u4e5f\u8005\u5176\u4e3a\u4ec1\u4e4b\u672c\u4e0e\u5b50\u66f0\u5de7\u8a00\u4ee4\u8272\u9c9c\u77e3\u4ec1\u66fe\u5b50\u66f0\u543e\u65e5\u4e09\u7701\u543e\u8eab\u4e3a\u4eba\u8c0b\u800c\u4e0d\u5fe0\u4e4e\u4e0e\u670b\u53cb\u4ea4\u800c\u4e0d\u4fe1\u4e4e\u4f20\u4e0d\u4e60\u4e4e\u5b50\u66f0\u9053\u5343\u4e58\u4e4b\u56fd\u656c\u4e8b\u800c\u4fe1\u8282\u7528\u800c\u7231\u4eba\u4f7f\u6c11\u4ee5\u65f6"}]}
|
ethanyt/guwen-quote
| null |
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"quotation detection",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #roberta #token-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #quotation detection #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Guwen Quote
A Classical Chinese Quotation Detector.
Note: there are some problems with decoding when using the default sequence classification model. Use the CRF model to achieve the best results. For CRF-related code, please refer to
Guwen Models.
See also:
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
|
[
"# Guwen Quote\n\nA Classical Chinese Quotation Detector.\n\nNote: There are some problems with decoding using the default sequence classification model. Use the CRF model to achieve the best results. CRF related code please refer to\nGuwen Models.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
[
"TAGS\n#transformers #pytorch #roberta #token-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #quotation detection #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Guwen Quote\n\nA Classical Chinese Quotation Detector.\n\nNote: There are some problems with decoding using the default sequence classification model. Use the CRF model to achieve the best results. CRF related code please refer to\nGuwen Models.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
token-classification
|
transformers
|
# Guwen Seg
A Classical Chinese Sentence Segmenter.
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch", "sentence segmentation"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "token-classification", "widget": [{"text": "\u53ca\u79e6\u59cb\u7687\u706d\u5148\u4ee3\u5178\u7c4d\u711a\u4e66\u5751\u5112\u5929\u4e0b\u5b66\u58eb\u9003\u96be\u89e3\u6563\u6211\u5148\u4eba\u7528\u85cf\u5176\u5bb6\u4e66\u4e8e\u5c4b\u58c1\u6c49\u5ba4\u9f99\u5174\u5f00\u8bbe\u5b66\u6821\u65c1\u6c42\u5112\u96c5\u4ee5\u9610\u5927\u7337\u6d4e\u5357\u4f0f\u751f\u5e74\u8fc7\u4e5d\u5341\u5931\u5176\u672c\u7ecf\u53e3\u4ee5\u4f20\u6388\u88c1\u4e8c\u5341\u9980\u7bc7\u4ee5\u5176\u4e0a\u53e4\u4e4b\u4e66\u8c13\u4e4b\u5c1a\u4e66\u767e\u7bc7\u4e4b\u4e49\u4e16\u83ab\u5f97\u95fb"}]}
|
ethanyt/guwen-seg
| null |
[
"transformers",
"pytorch",
"roberta",
"token-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"sentence segmentation",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #roberta #token-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #sentence segmentation #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Guwen Seg
A Classical Chinese Sentence Segmenter.
See also:
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
|
[
"# Guwen Seg\n\nA Classical Chinese Sentence Segmenter.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
[
"TAGS\n#transformers #pytorch #roberta #token-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #sentence segmentation #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Guwen Seg\n\nA Classical Chinese Sentence Segmenter.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
text-classification
|
transformers
|
# Guwen Sent
A Classical Chinese Poem Sentiment Classifier.
See also:
<a href="https://github.com/ethan-yt/guwen-models">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwen-models&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/cclue/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=cclue&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
<a href="https://github.com/ethan-yt/guwenbert/">
<img align="center" width="400" src="https://github-readme-stats.vercel.app/api/pin/?username=ethan-yt&repo=guwenbert&bg_color=30,e96443,904e95&title_color=fff&text_color=fff&icon_color=fff&show_owner=true" />
</a>
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch", "sentiment classificatio"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "text-classification", "widget": [{"text": "\u6eda\u6eda\u957f\u6c5f\u4e1c\u901d\u6c34\uff0c\u6d6a\u82b1\u6dd8\u5c3d\u82f1\u96c4"}, {"text": "\u5bfb\u5bfb\u89c5\u89c5\uff0c\u51b7\u51b7\u6e05\u6e05\uff0c\u51c4\u51c4\u60e8\u60e8\u621a\u621a"}, {"text": "\u6267\u624b\u76f8\u770b\u6cea\u773c\uff0c\u7adf\u65e0\u8bed\u51dd\u564e\uff0c\u5ff5\u53bb\u53bb\uff0c\u5343\u91cc\u70df\u6ce2\uff0c\u66ae\u972d\u6c89\u6c89\u695a\u5929\u9614\u3002"}, {"text": "\u5ffd\u5982\u4e00\u591c\u6625\u98ce\u6765\uff0c\u5e72\u6811\u4e07\u6811\u68a8\u82b1\u5f00"}]}
|
ethanyt/guwen-sent
| null |
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"sentiment classificatio",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #roberta #text-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #sentiment classificatio #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
# Guwen Sent
A Classical Chinese Poem Sentiment Classifier.
See also:
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
<a href="URL
<img align="center" width="400" src="URL />
</a>
|
[
"# Guwen Sent\n\nA Classical Chinese Poem Sentiment Classifier.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
[
"TAGS\n#transformers #pytorch #roberta #text-classification #chinese #classical chinese #literary chinese #ancient chinese #bert #sentiment classificatio #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us \n",
"# Guwen Sent\n\nA Classical Chinese Poem Sentiment Classifier.\n\nSee also: \n\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>\n<a href=\"URL\n <img align=\"center\" width=\"400\" src=\"URL />\n</a>"
] |
fill-mask
|
transformers
|
# GuwenBERT
## Model description

This is a RoBERTa model pre-trained on Classical Chinese. You can fine-tune GuwenBERT for downstream tasks, such as sentence breaking, punctuation, named entity recognition, and so on.
For more information about RoBERTa, take a look at RoBERTa's official repo.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-base")
model = AutoModel.from_pretrained("ethanyt/guwenbert-base")
```
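You can also query the masked-language-modelling head directly via the fill-mask pipeline, mirroring the widget examples above (a minimal sketch):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="ethanyt/guwenbert-base")
# First widget example, with the model's own mask token inserted.
masked = f"{fill_mask.tokenizer.mask_token}太元中,武陵人捕鱼为业。"
for prediction in fill_mask(masked):
    print(prediction["token_str"], round(prediction["score"], 3))
```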
## Training data
The training data is the Daizhige dataset (殆知阁古代文献), which contains 15,694 books in Classical Chinese, covering Buddhism, Confucianism, Medicine, History, Zi, Yi, Yizang, Shizang, Taoism, and Jizang.
76% of them are punctuated.
The total number of characters is 1.7B (1,743,337,673).
All traditional characters are converted to simplified characters.
The vocabulary is constructed from this dataset and its size is 23,292.
## Training procedure
The models are initialized with `hfl/chinese-roberta-wwm-ext` and then pre-trained with a 2-step strategy.
In the first step, the model learns MLM with only word embeddings updated during training, until convergence. In the second step, all parameters are updated during training.
The models are trained on 4 V100 GPUs for 120K steps (20K for step#1, 100K for step#2) with a batch size of 2,048 and a sequence length of 512. The optimizer used is Adam with a learning rate of 2e-4, adam-betas of (0.9,0.98), adam-eps of 1e-6, a weight decay of 0.01, learning rate warmup for 5K steps, and linear decay of learning rate after.
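In PyTorch terms, step 1 of this strategy amounts to freezing everything except the word embeddings before running MLM training. A minimal sketch (the actual training loop, and the swap to the new 23,292-token vocabulary, are omitted):
```python
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("hfl/chinese-roberta-wwm-ext")

# Step 1: train only the word embeddings; everything else stays frozen.
for name, param in model.named_parameters():
    param.requires_grad = "word_embeddings" in name

# Step 2 (after step 1 converges): unfreeze all parameters and continue pre-training.
# for param in model.parameters():
#     param.requires_grad = True
```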
## Eval results
### "Gulian Cup" Ancient Books Named Entity Recognition Evaluation
Second place in the competition. Detailed test results:
| NE Type | Precision | Recall | F1 |
|:----------:|:-----------:|:------:|:-----:|
| Book Name | 77.50 | 73.73 | 75.57 |
| Other Name | 85.85 | 89.32 | 87.55 |
| Micro Avg. | 83.88 | 85.39 | 84.63 |
## About Us
We are from [Datahammer](https://datahammer.net), Beijing Institute of Technology.
For cooperation opportunities, please contact us by email: ethanyt [at] qq.com
> Created with ❤️ by Tan Yan [](https://github.com/Ethan-yt) and Zewen Chi [](https://github.com/CZWin32768)
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "[MASK]\u592a\u5143\u4e2d\uff0c\u6b66\u9675\u4eba\u6355\u9c7c\u4e3a\u4e1a\u3002"}, {"text": "\u95ee\u5f81\u592b\u4ee5\u524d\u8def\uff0c\u6068\u6668\u5149\u4e4b[MASK]\u5fae\u3002"}, {"text": "\u6d54\u9633\u6c5f\u5934\u591c\u9001\u5ba2\uff0c\u67ab\u53f6[MASK]\u82b1\u79cb\u745f\u745f\u3002"}]}
|
ethanyt/guwenbert-base
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #chinese #classical chinese #literary chinese #ancient chinese #bert #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
GuwenBERT
=========
Model description
-----------------
!GuwenBERT
This is a RoBERTa model pre-trained on Classical Chinese. You can fine-tune GuwenBERT for downstream tasks, such as sentence breaking, punctuation, named entity recognition, and so on.
For more information about RoBERTa, take a look at RoBERTa's official repo.
How to use
----------
Training data
-------------
The training data is the Daizhige dataset (殆知阁古代文献), which contains 15,694 books in Classical Chinese, covering Buddhism, Confucianism, Medicine, History, Zi, Yi, Yizang, Shizang, Taoism, and Jizang.
76% of them are punctuated.
The total number of characters is 1.7B (1,743,337,673).
All traditional characters are converted to simplified characters.
The vocabulary is constructed from this dataset and its size is 23,292.
Training procedure
------------------
The models are initialized with 'hfl/chinese-roberta-wwm-ext' and then pre-trained with a 2-step strategy.
In the first step, the model learns MLM with only word embeddings updated during training, until convergence. In the second step, all parameters are updated during training.
The models are trained on 4 V100 GPUs for 120K steps (20K for step#1, 100K for step#2) with a batch size of 2,048 and a sequence length of 512. The optimizer used is Adam with a learning rate of 2e-4, adam-betas of (0.9,0.98), adam-eps of 1e-6, a weight decay of 0.01, learning rate warmup for 5K steps, and linear decay of learning rate after.
Eval results
------------
### "Gulian Cup" Ancient Books Named Entity Recognition Evaluation
Second place in the competition. Detailed test results:
About Us
--------
We are from Datahammer, Beijing Institute of Technology.
For more cooperation, please contact email: ethanyt [at] URL
>
> Created with ️ by Tan Yan 
This is a RoBERTa model pre-trained on Classical Chinese. You can fine-tune GuwenBERT for downstream tasks, such as sentence breaking, punctuation, named entity recognition, and so on.
For more information about RoBERTa, take a look at RoBERTa's official repo.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-large")
model = AutoModel.from_pretrained("ethanyt/guwenbert-large")
```
## Training data
The training data is the Daizhige dataset (殆知阁古代文献), which contains 15,694 books in Classical Chinese, covering Buddhism, Confucianism, Medicine, History, Zi, Yi, Yizang, Shizang, Taoism, and Jizang.
76% of them are punctuated.
The total number of characters is 1.7B (1,743,337,673).
All traditional characters are converted to simplified characters.
The vocabulary is constructed from this dataset and its size is 23,292.
## Training procedure
The models are initialized with `hfl/chinese-roberta-wwm-ext-large` and then pre-trained with a 2-step strategy.
In the first step, the model learns MLM with only word embeddings updated during training, until convergence. In the second step, all parameters are updated during training.
The models are trained on 4 V100 GPUs for 120K steps (20K for step#1, 100K for step#2) with a batch size of 2,048 and a sequence length of 512. The optimizer used is Adam with a learning rate of 1e-4, adam-betas of (0.9,0.98), adam-eps of 1e-6, a weight decay of 0.01, learning rate warmup for 5K steps, and linear decay of learning rate after.
## Eval results
### "Gulian Cup" Ancient Books Named Entity Recognition Evaluation
Second place in the competition. Detailed test results:
| NE Type | Precision | Recall | F1 |
|:----------:|:-----------:|:------:|:-----:|
| Book Name | 77.50 | 73.73 | 75.57 |
| Other Name | 85.85 | 89.32 | 87.55 |
| Micro Avg. | 83.88 | 85.39 | 84.63 |
## About Us
We are from [Datahammer](https://datahammer.net), Beijing Institute of Technology.
For cooperation opportunities, please contact us by email: ethanyt [at] qq.com
> Created with ❤️ by Tan Yan [](https://github.com/Ethan-yt) and Zewen Chi [](https://github.com/CZWin32768)
|
{"language": ["zh"], "license": "apache-2.0", "tags": ["chinese", "classical chinese", "literary chinese", "ancient chinese", "bert", "pytorch"], "thumbnail": "https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png", "pipeline_tag": "fill-mask", "mask_token": "[MASK]", "widget": [{"text": "[MASK]\u592a\u5143\u4e2d\uff0c\u6b66\u9675\u4eba\u6355\u9c7c\u4e3a\u4e1a\u3002"}, {"text": "\u95ee\u5f81\u592b\u4ee5\u524d\u8def\uff0c\u6068\u6668\u5149\u4e4b[MASK]\u5fae\u3002"}, {"text": "\u6d54\u9633\u6c5f\u5934\u591c\u9001\u5ba2\uff0c\u67ab\u53f6[MASK]\u82b1\u79cb\u745f\u745f\u3002"}]}
|
ethanyt/guwenbert-large
| null |
[
"transformers",
"pytorch",
"jax",
"roberta",
"fill-mask",
"chinese",
"classical chinese",
"literary chinese",
"ancient chinese",
"bert",
"zh",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#transformers #pytorch #jax #roberta #fill-mask #chinese #classical chinese #literary chinese #ancient chinese #bert #zh #license-apache-2.0 #autotrain_compatible #endpoints_compatible #region-us
|
GuwenBERT
=========
Model description
-----------------
!GuwenBERT
This is a RoBERTa model pre-trained on Classical Chinese. You can fine-tune GuwenBERT for downstream tasks, such as sentence breaking, punctuation, named entity recognition, and so on.
For more information about RoBERTa, take a look at RoBERTa's official repo.
How to use
----------
Training data
-------------
The training data is the Daizhige dataset (殆知阁古代文献), which contains 15,694 books in Classical Chinese, covering Buddhism, Confucianism, Medicine, History, Zi, Yi, Yizang, Shizang, Taoism, and Jizang.
76% of them are punctuated.
The total number of characters is 1.7B (1,743,337,673).
All traditional characters are converted to simplified characters.
The vocabulary is constructed from this dataset and its size is 23,292.
Training procedure
------------------
The models are initialized with 'hfl/chinese-roberta-wwm-ext-large' and then pre-trained with a 2-step strategy.
In the first step, the model learns MLM with only word embeddings updated during training, until convergence. In the second step, all parameters are updated during training.
The models are trained on 4 V100 GPUs for 120K steps (20K for step#1, 100K for step#2) with a batch size of 2,048 and a sequence length of 512. The optimizer used is Adam with a learning rate of 1e-4, adam-betas of (0.9,0.98), adam-eps of 1e-6, a weight decay of 0.01, learning rate warmup for 5K steps, and linear decay of learning rate after.
Eval results
------------
### "Gulian Cup" Ancient Books Named Entity Recognition Evaluation
Second place in the competition. Detailed test results:
About Us
--------
We are from Datahammer, Beijing Institute of Technology.
For more cooperation, please contact email: ethanyt [at] URL
>
> Created with ️ by Tan Yan 

# ai-msgbot GPT2-L + daily dialogues

_NOTE: this model card is a WIP_

GPT2-L (774M parameters) fine-tuned on the Wizard of Wikipedia dataset for 40k steps with 34/36 layers frozen using `aitextgen`. This model was then subsequently further fine-tuned on the [Daily Dialogues](http://yanran.li/dailydialog) dataset for an additional 40k steps, this time with **35** of 36 layers frozen.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test is to explicitly add `person beta` to the **end** of the prompt text. This forces the model to respond to the entered chat prompt instead of first extending the prompt and then responding to that extension (which may cut off the response text due to the Inference API limits).
### Example prompt:
```
do you like to eat beans?
person beta:
```
### Resulting output
```
do you like to eat beans?
person beta:
no, i don't like
```
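A rough sketch of querying the model directly with `transformers`, using the same prompt format as the example above (the generation settings are illustrative, not the ones used by ai-msgbot):
```python
from transformers import pipeline

chat = pipeline("text-generation", model="ethzanalytics/ai-msgbot-gpt2-L-dialogue")

prompt = "do you like to eat beans?\nperson beta:\n"
result = chat(prompt, max_new_tokens=32, do_sample=True, top_k=50, temperature=0.7)
# Keep only the newly generated reply after the prompt.
print(result[0]["generated_text"][len(prompt):])
```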
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
{}
|
ethzanalytics/ai-msgbot-gpt2-L-dialogue
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ai-msgbot GPT2-L + daily dialogues
_NOTE: this model card is a WIP_
GPT2-L (774M parameters) fine-tuned on the Wizard of Wikipedia dataset for 40k steps with 34/36 layers frozen using 'aitextgen'. This model was then subsequently further fine-tuned on the Daily Dialogues dataset for an additional 40k steps, this time with 35 of 36 layers frozen.
Designed for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
'script_speaker_name' = 'person alpha'
'script_responder_name' = 'person beta'
## examples
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding 'person beta' to the end of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).
### Example prompt:
### Resulting output
## citations
|
[
"# ai-msgbot GPT2-L + daily dialogues\n\n_NOTE: this model card is a WIP_\n\nGPT2-L (774M parameters) fine-tuned on the Wizard of Wikipedia dataset for 40k steps with 34/36 layers frozen using 'aitextgen'. This model was then subsequently further fine-tuned on the Daily Dialogues dataset for an additional 40k steps, this time with 35 of 36 layers frozen.\n\nDesigned for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise, have at it).",
"## conversation data\n\nThe dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.\n\n'script_speaker_name' = 'person alpha'\n\n'script_responder_name' = 'person beta'",
"## examples\n\n- the default inference API examples should work _okay_\n- an ideal test would be explicitly adding 'person beta' to the end of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).",
"### Example prompt:",
"### Resulting output",
"## citations"
] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ai-msgbot GPT2-L + daily dialogues\n\n_NOTE: this model card is a WIP_\n\nGPT2-L (774M parameters) fine-tuned on the Wizard of Wikipedia dataset for 40k steps with 34/36 layers frozen using 'aitextgen'. This model was then subsequently further fine-tuned on the Daily Dialogues dataset for an additional 40k steps, this time with 35 of 36 layers frozen.\n\nDesigned for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise, have at it).",
"## conversation data\n\nThe dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.\n\n'script_speaker_name' = 'person alpha'\n\n'script_responder_name' = 'person beta'",
"## examples\n\n- the default inference API examples should work _okay_\n- an ideal test would be explicitly adding 'person beta' to the end of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).",
"### Example prompt:",
"### Resulting output",
"## citations"
] |
text-generation
|
transformers
|
# ai-msgbot GPT2-L
_NOTE: model card is WIP_
GPT2-L (774M parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with 34/36 layers frozen using `aitextgen`.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding `person beta` to the **end** of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).
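To make that prompt format concrete, below is a small hypothetical helper (not from the original card) that wraps a message for `person beta` and trims the generated continuation at the next speaker tag.
```python
# Hypothetical helpers (not part of the original card) for the person alpha / person beta
# prompt convention described above.
def build_prompt(message: str) -> str:
    """Wrap a user message so the model is forced to answer as person beta."""
    return f"person alpha:\n{message}\nperson beta:\n"

def extract_reply(generated_text: str, prompt: str) -> str:
    """Keep only the newly generated text, stopping at the next 'person alpha:' turn."""
    continuation = generated_text[len(prompt):]
    return continuation.split("person alpha:")[0].strip()
```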
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
{}
|
ethzanalytics/ai-msgbot-gpt2-L
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# ai-msgbot GPT2-L
_NOTE: model card is WIP_
GPT2-L (774M parameters) trained on the Wizard of Wikipedia dataset for 40k steps with 34/36 layers frozen using 'aitextgen'.
Designed for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
'script_speaker_name' = 'person alpha'
'script_responder_name' = 'person beta'
## examples
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding 'person beta' to the end of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).
## citations
|
[
"# ai-msgbot GPT2-L\n\n_NOTE: model card is WIP_\n\nGPT2-L (774M parameters) trained on the Wizard of Wikipedia dataset for 40k steps with 34/36 layers frozen using 'aitextgen'. \n\n\nDesigned for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise have at it).",
"## conversation data\n\nThe dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. this is relevant for writing prompts and filtering/extracting text from responses.\n\n'script_speaker_name' = 'person alpha'\n\n'script_responder_name' = 'person beta'",
"## examples\n\n- the default inference API examples should work _okay_\n- an ideal test would be explicitly adding 'person beta' to the end of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).",
"## citations"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# ai-msgbot GPT2-L\n\n_NOTE: model card is WIP_\n\nGPT2-L (774M parameters) trained on the Wizard of Wikipedia dataset for 40k steps with 34/36 layers frozen using 'aitextgen'. \n\n\nDesigned for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise have at it).",
"## conversation data\n\nThe dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. this is relevant for writing prompts and filtering/extracting text from responses.\n\n'script_speaker_name' = 'person alpha'\n\n'script_responder_name' = 'person beta'",
"## examples\n\n- the default inference API examples should work _okay_\n- an ideal test would be explicitly adding 'person beta' to the end of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).",
"## citations"
] |
text-generation
|
transformers
|
# ai-msgbot GPT-2 M Conversational
A GPT-2 M 355M parameter model for usage with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create a chatbot-like tool.
This model was fine-tuned on a parsed version of [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 10,000 steps. 20/24 layers were frozen for the fine-tuning process.
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## usage
### in ai-msgbot
```
python ai_single_response.py --model GPT2_conversational_355M_WoW10k --prompt "hi! what are your hobbies?"
... generating...
finished!
'i like to read.'
```
### examples with Inference API
The model training (and the ai-msgbot scripts) "force" GPT-2 to generate text in a chat-like structure. If you want non-garbage outputs, these need to be specified manually:
```
person alpha:
hi! what are your hobbies?
```
then the model will respond, ideally with person beta: "response text"
---
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding `person beta` to the **end** of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).
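For running the checkpoint locally rather than through the Inference API or the ai-msgbot CLI, a rough sketch with `transformers` follows; the generation settings are illustrative assumptions, not values from the original setup.
```python
# Rough local-usage sketch (not part of the original card); sampling values are assumptions.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "ethzanalytics/ai-msgbot-gpt2-M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "person alpha:\nhi! what are your hobbies?\nperson beta:\n"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,  # GPT-2 has no pad token by default
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```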
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
{}
|
ethzanalytics/ai-msgbot-gpt2-M
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# ai-msgbot GPT-2 M Conversational
A GPT-2 M 355M parameter model for usage with ai-msgbot to create a chatbot-like tool.
This model was fine-tuned on a parsed version of the Wizard of Wikipedia dataset for 10,000 steps. 20/24 layers were frozen for the fine-tuning process.
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
'script_speaker_name' = 'person alpha'
'script_responder_name' = 'person beta'
## usage
### in ai-msgbot
### examples with Inference API
The model training (and the ai-msgbot scripts) "force" GPT-2 to generate text in a chat-like structure. If you want non-garbage outputs, these need to be specified manually:
then the model will respond, ideally with person beta: "response text"
---
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding 'person beta' to the end of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).
## citations
|
[
"# ai-msgbot GPT-2 M Conversational\n\nA GPT-2 M 355M parameter model for usage with ai-msgbot to create a chatbot-like tool.\n\nThis model was fine-tuned on a parsed version of the Wizard of Wikipedia dataset for 10,000 steps. 20/24 layers were frozen for the fine-tuning process.",
"## conversation data\n\nThe dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. this is relevant for writing prompts and filtering/extracting text from responses.\n\n'script_speaker_name' = 'person alpha'\n\n'script_responder_name' = 'person beta'",
"## usage",
"### in ai-msgbot",
"### examples with Inference API\nThe model training (and the ai-msgbot scripts) \"force\" GPT-2 to generate text in a chat-like structure. If you want non-garbage outputs, these need to be specified manually:\n\n\n\nthen model will respond, ideally with person beta: \"response text\"\n\n---\n\n- the default inference API examples should work _okay_\n- an ideal test would be explicitly adding 'person beta' to the end of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).",
"## citations"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# ai-msgbot GPT-2 M Conversational\n\nA GPT-2 M 355M parameter model for usage with ai-msgbot to create a chatbot-like tool.\n\nThis model was fine-tuned on a parsed version of the Wizard of Wikipedia dataset for 10,000 steps. 20/24 layers were frozen for the fine-tuning process.",
"## conversation data\n\nThe dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. this is relevant for writing prompts and filtering/extracting text from responses.\n\n'script_speaker_name' = 'person alpha'\n\n'script_responder_name' = 'person beta'",
"## usage",
"### in ai-msgbot",
"### examples with Inference API\nThe model training (and the ai-msgbot scripts) \"force\" GPT-2 to generate text in a chat-like structure. If you want non-garbage outputs, these need to be specified manually:\n\n\n\nthen model will respond, ideally with person beta: \"response text\"\n\n---\n\n- the default inference API examples should work _okay_\n- an ideal test would be explicitly adding 'person beta' to the end of the prompt text. The model is forced to respond to the entered chat prompt instead of adding to the entered prompt and then responding to that (which may cut off the response text due to the Inference API limits).",
"## citations"
] |
text-generation
|
transformers
|
# ai-msgbot: GPT2-XL-dialogue
GPT2-XL (~1.5 B parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with **33**/36 layers frozen using `aitextgen`. The resulting model was then **further fine-tuned** on the [Daily Dialogues](http://yanran.li/dailydialog) dataset for 40k steps, with **34**/36 layers frozen.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding `person beta` into the prompt text the model is forced to respond to instead of adding onto the entered prompt.
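Since GPT2-XL is roughly 1.5B parameters, a hedged sketch of loading it in half precision on a GPU is shown below; the sampling values mirror this card's widget inference settings, while the `torch_dtype` and device choices are assumptions rather than author recommendations.
```python
# Illustrative only (not part of the original card): half-precision GPU loading for the
# ~1.5B-parameter checkpoint. top_p / top_k / repetition_penalty mirror the widget settings.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

name = "ethzanalytics/ai-msgbot-gpt2-XL-dialogue"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.float16).to("cuda")

prompt = "Do you like my new haircut?\nperson beta:\n"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
outputs = model.generate(
    **inputs,
    max_new_tokens=48,
    do_sample=True,
    top_p=0.85,
    top_k=10,
    repetition_penalty=2.1,
    no_repeat_ngram_size=3,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```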
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
{"language": ["en"], "license": "mit", "tags": ["text-generation", "gpt2", "gpt"], "datasets": ["natural_questions"], "widget": [{"text": "Do you like my new haircut?\nperson beta:\n\n", "example_title": "haircut"}, {"text": "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n", "example_title": "teaching"}, {"text": "What's your favorite animal? Mine is the dog? \nperson beta:\n\n", "example_title": "favorite"}, {"text": "how much does it cost?\nperson beta:\n\n", "example_title": "money"}], "inference": {"parameters": {"min_length": 2, "max_length": 64, "length_penalty": 0.6, "no_repeat_ngram_size": 3, "do_sample": true, "top_p": 0.85, "top_k": 10, "repetition_penalty": 2.1}}}
|
ethzanalytics/ai-msgbot-gpt2-XL-dialogue
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"gpt",
"en",
"dataset:natural_questions",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #gpt #en #dataset-natural_questions #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# ai-msgbot: GPT2-XL-dialogue
GPT2-XL (~1.5 B parameters) trained on the Wizard of Wikipedia dataset for 40k steps with 33/36 layers frozen using 'aitextgen'. The resulting model was then further fine-tuned on the Daily Dialogues dataset for 40k steps, with 34/36 layers frozen.
Designed for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
'script_speaker_name' = 'person alpha'
'script_responder_name' = 'person beta'
## examples
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding 'person beta' into the prompt text the model is forced to respond to instead of adding onto the entered prompt.
## citations
|
[
"# ai-msgbot: GPT2-XL-dialogue\n\n\nGPT2-XL (~1.5 B parameters) trained on the Wizard of Wikipedia dataset for 40k steps with 33/36 layers frozen using 'aitextgen'. The resulting model was then further fine-tuned on the Daily Dialogues for 40k steps, with 34/36 layers frozen.\n\n\nDesigned for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise, have at it).",
"## conversation data\n\nThe dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.\n\n'script_speaker_name' = 'person alpha'\n\n'script_responder_name' = 'person beta'",
"## examples\n\n- the default inference API examples should work _okay_\n- an ideal test would be explicitly adding 'person beta' into the prompt text the model is forced to respond to instead of adding onto the entered prompt.",
"## citations"
] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #gpt #en #dataset-natural_questions #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# ai-msgbot: GPT2-XL-dialogue\n\n\nGPT2-XL (~1.5 B parameters) trained on the Wizard of Wikipedia dataset for 40k steps with 33/36 layers frozen using 'aitextgen'. The resulting model was then further fine-tuned on the Daily Dialogues for 40k steps, with 34/36 layers frozen.\n\n\nDesigned for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise, have at it).",
"## conversation data\n\nThe dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.\n\n'script_speaker_name' = 'person alpha'\n\n'script_responder_name' = 'person beta'",
"## examples\n\n- the default inference API examples should work _okay_\n- an ideal test would be explicitly adding 'person beta' into the prompt text the model is forced to respond to instead of adding onto the entered prompt.",
"## citations"
] |
text-generation
|
transformers
|
# ai-msgbot GPT2-XL
_NOTE: model card is WIP_
GPT2-XL (~1.5 B parameters) trained on [the Wizard of Wikipedia dataset](https://parl.ai/projects/wizard_of_wikipedia/) for 40k steps with **33**/36 layers frozen using `aitextgen`.
Designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot) to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
`script_speaker_name` = `person alpha`
`script_responder_name` = `person beta`
## examples
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding `person beta` into the prompt text the model is forced to respond to instead of adding onto the entered prompt.
### Example prompt:
```
do you like to eat beans?
person beta:
```
### Resulting output
```
do you like to eat beans?
person beta:
yes, i like fried beans.
person alpha:
i wonder when the first beans were cultivated and how they were processed.
person beta:
nitrogenic bacteria (in
```
_Note: the Inference API cuts off generation due to length, if run elsewhere you would see what comes after "(in"_
## citations
```
@inproceedings{dinan2019wizard,
author={Emily Dinan and Stephen Roller and Kurt Shuster and Angela Fan and Michael Auli and Jason Weston},
title={{W}izard of {W}ikipedia: Knowledge-powered Conversational Agents},
booktitle = {Proceedings of the International Conference on Learning Representations (ICLR)},
year={2019},
}
@inproceedings{li-etal-2017-dailydialog,
title = "{D}aily{D}ialog: A Manually Labelled Multi-turn Dialogue Dataset",
author = "Li, Yanran and
Su, Hui and
Shen, Xiaoyu and
Li, Wenjie and
Cao, Ziqiang and
Niu, Shuzi",
booktitle = "Proceedings of the Eighth International Joint Conference on Natural Language Processing (Volume 1: Long Papers)",
month = nov,
year = "2017",
address = "Taipei, Taiwan",
publisher = "Asian Federation of Natural Language Processing",
url = "https://aclanthology.org/I17-1099",
pages = "986--995",
abstract = "We develop a high-quality multi-turn dialog dataset, \textbf{DailyDialog}, which is intriguing in several aspects. The language is human-written and less noisy. The dialogues in the dataset reflect our daily communication way and cover various topics about our daily life. We also manually label the developed dataset with communication intention and emotion information. Then, we evaluate existing approaches on DailyDialog dataset and hope it benefit the research field of dialog systems. The dataset is available on \url{http://yanran.li/dailydialog}",
}
```
|
{"language": ["en"], "license": "mit", "tags": ["text-generation", "gpt2", "gpt"], "datasets": ["natural questions"], "widget": [{"text": "Do you like my new haircut?\nperson beta:\n\n", "example_title": "haircut"}, {"text": "I love to learn new things.. are you willing to teach me something?\nperson beta:\n\n", "example_title": "teaching"}, {"text": "What's your favorite animal? Mine is the dog? \nperson beta:\n\n", "example_title": "favorite"}, {"text": "how much does it cost?\nperson beta:\n\n", "example_title": "money"}], "inference": {"parameters": {"min_length": 2, "max_length": 64, "length_penalty": 0.6, "no_repeat_ngram_size": 3, "do_sample": true, "top_p": 0.85, "top_k": 10, "repetition_penalty": 2.1}}}
|
ethzanalytics/ai-msgbot-gpt2-XL
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"gpt",
"en",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#transformers #pytorch #gpt2 #text-generation #gpt #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# ai-msgbot GPT2-XL
_NOTE: model card is WIP_
GPT2-XL (~1.5 B parameters) trained on the Wizard of Wikipedia dataset for 40k steps with 33/36 layers frozen using 'aitextgen'.
Designed for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise, have at it).
## conversation data
The dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.
'script_speaker_name' = 'person alpha'
'script_responder_name' = 'person beta'
## examples
- the default inference API examples should work _okay_
- an ideal test would be explicitly adding 'person beta' into the prompt text the model is forced to respond to instead of adding onto the entered prompt.
### Example prompt:
### Resulting output
_Note: the Inference API cuts off generation due to length, if run elsewhere you would see what comes after "(in"_
## citations
|
[
"# ai-msgbot GPT2-XL\n\n_NOTE: model card is WIP_\n\nGPT2-XL (~1.5 B parameters) trained on the Wizard of Wikipedia dataset for 40k steps with 33/36 layers frozen using 'aitextgen'. \n\n\nDesigned for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise, have at it).",
"## conversation data\n\nThe dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.\n\n'script_speaker_name' = 'person alpha'\n\n'script_responder_name' = 'person beta'",
"## examples\n\n- the default inference API examples should work _okay_\n- an ideal test would be explicitly adding 'person beta' into the prompt text the model is forced to respond to instead of adding onto the entered prompt.",
"### Example prompt:",
"### Resulting output\n\n\n\n_Note: the Inference API cuts off generation due to length, if run elsewhere you would see what comes after \"(in\"_",
"## citations"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #gpt #en #license-mit #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# ai-msgbot GPT2-XL\n\n_NOTE: model card is WIP_\n\nGPT2-XL (~1.5 B parameters) trained on the Wizard of Wikipedia dataset for 40k steps with 33/36 layers frozen using 'aitextgen'. \n\n\nDesigned for use with ai-msgbot to create an open-ended chatbot (of course, if other use cases arise, have at it).",
"## conversation data\n\nThe dataset was tokenized and fed to the model as a conversation between two speakers, whose names are below. This is relevant for writing prompts and filtering/extracting text from responses.\n\n'script_speaker_name' = 'person alpha'\n\n'script_responder_name' = 'person beta'",
"## examples\n\n- the default inference API examples should work _okay_\n- an ideal test would be explicitly adding 'person beta' into the prompt text the model is forced to respond to instead of adding onto the entered prompt.",
"### Example prompt:",
"### Resulting output\n\n\n\n_Note: the Inference API cuts off generation due to length, if run elsewhere you would see what comes after \"(in\"_",
"## citations"
] |
text-generation
|
transformers
|
# distilgpt2-tiny-conversational
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a parsed version of Wizard of Wikipedia, using the persona alpha/beta framework designed for use with [ai-msgbot](https://github.com/pszemraj/ai-msgbot).
It achieves the following results on the evaluation set:
- Loss: 2.2461
## Model description
- a basic dialogue model for conversation. It can be used as a chatbot.
- check out a [simple demo here](https://huggingface.co/spaces/ethzanalytics/dialogue-demo)
## Intended uses & limitations
- usage is designed for integrating with this repo: [ai-msgbot](https://github.com/pszemraj/ai-msgbot)
- the main specific information to know is that the model generates whole conversations between two entities, `person alpha` and `person beta`. These entity names function as custom `<bos>`-style markers used to detect where one response ends and the next begins (see the sketch after this list).
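Below is a minimal multi-turn sketch (not part of the original card) that keeps a running transcript in this format and trims each reply at the next speaker tag; the sampling values mirror the widget settings in this card's metadata, everything else is illustrative.
```python
# Minimal chat-loop sketch (not from the original card). Sampling values mirror the widget
# inference settings in this card's metadata; the loop structure itself is an assumption.
from transformers import pipeline

chat = pipeline("text-generation", model="ethzanalytics/distilgpt2-tiny-conversational")
history = ""

for user_msg in ["Hi, how are you?", "Have you done anything exciting lately?"]:
    history += f"person alpha:\n{user_msg}\nperson beta:\n"
    out = chat(
        history,
        max_new_tokens=40,
        do_sample=True,
        temperature=0.3,
        top_k=20,
        top_p=0.95,
        repetition_penalty=3.5,
        no_repeat_ngram_size=2,
    )
    # the pipeline returns prompt + continuation by default; keep only person beta's turn
    reply = out[0]["generated_text"][len(history):].split("person alpha:")[0].strip()
    print("person beta:", reply)
    history += reply + "\n"
```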
## Training and evaluation data
- [wizard of Wikipedia](https://parl.ai/projects/wizard_of_wikipedia/) parsed, from parlAI
## Training procedure
- deepspeed + huggingface trainer, an example notebook is in [ai-msgbot](https://github.com/pszemraj/ai-msgbot)
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.05
- num_epochs: 30
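For reference, these settings map onto `transformers` `TrainingArguments` roughly as sketched below; this is a reconstruction for illustration, not the authors' actual training script (which also ran multi-GPU with DeepSpeed).
```python
# Illustrative reconstruction (not the original training script) of the hyperparameters above.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="./distilgpt2-tiny-conversational",  # assumed output path
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    gradient_accumulation_steps=4,  # 32 x 4 accumulation -> total train batch size of 128
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,
    num_train_epochs=30,
    # the original run used DeepSpeed on multiple GPUs; a config can be passed via `deepspeed=`
)
```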
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| No log | 1.0 | 418 | 2.7793 |
| 2.9952 | 2.0 | 836 | 2.6914 |
| 2.7684 | 3.0 | 1254 | 2.6348 |
| 2.685 | 4.0 | 1672 | 2.5938 |
| 2.6243 | 5.0 | 2090 | 2.5625 |
| 2.5816 | 6.0 | 2508 | 2.5332 |
| 2.5816 | 7.0 | 2926 | 2.5098 |
| 2.545 | 8.0 | 3344 | 2.4902 |
| 2.5083 | 9.0 | 3762 | 2.4707 |
| 2.4793 | 10.0 | 4180 | 2.4551 |
| 2.4531 | 11.0 | 4598 | 2.4395 |
| 2.4269 | 12.0 | 5016 | 2.4238 |
| 2.4269 | 13.0 | 5434 | 2.4102 |
| 2.4051 | 14.0 | 5852 | 2.3945 |
| 2.3777 | 15.0 | 6270 | 2.3848 |
| 2.3603 | 16.0 | 6688 | 2.3711 |
| 2.3394 | 17.0 | 7106 | 2.3613 |
| 2.3206 | 18.0 | 7524 | 2.3516 |
| 2.3206 | 19.0 | 7942 | 2.3398 |
| 2.3026 | 20.0 | 8360 | 2.3301 |
| 2.2823 | 21.0 | 8778 | 2.3203 |
| 2.2669 | 22.0 | 9196 | 2.3105 |
| 2.2493 | 23.0 | 9614 | 2.3027 |
| 2.2334 | 24.0 | 10032 | 2.2930 |
| 2.2334 | 25.0 | 10450 | 2.2852 |
| 2.2194 | 26.0 | 10868 | 2.2754 |
| 2.2014 | 27.0 | 11286 | 2.2695 |
| 2.1868 | 28.0 | 11704 | 2.2598 |
| 2.171 | 29.0 | 12122 | 2.2539 |
| 2.1597 | 30.0 | 12540 | 2.2461 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["text-generation", "chatbot", "dialogue", "distilgpt2", "gpt2", "ai-msgbot"], "widget": [{"text": "I know you're tired, but can we go for another walk this evening?\nperson beta:\n\n", "example_title": "walk"}, {"text": "Have you done anything exciting lately?\nperson beta:\n\n", "example_title": "activities"}, {"text": "hey - do you have a favorite grocery store around here?\nperson beta:\n\n", "example_title": "grocery"}, {"text": "Can you take me for dinner somewhere nice this time?\nperson beta:\n\n", "example_title": "dinner"}, {"text": "What's your favorite form of social media?\nperson beta:\n\n", "example_title": "social media"}, {"text": "Hi, how are you?\nperson beta:\n\n", "example_title": "greeting"}, {"text": "I am the best; my sister is the worst. What am I?\nperson beta:\n\n", "example_title": "sister"}, {"text": "What do you call an alligator who's just had surgery to remove his left arm?\nperson beta:\n\n", "example_title": "alligator"}, {"text": "A man walks into a bar and asks for a drink. The bartender asks for $10, and he pays him $1. What did he pay him with?\nperson beta:\n\n", "example_title": "dollar"}, {"text": "What did I say was in the mailbox when it was actually in the cabinet?\nperson beta:\n\n", "example_title": "mailbox"}, {"text": "My friend says that she knows every language, but she doesn't speak any of them.. what's wrong with her?\nperson beta:\n\n", "example_title": "language"}], "inference": {"parameters": {"min_length": 2, "max_length": 64, "length_penalty": 0.7, "no_repeat_ngram_size": 2, "do_sample": true, "top_p": 0.95, "top_k": 20, "temperature": 0.3, "repetition_penalty": 3.5}}}
|
ethzanalytics/distilgpt2-tiny-conversational
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"chatbot",
"dialogue",
"distilgpt2",
"ai-msgbot",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #chatbot #dialogue #distilgpt2 #ai-msgbot #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
distilgpt2-tiny-conversational
==============================
This model is a fine-tuned version of distilgpt2 on a parsed version of Wizard of Wikipedia, using the persona alpha/beta framework designed for use with ai-msgbot.
It achieves the following results on the evaluation set:
* Loss: 2.2461
Model description
-----------------
* a basic dialogue model for conversation. It can be used as a chatbot.
* check out a simple demo here
Intended uses & limitations
---------------------------
* usage is designed for integrating with this repo: ai-msgbot
* the main specific information to know is that the model generates whole conversations between two entities, 'person alpha' and 'person beta'. These entity names function as custom '<bos>'-style markers used to detect where one response ends and the next begins.
Training and evaluation data
----------------------------
* wizard of Wikipedia parsed, from parlAI
Training procedure
------------------
* deepspeed + huggingface trainer, an example notebook is in ai-msgbot
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* distributed\_type: multi-GPU
* gradient\_accumulation\_steps: 4
* total\_train\_batch\_size: 128
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: cosine
* lr\_scheduler\_warmup\_ratio: 0.05
* num\_epochs: 30
### Training results
### Framework versions
* Transformers 4.16.1
* Pytorch 1.10.0+cu111
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #chatbot #dialogue #distilgpt2 #ai-msgbot #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* distributed\\_type: multi-GPU\n* gradient\\_accumulation\\_steps: 4\n* total\\_train\\_batch\\_size: 128\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: cosine\n* lr\\_scheduler\\_warmup\\_ratio: 0.05\n* num\\_epochs: 30",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.1\n* Pytorch 1.10.0+cu111\n* Tokenizers 0.11.0"
] |
text-generation
|
transformers
|
#blabla
|
{"tags": ["conversational"]}
|
ethzhou/newJooby
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
#blabla
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
null |
transformers
|
# Attention in Attention Network for Image Super-Resolution (A2N)
A2N model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Attention in Attention Network for Image Super-Resolution](https://arxiv.org/abs/2104.09497) by Chen et al. (2021) and first released in [this repository](https://github.com/haoyuc/A2N).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and model upscaling x2.

## Model description
The A2N model proposes an attention in attention network (A2N) for highly accurate image SR. Specifically, the A2N consists of a non-attention branch and a coupling attention branch. Attention dropout module is proposed to generate dynamic attention weights for these two branches based on input features that can suppress unwanted attention adjustments. This allows attention modules to specialize to beneficial examples without otherwise penalties and thus greatly improve the capacity of the attention network with little parameter overhead.
More importantly, the model is lightweight and fast to train (~1.5m parameters, ~4mb).
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import A2nModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = A2nModel.from_pretrained('eugenesiow/a2n', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, A2nModel, A2nConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = A2nConfig(
scale=4, # train a model to upscale 4x
)
model = A2nModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are reported as `PSNR/SSIM` and are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |A2N |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**37.87/0.9602** |
|Set5 |3x |30.39/0.8678 |**34.8/0.9387** |
|Set5 |4x |28.42/0.8101 |**32.07/0.8933** |
|Set14 |2x |30.22/0.8683 |**33.45/0.9162** |
|Set14 |3x |27.53/0.7737 |**30.94/0.8568** |
|Set14 |4x |25.99/0.7023 |**28.56/0.7801** |
|BSD100 |2x |29.55/0.8425 |**32.11/0.8987** |
|BSD100 |3x |27.20/0.7382 |**29.56/0.8173** |
|BSD100 |4x |25.96/0.6672 |**27.54/0.7342** |
|Urban100 |2x |26.66/0.8408 |**31.71/0.9240** |
|Urban100 |3x | |**28.95/0.8671** |
|Urban100 |4x |23.14/0.6573 |**25.89/0.7787** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{chen2021attention,
title={Attention in Attention Network for Image Super-Resolution},
author={Haoyu Chen and Jinjin Gu and Zhi Zhang},
year={2021},
eprint={2104.09497},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/a2n
| null |
[
"transformers",
"A2N",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:2104.09497",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.09497",
"2104.07566"
] |
[] |
TAGS
#transformers #A2N #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2104.09497 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Attention in Attention Network for Image Super-Resolution (A2N)
===============================================================
A2N model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Attention in Attention Network for Image Super-Resolution by Chen et al. (2021) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and model upscaling x2.
!Comparing Bicubic upscaling against the models x2 upscaling on Set5 Image 4
Model description
-----------------
The A2N model proposes an attention in attention network (A2N) for highly accurate image SR. Specifically, the A2N consists of a non-attention branch and a coupling attention branch. Attention dropout module is proposed to generate dynamic attention weights for these two branches based on input features that can suppress unwanted attention adjustments. This allows attention modules to specialize to beneficial examples without otherwise penalties and thus greatly improve the capacity of the attention network with little parameter overhead.
More importantly, the model is lightweight and fast to train (~1.5m parameters, ~4mb).
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are reported as 'PSNR/SSIM' and are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x2 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x2 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #A2N #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2104.09497 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x2 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null | null |
# AniCharaGAN: Anime Character Generation with StyleGAN2
[](https://github.com/eugenesiow/practical-ml)
This model uses the awesome lucidrains’s [stylegan2-pytorch](https://github.com/lucidrains/stylegan2-pytorch) library to train a model on a private anime character dataset to generate full-body 256x256 female anime characters.
Here are some samples:

## Model description
The model generates 256x256, square, white background, full-body anime characters. It is trained using [stylegan2-pytorch](https://github.com/lucidrains/stylegan2-pytorch). It is trained to 150 epochs.
## Intended uses & limitations
You can use the model for generating anime characters and then use a super resolution library like [super_image](https://github.com/eugenesiow/super-image) to upscale.
### How to use
[](https://colab.research.google.com/github/eugenesiow/practical-ml/blob/master/notebooks/Anime_Character_Generation_with_StyleGAN2.ipynb "Open in Colab")
Install the dependencies:
```bash
pip install -q stylegan2_pytorch==1.5.10
```
Here is how to generate images:
```python
import torch
from torchvision.utils import save_image
from stylegan2_pytorch import ModelLoader
from pathlib import Path
Path('./models/ani-chara-gan/').mkdir(parents=True, exist_ok=True)
torch.hub.download_url_to_file('https://huggingface.co/eugenesiow/ani-chara-gan/resolve/main/model.pt',
'./models/ani-chara-gan/model_150.pt')
torch.hub.download_url_to_file('https://huggingface.co/eugenesiow/ani-chara-gan/resolve/main/.config.json',
'./models/ani-chara-gan/.config.json')
loader = ModelLoader(
base_dir = './', name = 'ani-chara-gan'
)
noise = torch.randn(1, 256).cuda() # noise
styles = loader.noise_to_styles(noise, trunc_psi = 0.7) # pass through mapping network
images = loader.styles_to_images(styles) # call the generator on intermediate style vectors
save_image(images, './sample.jpg')
```
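As a follow-on to the upscaling suggestion above, here is a hedged sketch (not part of the original card) that feeds the saved sample through `super_image`; the choice of the `eugenesiow/a2n` x2 checkpoint is an assumption.
```python
# Hypothetical follow-up (not in the original card): upscale the generated 256x256 sample
# with the super_image library suggested above. The a2n x2 model is an assumed choice.
from PIL import Image
from super_image import A2nModel, ImageLoader

model = A2nModel.from_pretrained('eugenesiow/a2n', scale=2)
inputs = ImageLoader.load_image(Image.open('./sample.jpg'))
preds = model(inputs)
ImageLoader.save_image(preds, './sample_2x.png')
```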
## BibTeX entry and citation info
The model is part of the [practical-ml](https://github.com/eugenesiow/practical-ml) repository.
[](https://github.com/eugenesiow/practical-ml)
|
{"license": "apache-2.0", "tags": ["stylegan2", "image-generation"]}
|
eugenesiow/ani-chara-gan
| null |
[
"stylegan2",
"image-generation",
"license:apache-2.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#stylegan2 #image-generation #license-apache-2.0 #region-us
|
# AniCharaGAN: Anime Character Generation with StyleGAN2

Install the dependencies:
Here is how to generate images:
## BibTeX entry and citation info
The model is part of the practical-ml repository.
\n\nInstall the dependencies:\n\nHere is how to generate images:",
"## BibTeX entry and citation info\n\nThe model is part of the practical-ml repository.\n\n\n\nInstall the dependencies:\n\nHere is how to generate images:",
"## BibTeX entry and citation info\n\nThe model is part of the practical-ml repository.\n\n
AWSRN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Lightweight Image Super-Resolution with Adaptive Weighted Learning Network](https://arxiv.org/abs/1904.02358) by Wang et al. (2019) and first released in [this repository](https://github.com/ChaofWang/AWSRN).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
Deep learning has been successfully applied to the single-image super-resolution (SISR) task with great performance in recent years. However, most convolutional neural network based SR models require heavy computation, which limit their real-world applications. In this work, a lightweight SR network, named Adaptive Weighted Super-Resolution Network (AWSRN), is proposed for SISR to address this issue. A novel local fusion block (LFB) is designed in AWSRN for efficient residual learning, which consists of stacked adaptive weighted residual units (AWRU) and a local residual fusion unit (LRFU). Moreover, an adaptive weighted multi-scale (AWMS) module is proposed to make full use of features in reconstruction layer. AWMS consists of several different scale convolutions, and the redundancy scale branch can be removed according to the contribution of adaptive weights in AWMS for lightweight network. The experimental results on the commonly used datasets show that the proposed lightweight AWSRN achieves superior performance on ×2, ×3, ×4, and ×8 scale factors to state-of-the-art methods with similar parameters and computational overhead.
This model also applies the balanced attention (BAM) method invented by [Wang et al. (2021)](https://arxiv.org/abs/2104.07566) to further improve the results.
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import AwsrnModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = AwsrnModel.from_pretrained('eugenesiow/awsrn-bam', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
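As a quick sanity check, the dataset can be inspected directly with the `datasets` library; a small sketch (the `bicubic_x4` configuration is the one used in the preprocessing code below):
```python
from datasets import load_dataset

# peek at the dev set (images numbered 801 to 900)
div2k_val = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')
print(len(div2k_val))          # 100 validation images
print(div2k_val.column_names)  # columns referencing the HR images and their bicubic LR counterparts
```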
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
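As an illustration of the LR generation step (a sketch with a placeholder filename, not the actual pipeline code), an HR image can be reduced x4 with bicubic interpolation using Pillow:
```python
from PIL import Image

hr = Image.open('hr_image.png')                                           # a High Resolution (HR) image (placeholder path)
lr = hr.resize((hr.width // 4, hr.height // 4), resample=Image.BICUBIC)   # bicubic downscaling by x4
lr.save('lr_image_x4.png')
```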
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, AwsrnModel, AwsrnConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = AwsrnConfig(
scale=4, # train a model to upscale 4x
bam=True, # apply balanced attention to the network
)
model = AwsrnModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are represented as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |awsrn-bam |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**37.99/0.9606** |
|Set5 |3x |30.39/0.8678 |**35.05/0.9403** |
|Set5 |4x |28.42/0.8101 |**32.13/0.8947** |
|Set14 |2x |30.22/0.8683 |**33.66/0.918** |
|Set14 |3x |27.53/0.7737 |**31.01/0.8581** |
|Set14 |4x |25.99/0.7023 |**28.75/0.7851** |
|BSD100 |2x |29.55/0.8425 |**33.76/0.9253** |
|BSD100 |3x |27.20/0.7382 |**29.63/0.8188** |
|BSD100 |4x |25.96/0.6672 |**28.51/0.7647** |
|Urban100 |2x |26.66/0.8408 |**31.95/0.9265** |
|Urban100 |3x | |**29.14/0.871** |
|Urban100 |4x |23.14/0.6573 |**26.03/0.7838** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{wang2021bam,
title={BAM: A Lightweight and Efficient Balanced Attention Mechanism for Single Image Super Resolution},
author={Fanyi Wang and Haotian Hu and Cheng Shen},
year={2021},
eprint={2104.07566},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
```bibtex
@article{wang2019lightweight,
title={Lightweight Image Super-Resolution with Adaptive Weighted Learning Network},
author={Wang, Chaofeng and Li, Zhen and Shi, Jun},
journal={arXiv preprint arXiv:1904.02358},
    year={2019}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/awsrn-bam
| null |
[
"transformers",
"AWSRN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:1904.02358",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.02358",
"2104.07566"
] |
[] |
TAGS
#transformers #AWSRN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1904.02358 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Lightweight Image Super-Resolution with Adaptive Weighted Learning Network (AWSRN)
==================================================================================
AWSRN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Lightweight Image Super-Resolution with Adaptive Weighted Learning Network by Wang et al. (2019) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
Deep learning has been successfully applied to the single-image super-resolution (SISR) task with great performance in recent years. However, most convolutional neural network based SR models require heavy computation, which limit their real-world applications. In this work, a lightweight SR network, named Adaptive Weighted Super-Resolution Network (AWSRN), is proposed for SISR to address this issue. A novel local fusion block (LFB) is designed in AWSRN for efficient residual learning, which consists of stacked adaptive weighted residual units (AWRU) and a local residual fusion unit (LRFU). Moreover, an adaptive weighted multi-scale (AWMS) module is proposed to make full use of features in reconstruction layer. AWMS consists of several different scale convolutions, and the redundancy scale branch can be removed according to the contribution of adaptive weights in AWMS for lightweight network. The experimental results on the commonly used datasets show that the proposed lightweight AWSRN achieves superior performance on ×2, ×3, ×4, and ×8 scale factors to state-of-the-art methods with similar parameters and computational overhead.
This model also applies the balanced attention (BAM) method invented by Wang et al. (2021) to further improve the results.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #AWSRN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1904.02358 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
text2text-generation
|
transformers
|
# BART Paraphrase Model (Large)
A large BART seq2seq (text2text generation) model fine-tuned on 3 paraphrase datasets.
## Model description
The BART model was proposed in [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. (2019).
- Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).
- The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.
- BART is particularly effective when fine-tuned for text generation. This model is fine-tuned on 3 paraphrase datasets (Quora, PAWS and MSR paraphrase corpus).
The original BART code is from this [repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).
## Intended uses & limitations
You can use the pre-trained model for paraphrasing an input sentence.
### How to use
```python
import torch
from transformers import BartForConditionalGeneration, BartTokenizer
input_sentence = "They were there to enjoy us and they were there to pray for us."
model = BartForConditionalGeneration.from_pretrained('eugenesiow/bart-paraphrase')
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)
tokenizer = BartTokenizer.from_pretrained('eugenesiow/bart-paraphrase')
batch = tokenizer(input_sentence, return_tensors='pt').to(device)  # move the inputs to the same device as the model
generated_ids = model.generate(batch['input_ids'])
generated_sentence = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(generated_sentence)
```
### Output
```
['They were there to enjoy us and to pray for us.']
```
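To obtain several candidate paraphrases rather than a single one, the standard beam-search arguments of `generate` can be used. This sketch reuses `model`, `tokenizer` and `batch` from the snippet above; the beam settings are illustrative:
```python
generated_ids = model.generate(
    batch['input_ids'],
    num_beams=5,             # beam search width
    num_return_sequences=5,  # return the 5 best candidates (must be <= num_beams)
    max_length=64,
)
candidates = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
print(candidates)
```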
## Training data
The model was fine-tuned on a pretrained [`facebook/bart-large`](https://huggingface.co/facebook/bart-large), using the [Quora](https://huggingface.co/datasets/quora), [PAWS](https://huggingface.co/datasets/paws) and [MSR paraphrase corpus](https://www.microsoft.com/en-us/download/details.aspx?id=52398).
## Training procedure
We follow the training procedure provided in the [simpletransformers](https://github.com/ThilinaRajapakse/simpletransformers) seq2seq [example](https://github.com/ThilinaRajapakse/simpletransformers/blob/master/examples/seq2seq/paraphrasing/train.py).
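A minimal fine-tuning sketch along the lines of that example is shown below. The single-row DataFrame is a placeholder and the arguments are illustrative, not the exact settings used for this model:
```python
import pandas as pd
from simpletransformers.seq2seq import Seq2SeqModel, Seq2SeqArgs

# toy training data: each row is an (input_text, target_text) paraphrase pair
train_df = pd.DataFrame(
    [["They were there to enjoy us and they were there to pray for us.",
      "They were there to enjoy us and to pray for us."]],
    columns=["input_text", "target_text"],
)

model_args = Seq2SeqArgs(num_train_epochs=2, overwrite_output_dir=True)
model = Seq2SeqModel(
    encoder_decoder_type="bart",
    encoder_decoder_name="facebook/bart-large",
    args=model_args,
)
model.train_model(train_df)
```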
## BibTeX entry and citation info
```bibtex
@misc{lewis2019bart,
title={BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension},
author={Mike Lewis and Yinhan Liu and Naman Goyal and Marjan Ghazvininejad and Abdelrahman Mohamed and Omer Levy and Ves Stoyanov and Luke Zettlemoyer},
year={2019},
eprint={1910.13461},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "apache-2.0", "tags": ["transformers", "bart", "paraphrase", "seq2seq"], "datasets": ["quora", "paws"]}
|
eugenesiow/bart-paraphrase
| null |
[
"transformers",
"pytorch",
"safetensors",
"bart",
"text2text-generation",
"paraphrase",
"seq2seq",
"en",
"dataset:quora",
"dataset:paws",
"arxiv:1910.13461",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1910.13461"
] |
[
"en"
] |
TAGS
#transformers #pytorch #safetensors #bart #text2text-generation #paraphrase #seq2seq #en #dataset-quora #dataset-paws #arxiv-1910.13461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
# BART Paraphrase Model (Large)
A large BART seq2seq (text2text generation) model fine-tuned on 3 paraphrase datasets.
## Model description
The BART model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. (2019).
- Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).
- The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.
- BART is particularly effective when fine tuned for text generation. This model is fine-tuned on 3 paraphrase datasets (Quora, PAWS and MSR paraphrase corpus).
The original BART code is from this repository.
## Intended uses & limitations
You can use the pre-trained model for paraphrasing an input sentence.
### How to use
### Output
## Training data
The model was fine-tuned on a pretrained 'facebook/bart-large', using the Quora, PAWS and MSR paraphrase corpus.
## Training procedure
We follow the training procedure provided in the simpletransformers seq2seq example.
## BibTeX entry and citation info
|
[
"# BART Paraphrase Model (Large)\nA large BART seq2seq (text2text generation) model fine-tuned on 3 paraphrase datasets.",
"## Model description\nThe BART model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. (2019).\n\n- Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).\n- The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.\n- BART is particularly effective when fine tuned for text generation. This model is fine-tuned on 3 paraphrase datasets (Quora, PAWS and MSR paraphrase corpus).\n\nThe original BART code is from this repository.",
"## Intended uses & limitations\nYou can use the pre-trained model for paraphrasing an input sentence.",
"### How to use",
"### Output",
"## Training data\nThe model was fine-tuned on a pretrained 'facebook/bart-large', using the Quora, PAWS and MSR paraphrase corpus.",
"## Training procedure\n\nWe follow the training procedure provided in the simpletransformers seq2seq example.",
"## BibTeX entry and citation info"
] |
[
"TAGS\n#transformers #pytorch #safetensors #bart #text2text-generation #paraphrase #seq2seq #en #dataset-quora #dataset-paws #arxiv-1910.13461 #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"# BART Paraphrase Model (Large)\nA large BART seq2seq (text2text generation) model fine-tuned on 3 paraphrase datasets.",
"## Model description\nThe BART model was proposed in BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension by Lewis et al. (2019).\n\n- Bart uses a standard seq2seq/machine translation architecture with a bidirectional encoder (like BERT) and a left-to-right decoder (like GPT).\n- The pretraining task involves randomly shuffling the order of the original sentences and a novel in-filling scheme, where spans of text are replaced with a single mask token.\n- BART is particularly effective when fine tuned for text generation. This model is fine-tuned on 3 paraphrase datasets (Quora, PAWS and MSR paraphrase corpus).\n\nThe original BART code is from this repository.",
"## Intended uses & limitations\nYou can use the pre-trained model for paraphrasing an input sentence.",
"### How to use",
"### Output",
"## Training data\nThe model was fine-tuned on a pretrained 'facebook/bart-large', using the Quora, PAWS and MSR paraphrase corpus.",
"## Training procedure\n\nWe follow the training procedure provided in the simpletransformers seq2seq example.",
"## BibTeX entry and citation info"
] |
null |
transformers
|
# Cascading Residual Network (CARN)
CARN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network](https://arxiv.org/abs/1803.08664) by Ahn et al. (2018) and first released in [this repository](https://github.com/nmhkahn/CARN-pytorch).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
The CARN model proposes an architecture that implements a cascading mechanism upon a residual network for accurate and lightweight image super-resolution.
This model also applies the balanced attention (BAM) method invented by [Wang et al. (2021)](https://arxiv.org/abs/2104.07566) to further improve the results.
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import CarnModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = CarnModel.from_pretrained('eugenesiow/carn-bam', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, CarnModel, CarnConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = CarnConfig(
scale=4, # train a model to upscale 4x
bam=True, # apply balanced attention to the network
)
model = CarnModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are represented as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |carn-bam |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**37.83/0.96** |
|Set5 |3x |30.39/0.8678 |**34.82/0.9385** |
|Set5 |4x |28.42/0.8101 |**32.0/0.8923** |
|Set14 |2x |30.22/0.8683 |**33.51/0.9166** |
|Set14 |3x |27.53/0.7737 |**30.9/0.8558** |
|Set14 |4x |25.99/0.7023 |**28.62/0.7822** |
|BSD100 |2x |29.55/0.8425 |**33.64/0.924** |
|BSD100 |3x |27.20/0.7382 |**29.54/0.8166** |
|BSD100 |4x |25.96/0.6672 |**28.41/0.7614** |
|Urban100 |2x |26.66/0.8408 |**31.53/0.922** |
|Urban100 |3x | |**28.84/0.8648** |
|Urban100 |4x |23.14/0.6573 |**25.77/0.7741** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{wang2021bam,
title={BAM: A Lightweight and Efficient Balanced Attention Mechanism for Single Image Super Resolution},
author={Fanyi Wang and Haotian Hu and Cheng Shen},
year={2021},
eprint={2104.07566},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
```bibtex
@article{ahn2018fast,
title={Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network},
author={Ahn, Namhyuk and Kang, Byungkon and Sohn, Kyung-Ah},
journal={arXiv preprint arXiv:1803.08664},
year={2018}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/carn-bam
| null |
[
"transformers",
"CARN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:1803.08664",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1803.08664",
"2104.07566"
] |
[] |
TAGS
#transformers #CARN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1803.08664 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Cascading Residual Network (CARN)
=================================
CARN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network by Ahn et al. (2018) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
The CARN model proposes an architecture that implements a cascading mechanism upon a residual network for accurate and lightweight image super-resolution.
This model also applies the balanced attention (BAM) method invented by Wang et al. (2021) to further improve the results.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #CARN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1803.08664 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Cascading Residual Network (CARN)
CARN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network](https://arxiv.org/abs/1803.08664) by Ahn et al. (2018) and first released in [this repository](https://github.com/nmhkahn/CARN-pytorch).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
The CARN model proposes an architecture that implements a cascading mechanism upon a residual network for accurate and lightweight image super-resolution.
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import CarnModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = CarnModel.from_pretrained('eugenesiow/carn', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, CarnModel, CarnConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = CarnConfig(
scale=4, # train a model to upscale 4x
bam=True, # apply balanced attention to the network
)
model = CarnModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are represented as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |carn |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**37.89/0.9602** |
|Set5 |3x |30.39/0.8678 |**34.88/0.9391** |
|Set5 |4x |28.42/0.8101 |**32.05/0.8931** |
|Set14 |2x |30.22/0.8683 |**33.53/0.9173** |
|Set14 |3x |27.53/0.7737 |**30.93/0.8566** |
|Set14 |4x |25.99/0.7023 |**28.67/0.7828** |
|BSD100 |2x |29.55/0.8425 |**33.66/0.9242** |
|BSD100 |3x |27.20/0.7382 |**29.56/0.8173** |
|BSD100 |4x |25.96/0.6672 |**28.44/0.7625** |
|Urban100 |2x |26.66/0.8408 |**31.62/0.9229** |
|Urban100 |3x | |**28.95/0.867** |
|Urban100 |4x |23.14/0.6573 |**25.85/0.7768** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@article{ahn2018fast,
title={Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network},
author={Ahn, Namhyuk and Kang, Byungkon and Sohn, Kyung-Ah},
journal={arXiv preprint arXiv:1803.08664},
year={2018}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/carn
| null |
[
"transformers",
"CARN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:1803.08664",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1803.08664",
"2104.07566"
] |
[] |
TAGS
#transformers #CARN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1803.08664 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Cascading Residual Network (CARN)
=================================
CARN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Fast, Accurate, and Lightweight Super-Resolution with Cascading Residual Network by Ahn et al. (2018) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
The CARN model proposes an architecture that implements a cascading mechanism upon a residual network for accurate and lightweight image super-resolution.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #CARN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1803.08664 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Densely Residual Laplacian Super-Resolution (DRLN)
DRLN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Densely Residual Laplacian Super-resolution](https://arxiv.org/abs/1906.12021) by Anwar et al. (2020) and first released in [this repository](https://github.com/saeed-anwar/DRLN).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm namely, Densely Residual Laplacian Network (DRLN). The proposed network employs cascading residual on the residual structure to allow the flow of low-frequency information to focus on learning high and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual blocks settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features to learn the inter and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
This model also applies the balanced attention (BAM) method invented by [Wang et al. (2021)](https://arxiv.org/abs/2104.07566) to further improve the results.
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import DrlnModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = DrlnModel.from_pretrained('eugenesiow/drln-bam', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
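As an illustration of the LR generation step, the sketch below reduces a single HR image with bicubic interpolation using Pillow. The file names and scale are placeholders; in practice the DIV2K pairs loaded below already provide the bicubic LR images.

```python
from PIL import Image

scale = 4
hr = Image.open('hr_image.png')                        # hypothetical HR image
# bicubic downscale by the chosen factor to create the LR counterpart
lr = hr.resize((hr.width // scale, hr.height // scale), resample=Image.BICUBIC)
lr.save('lr_image_x4.png')                             # LR input for training/evaluation
```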
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, DrlnModel, DrlnConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = DrlnConfig(
scale=4, # train a model to upscale 4x
bam=True, # apply balanced attention to the network
)
model = DrlnModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
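For reference, both metrics can be computed with scikit-image as in the sketch below. The file names are placeholders, and reported numbers may differ slightly depending on the colour space (e.g. Y channel only) and border-cropping conventions used.

```python
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

hr = np.array(Image.open('hr_image.png'))       # hypothetical ground-truth image
sr = np.array(Image.open('scaled_2x.png'))      # model output of the same size

psnr = peak_signal_noise_ratio(hr, sr, data_range=255)
# channel_axis requires scikit-image >= 0.19; older versions use multichannel=True
ssim = structural_similarity(hr, sr, data_range=255, channel_axis=-1)
print(f'PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}')
```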
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |drln-bam |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**38.23/0.9614** |
|Set5 |3x |30.39/0.8678 |**35.3/0.9422** |
|Set5 |4x |28.42/0.8101 |**32.49/0.8986** |
|Set14 |2x |30.22/0.8683 |**33.95/0.9206** |
|Set14 |3x |27.53/0.7737 |**31.27/0.8624** |
|Set14 |4x |25.99/0.7023 |**28.94/0.7899** |
|BSD100 |2x |29.55/0.8425 |**33.95/0.9269** |
|BSD100 |3x |27.20/0.7382 |**29.78/0.8224** |
|BSD100 |4x |25.96/0.6672 |**28.63/0.7686** |
|Urban100 |2x |26.66/0.8408 |**32.81/0.9339** |
|Urban100 |3x | |**29.82/0.8828** |
|Urban100 |4x |23.14/0.6573 |**26.53/0.7991** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{wang2021bam,
title={BAM: A Lightweight and Efficient Balanced Attention Mechanism for Single Image Super Resolution},
author={Fanyi Wang and Haotian Hu and Cheng Shen},
year={2021},
eprint={2104.07566},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
```bibtex
@misc{anwar2019densely,
title={Densely Residual Laplacian Super-Resolution},
author={Saeed Anwar and Nick Barnes},
year={2019},
eprint={1906.12021},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/drln-bam
| null |
[
"transformers",
"DRLN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:1906.12021",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1906.12021",
"2104.07566"
] |
[] |
TAGS
#transformers #DRLN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1906.12021 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #region-us
|
Densely Residual Laplacian Super-Resolution (DRLN)
==================================================
DRLN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Densely Residual Laplacian Super-resolution by Anwar et al. (2020) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm, namely the Densely Residual Laplacian Network (DRLN). The proposed network employs a cascading residual-on-the-residual structure that allows low-frequency information to flow through so the network can focus on learning high- and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual block settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features and the inter- and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
This model also applies the balanced attention (BAM) method invented by Wang et al. (2021) to further improve the results.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #DRLN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1906.12021 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Densely Residual Laplacian Super-Resolution (DRLN)
DRLN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Densely Residual Laplacian Super-resolution](https://arxiv.org/abs/1906.12021) by Anwar et al. (2020) and first released in [this repository](https://github.com/saeed-anwar/DRLN).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm, namely the Densely Residual Laplacian Network (DRLN). The proposed network employs a cascading residual-on-the-residual structure that allows low-frequency information to flow through so the network can focus on learning high- and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual block settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features and the inter- and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
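To illustrate the residual-on-the-residual idea in the description above, here is a generic toy sketch in PyTorch: residual blocks are grouped, and the whole group also has its own skip connection so low-frequency information can bypass it. This is not the actual DRLN block definition; all names and sizes are placeholders.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)                    # local (inner) residual

class ResidualGroup(nn.Module):
    def __init__(self, channels: int, n_blocks: int = 3):
        super().__init__()
        self.blocks = nn.Sequential(*[ResBlock(channels) for _ in range(n_blocks)])

    def forward(self, x):
        return x + self.blocks(x)                  # residual on the residual

x = torch.randn(1, 64, 32, 32)
y = ResidualGroup(64)(x)                           # same shape as the input
```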
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import DrlnModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = DrlnModel.from_pretrained('eugenesiow/drln', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
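The five-crop idea can be sketched as follows; this toy function only illustrates the geometry, while the `augment_five_crop` helper used in the snippet below performs the actual augmentation on HR/LR pairs. The input file name and crop size are placeholders.

```python
from PIL import Image

def five_crop(image: Image.Image, size: int):
    """Return five crops: the four corners and the centre of the image."""
    w, h = image.size
    boxes = [
        (0, 0, size, size),                         # top-left corner
        (w - size, 0, w, size),                     # top-right corner
        (0, h - size, size, h),                     # bottom-left corner
        (w - size, h - size, w, h),                 # bottom-right corner
        ((w - size) // 2, (h - size) // 2,
         (w - size) // 2 + size, (h - size) // 2 + size),  # centre
    ]
    return [image.crop(box) for box in boxes]

crops = five_crop(Image.open('hr_image.png'), 512)  # hypothetical input image
```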
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, DrlnModel, DrlnConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = DrlnConfig(
scale=4, # train a model to upscale 4x
)
model = DrlnModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |drln |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**38.22/0.9614** |
|Set5 |3x |30.39/0.8678 |**35.31/0.9423** |
|Set5 |4x |28.42/0.8101 |**32.55/0.899** |
|Set14 |2x |30.22/0.8683 |**34.01/0.9211** |
|Set14 |3x |27.53/0.7737 |**31.21/0.8619** |
|Set14 |4x |25.99/0.7023 |**28.96/0.7901** |
|BSD100 |2x |29.55/0.8425 |**33.93/0.9269** |
|BSD100 |3x |27.20/0.7382 |**29.77/0.8223** |
|BSD100 |4x |25.96/0.6672 |**28.65/0.7692** |
|Urban100 |2x |26.66/0.8408 |**32.82/0.934** |
|Urban100 |3x | |**29.79/0.8825** |
|Urban100 |4x |23.14/0.6573 |**26.56/0.7998** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{anwar2019densely,
title={Densely Residual Laplacian Super-Resolution},
author={Saeed Anwar and Nick Barnes},
year={2019},
eprint={1906.12021},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/drln
| null |
[
"transformers",
"DRLN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:1906.12021",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1906.12021",
"2104.07566"
] |
[] |
TAGS
#transformers #DRLN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1906.12021 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Densely Residual Laplacian Super-Resolution (DRLN)
==================================================
DRLN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Densely Residual Laplacian Super-resolution by Anwar et al. (2020) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
Super-Resolution convolutional neural networks have recently demonstrated high-quality restoration for single images. However, existing algorithms often require very deep architectures and long training times. Furthermore, current convolutional neural networks for super-resolution are unable to exploit features at multiple scales and weigh them equally, limiting their learning capability. In this exposition, we present a compact and accurate super-resolution algorithm, namely the Densely Residual Laplacian Network (DRLN). The proposed network employs a cascading residual-on-the-residual structure that allows low-frequency information to flow through so the network can focus on learning high- and mid-level features. In addition, deep supervision is achieved via the densely concatenated residual block settings, which also helps in learning from high-level complex features. Moreover, we propose Laplacian attention to model the crucial features and the inter- and intra-level dependencies between the feature maps. Furthermore, comprehensive quantitative and qualitative evaluations on low-resolution, noisy low-resolution, and real historical image benchmark datasets illustrate that our DRLN algorithm performs favorably against the state-of-the-art methods visually and accurately.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #DRLN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1906.12021 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR)
EDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Enhanced Deep Residual Networks for Single Image Super-Resolution](https://arxiv.org/abs/1707.02921) by Lim et al. (2017) and first released in [this repository](https://github.com/sanghyun-son/EDSR-PyTorch).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and EDSR upscaling x2.

## Model description
EDSR is a model that uses both a deeper and a wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, so normalizing intermediate features may not be desirable); instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE); the authors showed better performance empirically, and it requires less computation.
This is a base model (~5mb vs ~100mb) that includes just 16 ResBlocks and 64 channels.
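The residual block described above can be sketched roughly as follows. This is an illustration of the idea (two convolutions, no batch normalization, a constant residual scaling factor, training with L1 loss) rather than the exact `super_image` implementation, and the scaling value shown is a placeholder.

```python
import torch
import torch.nn as nn

class EdsrStyleResBlock(nn.Module):
    def __init__(self, channels: int = 64, res_scale: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.res_scale = res_scale                    # constant scaling, no BatchNorm

    def forward(self, x):
        return x + self.res_scale * self.body(x)      # scaled residual connection

l1_loss = nn.L1Loss()                                 # L1 (absolute error) rather than L2/MSE
out = EdsrStyleResBlock()(torch.randn(1, 64, 48, 48)) # same shape as the input
```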
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import EdsrModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = EdsrModel.from_pretrained('eugenesiow/edsr-base', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, EdsrModel, EdsrConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = EdsrConfig(
scale=4, # train a model to upscale 4x
)
model = EdsrModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
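The Bicubic baseline column can be approximated by downscaling and then re-upscaling the ground truth with bicubic interpolation, as in the sketch below; the file name is a placeholder and exact reported numbers also depend on colour-space and border-cropping conventions.

```python
import numpy as np
from PIL import Image
from skimage.metrics import peak_signal_noise_ratio

scale = 2
hr = Image.open('hr_image.png')                     # hypothetical ground-truth image
lr = hr.resize((hr.width // scale, hr.height // scale), resample=Image.BICUBIC)
bicubic = lr.resize(hr.size, resample=Image.BICUBIC)  # bicubic baseline upscaling

psnr = peak_signal_noise_ratio(np.array(hr), np.array(bicubic), data_range=255)
print(f'Bicubic x{scale} baseline PSNR: {psnr:.2f} dB')
```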
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |edsr-base |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**38.02/0.9607** |
|Set5 |3x |30.39/0.8678 |**35.04/0.9403** |
|Set5 |4x |28.42/0.8101 |**32.12/0.8947** |
|Set14 |2x |30.22/0.8683 |**33.57/0.9172** |
|Set14 |3x |27.53/0.7737 |**30.93/0.8567** |
|Set14 |4x |25.99/0.7023 |**28.60/0.7815** |
|BSD100 |2x |29.55/0.8425 |**32.21/0.8999** |
|BSD100 |3x |27.20/0.7382 |**29.65/0.8204** |
|BSD100 |4x |25.96/0.6672 |**27.61/0.7363** |
|Urban100 |2x |26.66/0.8408 |**32.04/0.9276** |
|Urban100 |3x | |**29.23/0.8723** |
|Urban100 |4x |23.14/0.6573 |**26.02/0.7832** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@InProceedings{Lim_2017_CVPR_Workshops,
author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/edsr-base
| null |
[
"transformers",
"EDSR",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:1707.02921",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1707.02921",
"2104.07566"
] |
[] |
TAGS
#transformers #EDSR #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1707.02921 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR)
========================================================================
EDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Enhanced Deep Residual Networks for Single Image Super-Resolution by Lim et al. (2017) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and EDSR upscaling x2.
!Comparing Bicubic upscaling against EDSR x2 upscaling on Set5 Image 4
Model description
-----------------
EDSR is a model that uses both a deeper and a wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, so normalizing intermediate features may not be desirable); instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE); the authors showed better performance empirically, and it requires less computation.
This is a base model (~5mb vs ~100mb) that includes just 16 ResBlocks and 64 channels.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against x2 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against x2 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #EDSR #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1707.02921 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against x2 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR)
EDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Enhanced Deep Residual Networks for Single Image Super-Resolution](https://arxiv.org/abs/1707.02921) by Lim et al. (2017) and first released in [this repository](https://github.com/sanghyun-son/EDSR-PyTorch).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and EDSR upscaling x2.

## Model description
EDSR is a model that uses both a deeper and a wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, so normalizing intermediate features may not be desirable); instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE); the authors showed better performance empirically, and it requires less computation.
This is a base model (~5mb vs ~100mb) that includes just 16 ResBlocks and 64 channels.
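The "up-scaling at the end of the network" design can be sketched with a sub-pixel (PixelShuffle) tail as below; this is illustrative only, the channel counts are placeholders, and the library's actual upsampler may differ in detail.

```python
import torch
import torch.nn as nn

scale, channels = 2, 64
upsampler = nn.Sequential(
    nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),  # expand channels by scale^2
    nn.PixelShuffle(scale),                                    # rearrange to (H*scale, W*scale)
    nn.Conv2d(channels, 3, 3, padding=1),                      # project back to RGB
)

lr_features = torch.randn(1, channels, 64, 64)                 # body output at LR resolution
sr_image = upsampler(lr_features)                              # shape: (1, 3, 128, 128)
```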
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import EdsrModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = EdsrModel.from_pretrained('eugenesiow/edsr', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, EdsrModel, EdsrConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = EdsrConfig(
scale=4, # train a model to upscale 4x
)
model = EdsrModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |edsr |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**38.19/0.9612** |
|Set5 |3x |30.39/0.8678 |**35.31/0.9421** |
|Set5 |4x |28.42/0.8101 |**32.5/0.8986** |
|Set14 |2x |30.22/0.8683 |**33.99/0.9215** |
|Set14 |3x |27.53/0.7737 |**31.18/0.862** |
|Set14 |4x |25.99/0.7023 |**28.92/0.7899** |
|BSD100 |2x |29.55/0.8425 |**33.89/0.9266** |
|BSD100 |3x |27.20/0.7382 |**29.77/0.8224** |
|BSD100 |4x |25.96/0.6672 |**28.62/0.7689** |
|Urban100 |2x |26.66/0.8408 |**32.68/0.9331** |
|Urban100 |3x | |**29.75/0.8825** |
|Urban100 |4x |23.14/0.6573 |**26.53/0.7995** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@InProceedings{Lim_2017_CVPR_Workshops,
author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
month = {July},
year = {2017}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/edsr
| null |
[
"transformers",
"EDSR",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:1707.02921",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1707.02921",
"2104.07566"
] |
[] |
TAGS
#transformers #EDSR #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1707.02921 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR)
========================================================================
EDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Enhanced Deep Residual Networks for Single Image Super-Resolution by Lim et al. (2017) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and EDSR upscaling x2.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
EDSR is a model that uses both a deeper and a wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, so normalizing intermediate features may not be desirable); instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE); the authors showed better performance empirically, and it requires less computation.
This is a base model (~5mb vs ~100mb) that includes just 16 ResBlocks and 64 channels.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are reported as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #EDSR #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1707.02921 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Holistic Attention Network (HAN)
HAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Single Image Super-Resolution via a Holistic Attention Network](https://arxiv.org/abs/2008.08767) by Niu et al. (2020) and first released in [this repository](https://github.com/wwlCape/HAN).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
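To make the layer attention idea more concrete, here is a minimal, illustrative PyTorch sketch of a LAM-style block. It is not the reference HAN implementation: the class name, the learnable scale and the assumed input layout `(batch, n_layers, channels, height, width)` are choices made for this example only.
```python
import torch
import torch.nn as nn

class SimpleLayerAttention(nn.Module):
    """Illustrative layer attention: re-weights a stack of per-layer feature maps
    by their pairwise correlations, in the spirit of HAN's LAM."""
    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(1.0))  # learnable residual scale

    def forward(self, feats):
        # feats: (batch, n_layers, channels, height, width)
        b, n, c, h, w = feats.shape
        flat = feats.view(b, n, -1)                      # (b, n, c*h*w)
        corr = torch.bmm(flat, flat.transpose(1, 2))     # (b, n, n) layer-to-layer correlation
        attn = torch.softmax(corr, dim=-1)
        out = torch.bmm(attn, flat).view(b, n, c, h, w)  # re-weight each layer's features
        return self.scale * out + feats                  # residual connection

feats = torch.randn(1, 4, 64, 32, 32)       # e.g. 4 intermediate layers of 64-channel features
print(SimpleLayerAttention()(feats).shape)  # torch.Size([1, 4, 64, 32, 32])
```
The published network additionally uses the channel-spatial attention module (CSAM) mentioned above; see the linked repository for the full implementation.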
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import HanModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = HanModel.from_pretrained('eugenesiow/han', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, HanModel, HanConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = HanConfig(
scale=4, # train a model to upscale 4x
)
model = HanModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
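For reference, PSNR is a simple function of the mean squared error between the ground-truth and upscaled images. The snippet below is a generic NumPy sketch, not the library's evaluation code; published numbers are typically computed on the luminance (Y) channel with a small border crop, so this raw RGB version will not reproduce the table exactly.
```python
import numpy as np

def psnr(hr: np.ndarray, sr: np.ndarray, data_range: float = 255.0) -> float:
    """Peak signal-to-noise ratio between a ground-truth (hr) and an upscaled (sr) image."""
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(data_range ** 2 / mse)

hr = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)                                 # fake ground truth
sr = np.clip(hr.astype(int) + np.random.randint(-3, 4, hr.shape), 0, 255).astype(np.uint8)  # fake prediction
print(round(psnr(hr, sr), 2))  # small perturbations give roughly 40 dB
```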
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are reported as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |han |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**** |
|Set5 |3x |30.39/0.8678 |**** |
|Set5 |4x |28.42/0.8101 |**31.21/0.8778** |
|Set14 |2x |30.22/0.8683 |**** |
|Set14 |3x |27.53/0.7737 |**** |
|Set14 |4x |25.99/0.7023 |**28.18/0.7712** |
|BSD100 |2x |29.55/0.8425 |**** |
|BSD100 |3x |27.20/0.7382 |**** |
|BSD100 |4x |25.96/0.6672 |**28.09/0.7533** |
|Urban100 |2x |26.66/0.8408 |**** |
|Urban100 |3x | |**** |
|Urban100 |4x |23.14/0.6573 |**25.1/0.7497** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{niu2020single,
title={Single Image Super-Resolution via a Holistic Attention Network},
author={Ben Niu and Weilei Wen and Wenqi Ren and Xiangde Zhang and Lianping Yang and Shuzhen Wang and Kaihao Zhang and Xiaochun Cao and Haifeng Shen},
year={2020},
eprint={2008.08767},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/han
| null |
[
"transformers",
"HAN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:2008.08767",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2008.08767",
"2104.07566"
] |
[] |
TAGS
#transformers #HAN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2008.08767 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Holistic Attention Network (HAN)
================================
HAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Single Image Super-Resolution via a Holistic Attention Network by Niu et al. (2020) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
Informative features play a crucial role in the single image super-resolution task. Channel attention has been demonstrated to be effective for preserving information-rich features in each layer. However, channel attention treats each convolution layer as a separate process that misses the correlation among different layers. To address this problem, we propose a new holistic attention network (HAN), which consists of a layer attention module (LAM) and a channel-spatial attention module (CSAM), to model the holistic interdependencies among layers, channels, and positions. Specifically, the proposed LAM adaptively emphasizes hierarchical features by considering correlations among layers. Meanwhile, CSAM learns the confidence at all the positions of each channel to selectively capture more informative features. Extensive experiments demonstrate that the proposed HAN performs favorably against the state-of-the-art single image super-resolution approaches.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are reported as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #HAN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2008.08767 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Multi-Scale Deep Super-Resolution System (MDSR)
MDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Enhanced Deep Residual Networks for Single Image Super-Resolution](https://arxiv.org/abs/1707.02921) by Lim et al. (2017) and first released in [this repository](https://github.com/sanghyun-son/EDSR-PyTorch).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
MDSR is a model that uses a deeper and wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, so normalizing intermediate features may not be desirable); instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE): the authors showed better performance empirically, and it requires less computation.
This model also applies the balanced attention (BAM) method invented by [Wang et al. (2021)](https://arxiv.org/abs/2104.07566) to further improve the results.
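As an illustration of the constant-scaling residual block described above, here is a minimal PyTorch sketch. It is not the code used to train this checkpoint; the channel count and `res_scale` value are example defaults.
```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """EDSR/MDSR-style residual block: conv-ReLU-conv, no batch norm,
    output scaled by a small constant before the skip connection."""
    def __init__(self, channels=64, res_scale=0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )
        self.res_scale = res_scale

    def forward(self, x):
        return x + self.res_scale * self.body(x)  # constant scaling keeps deep stacks stable

x = torch.randn(1, 64, 48, 48)
print(ResBlock()(x).shape)  # torch.Size([1, 64, 48, 48])
```
A stack of such blocks is trained with an L1 criterion (`nn.L1Loss()` in PyTorch) rather than MSE, as described above.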
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import MdsrModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = MdsrModel.from_pretrained('eugenesiow/mdsr-bam', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, MdsrModel, MdsrConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = MdsrConfig(
scale=4, # train a model to upscale 4x
bam=True, # apply balanced attention to the network
)
model = MdsrModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are reported as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |mdsr-bam |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**38/0.9607** |
|Set5 |3x |30.39/0.8678 |**35.07/0.9402** |
|Set5 |4x |28.42/0.8101 |**32.19/0.8949** |
|Set14 |2x |30.22/0.8683 |**33.68/0.9182** |
|Set14 |3x |27.53/0.7737 |**31.04/0.8582** |
|Set14 |4x |25.99/0.7023 |**28.73/0.7847** |
|BSD100 |2x |29.55/0.8425 |**33.77/0.9253** |
|BSD100 |3x |27.20/0.7382 |**29.62/0.8188** |
|BSD100 |4x |25.96/0.6672 |**28.5/0.7645** |
|Urban100 |2x |26.66/0.8408 |**32.04/0.9272** |
|Urban100 |3x | |**29.16/0.8717** |
|Urban100 |4x |23.14/0.6573 |**26.02/0.7834** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{wang2021bam,
title={BAM: A Lightweight and Efficient Balanced Attention Mechanism for Single Image Super Resolution},
author={Fanyi Wang and Haotian Hu and Cheng Shen},
year={2021},
eprint={2104.07566},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
```bibtex
@misc{lim2017enhanced,
    title={Enhanced Deep Residual Networks for Single Image Super-Resolution},
    author={Bee Lim and Sanghyun Son and Heewon Kim and Seungjun Nah and Kyoung Mu Lee},
    year={2017},
    eprint={1707.02921},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/mdsr-bam
| null |
[
"transformers",
"MDSR",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:1707.02921",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1707.02921",
"2104.07566"
] |
[] |
TAGS
#transformers #MDSR #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1707.02921 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #region-us
|
Multi-Scale Deep Super-Resolution System (MDSR)
===============================================
MDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Enhanced Deep Residual Networks for Single Image Super-Resolution by Lim et al. (2017) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
MDSR is a model that uses a deeper and wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, so normalizing intermediate features may not be desirable); instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE): the authors showed better performance empirically, and it requires less computation.
This model also applies the balanced attention (BAM) method invented by Wang et al. (2021) to further improve the results.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are reported as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #MDSR #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1707.02921 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Multi-Scale Deep Super-Resolution System (MDSR)
MDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Enhanced Deep Residual Networks for Single Image Super-Resolution](https://arxiv.org/abs/1707.02921) by Lim et al. (2017) and first released in [this repository](https://github.com/sanghyun-son/EDSR-PyTorch).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
MDSR is a model that uses a deeper and wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, so normalizing intermediate features may not be desirable); instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE): the authors showed better performance empirically, and it requires less computation.
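The "up-scaling at the end of the network" is typically implemented with sub-pixel (PixelShuffle) layers. The sketch below is only illustrative; the helper name, layer sizes and the 2x/3x/4x factorisation are assumptions, not the repository's exact tail.
```python
import torch
import torch.nn as nn

def make_upsampler(channels=64, scale=4):
    """Sub-pixel (PixelShuffle) tail for the 2x/3x/4x scales used here, placed at the
    end of the network so the residual blocks run at the low-resolution size."""
    layers, remaining = [], scale
    while remaining > 1:
        factor = 3 if remaining % 3 == 0 else 2
        layers += [
            nn.Conv2d(channels, channels * factor ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(factor),  # (C*f*f, H, W) -> (C, H*f, W*f)
        ]
        remaining //= factor
    layers.append(nn.Conv2d(channels, 3, kernel_size=3, padding=1))  # back to RGB
    return nn.Sequential(*layers)

lr_feats = torch.randn(1, 64, 32, 32)
print(make_upsampler(scale=4)(lr_feats).shape)  # torch.Size([1, 3, 128, 128])
```
In a multi-scale setting, one such tail per target scale can be attached to a shared feature-extraction trunk.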
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import MdsrModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = MdsrModel.from_pretrained('eugenesiow/mdsr', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, MdsrModel, MdsrConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = MdsrConfig(
scale=4, # train a model to upscale 4x
)
model = MdsrModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are reported as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |mdsr |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**38.04/0.9608** |
|Set5 |3x |30.39/0.8678 |**35.11/0.9406** |
|Set5 |4x |28.42/0.8101 |**32.26/0.8953** |
|Set14 |2x |30.22/0.8683 |**33.71/0.9184** |
|Set14 |3x |27.53/0.7737 |**31.06/0.8593** |
|Set14 |4x |25.99/0.7023 |**28.77/0.7856** |
|BSD100 |2x |29.55/0.8425 |**33.79/0.9256** |
|BSD100 |3x |27.20/0.7382 |**29.66/0.8196** |
|BSD100 |4x |25.96/0.6672 |**28.53/0.7653** |
|Urban100 |2x |26.66/0.8408 |**32.14/0.9283** |
|Urban100 |3x | |**29.29/0.8738** |
|Urban100 |4x |23.14/0.6573 |**26.07/0.7851** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{lim2017enhanced,
    title={Enhanced Deep Residual Networks for Single Image Super-Resolution},
    author={Bee Lim and Sanghyun Son and Heewon Kim and Seungjun Nah and Kyoung Mu Lee},
    year={2017},
    eprint={1707.02921},
    archivePrefix={arXiv},
    primaryClass={cs.CV}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/mdsr
| null |
[
"transformers",
"MDSR",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:1707.02921",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1707.02921",
"2104.07566"
] |
[] |
TAGS
#transformers #MDSR #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1707.02921 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Multi-Scale Deep Super-Resolution System (MDSR)
===============================================
MDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Enhanced Deep Residual Networks for Single Image Super-Resolution by Lim et al. (2017) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
MDSR is a model that uses a deeper and wider architecture (32 ResBlocks and 256 channels) to improve performance. It uses both global and local skip connections, and up-scaling is done at the end of the network. It doesn't use batch normalization layers (input and output have similar distributions, so normalizing intermediate features may not be desirable); instead it uses constant scaling layers to ensure stable training. An L1 loss function (absolute error) is used instead of L2 (MSE): the authors showed better performance empirically, and it requires less computation.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are reported as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #MDSR #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1707.02921 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Multi-scale Residual Network for Image Super-Resolution (MSRN)
MSRN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Multi-scale Residual Network for Image Super-Resolution](https://openaccess.thecvf.com/content_ECCV_2018/html/Juncheng_Li_Multi-scale_Residual_Network_ECCV_2018_paper.html) by Li et al. (2018) and first released in [this repository](https://github.com/MIVRC/MSRN-PyTorch).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and model upscaling x2.

## Model description
The MSRN model proposes a feature extraction structure called the multi-scale residual block. This module can "adaptively detect image features at different scales" and "exploit the potential features of the image".
This model also applies the balanced attention (BAM) method invented by [Wang et al. (2021)](https://arxiv.org/abs/2104.07566) to further improve the results.
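The sketch below shows one way such a multi-scale residual block can be put together in PyTorch: two parallel branches with 3x3 and 5x5 kernels that exchange features before being fused by a 1x1 convolution. It is a simplified illustration, not the reference implementation; the class name and channel widths are assumptions.
```python
import torch
import torch.nn as nn

class MultiScaleResBlock(nn.Module):
    """Simplified multi-scale residual block: parallel 3x3 and 5x5 branches that
    exchange features, fused by a 1x1 convolution and a local residual connection."""
    def __init__(self, channels=64):
        super().__init__()
        self.conv3_1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv5_1 = nn.Conv2d(channels, channels, 5, padding=2)
        self.conv3_2 = nn.Conv2d(channels * 2, channels * 2, 3, padding=1)
        self.conv5_2 = nn.Conv2d(channels * 2, channels * 2, 5, padding=2)
        self.fuse = nn.Conv2d(channels * 4, channels, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        s3 = self.act(self.conv3_1(x))
        s5 = self.act(self.conv5_1(x))
        mixed = torch.cat([s3, s5], dim=1)           # branches see each other's features
        s3 = self.act(self.conv3_2(mixed))
        s5 = self.act(self.conv5_2(mixed))
        out = self.fuse(torch.cat([s3, s5], dim=1))  # 1x1 conv back to `channels`
        return out + x                               # local residual learning

x = torch.randn(1, 64, 32, 32)
print(MultiScaleResBlock()(x).shape)  # torch.Size([1, 64, 32, 32])
```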
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import MsrnModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = MsrnModel.from_pretrained('eugenesiow/msrn-bam', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, MsrnModel, MsrnConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = MsrnConfig(
scale=4, # train a model to upscale 4x
bam=True, # apply balanced attention to the network
)
model = MsrnModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are reported as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |msrn-bam |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**38.02/0.9608** |
|Set5 |3x |30.39/0.8678 |**35.13/0.9408** |
|Set5 |4x |28.42/0.8101 |**32.26/0.8955** |
|Set14 |2x |30.22/0.8683 |**33.73/0.9186** |
|Set14 |3x |27.53/0.7737 |**31.06/0.8588** |
|Set14 |4x |25.99/0.7023 |**28.78/0.7859** |
|BSD100 |2x |29.55/0.8425 |**33.78/0.9253** |
|BSD100 |3x |27.20/0.7382 |**29.65/0.8196** |
|BSD100 |4x |25.96/0.6672 |**28.51/0.7651** |
|Urban100 |2x |26.66/0.8408 |**32.08/0.9276** |
|Urban100 |3x | |**29.26/0.8736** |
|Urban100 |4x |23.14/0.6573 |**26.10/0.7857** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{wang2021bam,
title={BAM: A Lightweight and Efficient Balanced Attention Mechanism for Single Image Super Resolution},
author={Fanyi Wang and Haotian Hu and Cheng Shen},
year={2021},
eprint={2104.07566},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
```bibtex
@InProceedings{Li_2018_ECCV,
author = {Li, Juncheng and Fang, Faming and Mei, Kangfu and Zhang, Guixu},
title = {Multi-scale Residual Network for Image Super-Resolution},
booktitle = {The European Conference on Computer Vision (ECCV)},
month = {September},
year = {2018}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/msrn-bam
| null |
[
"transformers",
"MSRN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.07566"
] |
[] |
TAGS
#transformers #MSRN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #region-us
|
Multi-scale Residual Network for Image Super-Resolution (MSRN)
==============================================================
MSRN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Multi-scale Residual Network for Image Super-Resolution by Li et al. (2018) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and model upscaling x2.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
The MSRN model proposes a feature extraction structure called the multi-scale residual block. This module can "adaptively detect image features at different scales" and "exploit the potential features of the image".
This model also applies the balanced attention (BAM) method invented by Wang et al. (2021) to further improve the results.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #MSRN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Multi-scale Residual Network for Image Super-Resolution (MSRN)
MSRN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Multi-scale Residual Network for Image Super-Resolution](https://openaccess.thecvf.com/content_ECCV_2018/html/Juncheng_Li_Multi-scale_Residual_Network_ECCV_2018_paper.html) by Li et al. (2018) and first released in [this repository](https://github.com/MIVRC/MSRN-PyTorch).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and model upscaling x2.

## Model description
The MSRN model proposes a feature extraction structure called the multi-scale residual block. This module can "adaptively detect image features at different scales" and "exploit the potential features of the image".
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import MsrnModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = MsrnModel.from_pretrained('eugenesiow/msrn', scale=4) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_4x.png') # save the output 4x scaled image to `./scaled_4x.png`
ImageLoader.save_compare(inputs, preds, './scaled_4x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) training images, augmented to 4000 images, with a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
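The idea behind the five-crop augmentation can be sketched as follows (illustrative only; the library's `augment_five_crop` is the actual implementation, and the crop size here is an assumption):
```python
from PIL import Image

def five_crop(img: Image.Image, crop_w: int = 512, crop_h: int = 512):
    """Return the four corner crops and the centre crop of an image."""
    w, h = img.size
    boxes = [
        (0, 0, crop_w, crop_h),                      # top-left corner
        (w - crop_w, 0, w, crop_h),                  # top-right corner
        (0, h - crop_h, crop_w, h),                  # bottom-left corner
        (w - crop_w, h - crop_h, w, h),              # bottom-right corner
        ((w - crop_w) // 2, (h - crop_h) // 2,
         (w + crop_w) // 2, (h + crop_h) // 2),      # centre
    ]
    return [img.crop(box) for box in boxes]
```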
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, MsrnModel, MsrnConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = MsrnConfig(
scale=4, # train a model to upscale 4x
)
model = MsrnModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
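SSIM can be computed, for example, with scikit-image (a sketch only; `channel_axis` requires a recent scikit-image version, and the reported numbers come from the library's own evaluation pipeline):
```python
import numpy as np
from skimage.metrics import structural_similarity

def ssim(hr: np.ndarray, sr: np.ndarray) -> float:
    """Structural similarity between two uint8 RGB images of the same shape."""
    return structural_similarity(hr, sr, channel_axis=-1, data_range=255)
```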
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are reported as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |msrn |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**38.08/0.9609** |
|Set5 |3x |30.39/0.8678 |**35.12/0.9409** |
|Set5 |4x |28.42/0.8101 |**32.19/0.8951** |
|Set14 |2x |30.22/0.8683 |**33.75/0.9183** |
|Set14 |3x |27.53/0.7737 |**31.08/0.8593** |
|Set14 |4x |25.99/0.7023 |**28.78/0.7862** |
|BSD100 |2x |29.55/0.8425 |**33.82/0.9258** |
|BSD100 |3x |27.20/0.7382 |**29.67/0.8198** |
|BSD100 |4x |25.96/0.6672 |**28.53/0.7657** |
|Urban100 |2x |26.66/0.8408 |**32.14/0.9287** |
|Urban100 |3x | |**29.31/0.8743** |
|Urban100 |4x |23.14/0.6573 |**26.12/0.7866** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@InProceedings{Agustsson_2017_CVPR_Workshops,
author = {Agustsson, Eirikur and Timofte, Radu},
title = {NTIRE 2017 Challenge on Single Image Super-Resolution: Dataset and Study},
booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
url = "http://www.vision.ee.ethz.ch/~timofter/publications/Agustsson-CVPRW-2017.pdf",
month = {July},
year = {2017}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/msrn
| null |
[
"transformers",
"MSRN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2104.07566"
] |
[] |
TAGS
#transformers #MSRN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Multi-scale Residual Network for Image Super-Resolution (MSRN)
==============================================================
MSRN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Multi-scale Residual Network for Image Super-Resolution by Li et al. (2018) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and model upscaling x2.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
The MSRN model proposes a feature extraction structure called the multi-scale residual block. This module can "adaptively detect image features at different scales" and "exploit the potential features of the image".
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #MSRN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Pixel Attention Network (PAN)
PAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Efficient Image Super-Resolution Using Pixel Attention](https://arxiv.org/abs/2010.01073) by Zhao et al. (2020) and first released in [this repository](https://github.com/zhaohengyuan1/PAN).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
The PAN model proposes a lightweight convolutional neural network for image super resolution. Pixel attention (PA) is similar to channel attention and spatial attention in formulation. PA however produces 3D attention maps instead of a 1D attention vector or a 2D map. This attention scheme introduces fewer additional parameters but generates better SR results.
This model also applies the balanced attention (BAM) method invented by [Wang et al. (2021)](https://arxiv.org/abs/2104.07566) to further improve the results.
The model is very lightweight with the model being just 260k to 270k parameters (~1mb).
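To make the pixel attention idea described above concrete, here is a minimal PyTorch sketch of a PA block (a 1×1 convolution followed by a sigmoid produces a 3D attention map that rescales the features element-wise); it is illustrative only and not the exact layer used by the library:
```python
import torch
import torch.nn as nn

class PixelAttention(nn.Module):
    """Illustrative pixel attention: a 3D (C x H x W) attention map from a 1x1 conv + sigmoid."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, kernel_size=1)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        attention = self.sigmoid(self.conv(x))  # same shape as x, one weight per pixel and channel
        return x * attention                    # element-wise rescaling of the features

features = torch.randn(1, 40, 64, 64)  # assumed feature map size for the example
out = PixelAttention(40)(features)
```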
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import PanModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = PanModel.from_pretrained('eugenesiow/pan-bam', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) training images, augmented to 4000 images, with a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, PanModel, PanConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = PanConfig(
scale=4, # train a model to upscale 4x
bam=True, # apply balanced attention to the network
)
model = PanModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are reported as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |pan-bam |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**37.7/0.9596** |
|Set5 |3x |30.39/0.8678 |**34.62/0.9371** |
|Set5 |4x |28.42/0.8101 |**31.9/0.8911** |
|Set14 |2x |30.22/0.8683 |**33.4/0.9161** |
|Set14 |3x |27.53/0.7737 |**30.83/0.8545** |
|Set14 |4x |25.99/0.7023 |**28.54/0.7795** |
|BSD100 |2x |29.55/0.8425 |**33.6/0.9234** |
|BSD100 |3x |27.20/0.7382 |**29.47/0.8153** |
|BSD100 |4x |25.96/0.6672 |**28.32/0.7591** |
|Urban100 |2x |26.66/0.8408 |**31.35/0.92** |
|Urban100 |3x | |**28.64/0.861** |
|Urban100 |4x |23.14/0.6573 |**25.6/0.7691** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{wang2021bam,
title={BAM: A Lightweight and Efficient Balanced Attention Mechanism for Single Image Super Resolution},
author={Fanyi Wang and Haotian Hu and Cheng Shen},
year={2021},
eprint={2104.07566},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
```bibtex
@misc{zhao2020efficient,
title={Efficient Image Super-Resolution Using Pixel Attention},
author={Hengyuan Zhao and Xiangtao Kong and Jingwen He and Yu Qiao and Chao Dong},
year={2020},
eprint={2010.01073},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/pan-bam
| null |
[
"transformers",
"PAN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:2010.01073",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.01073",
"2104.07566"
] |
[] |
TAGS
#transformers #PAN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2010.01073 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #region-us
|
Pixel Attention Network (PAN)
=============================
PAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Efficient Image Super-Resolution Using Pixel Attention by Zhao et al. (2020) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
The PAN model proposes a lightweight convolutional neural network for image super resolution. Pixel attention (PA) is similar to channel attention and spatial attention in formulation. PA however produces 3D attention maps instead of a 1D attention vector or a 2D map. This attention scheme introduces fewer additional parameters but generates better SR results.
This model also applies the balanced attention (BAM) method invented by Wang et al. (2021) to further improve the results.
The model is very lightweight with the model being just 260k to 270k parameters (~1mb).
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #PAN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2010.01073 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Pixel Attention Network (PAN)
PAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Efficient Image Super-Resolution Using Pixel Attention](https://arxiv.org/abs/2010.01073) by Zhao et al. (2020) and first released in [this repository](https://github.com/zhaohengyuan1/PAN).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
The PAN model proposes a lightweight convolutional neural network for image super resolution. Pixel attention (PA) is similar to channel attention and spatial attention in formulation. PA however produces 3D attention maps instead of a 1D attention vector or a 2D map. This attention scheme introduces fewer additional parameters but generates better SR results.
The model is very lightweight with the model being just 260k to 270k parameters (~1mb).
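The parameter count of a loaded model can be checked directly, for example (a quick sketch reusing the pretrained weights shown in the usage example below):
```python
from super_image import PanModel

model = PanModel.from_pretrained('eugenesiow/pan', scale=2)
num_params = sum(p.numel() for p in model.parameters())
print(f'{num_params:,} parameters')  # expected to fall in the ~260k-270k range
```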
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import PanModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = PanModel.from_pretrained('eugenesiow/pan', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) training images, augmented to 4000 images, with a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, PanModel, PanConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = PanConfig(
scale=4, # train a model to upscale 4x
)
model = PanModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are reported as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |pan |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**37.77/0.9599** |
|Set5 |3x |30.39/0.8678 |**34.64/0.9376** |
|Set5 |4x |28.42/0.8101 |**31.92/0.8915** |
|Set14 |2x |30.22/0.8683 |**33.42/0.9162** |
|Set14 |3x |27.53/0.7737 |**30.8/0.8544** |
|Set14 |4x |25.99/0.7023 |**28.57/0.7802** |
|BSD100 |2x |29.55/0.8425 |**33.6/0.9235** |
|BSD100 |3x |27.20/0.7382 |**29.47/0.815** |
|BSD100 |4x |25.96/0.6672 |**28.35/0.7595** |
|Urban100 |2x |26.66/0.8408 |**31.31/0.9197** |
|Urban100 |3x | |**28.61/0.8603** |
|Urban100 |4x |23.14/0.6573 |**25.63/0.7692** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{zhao2020efficient,
title={Efficient Image Super-Resolution Using Pixel Attention},
author={Hengyuan Zhao and Xiangtao Kong and Jingwen He and Yu Qiao and Chao Dong},
year={2020},
eprint={2010.01073},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/pan
| null |
[
"transformers",
"PAN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:2010.01073",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"2010.01073",
"2104.07566"
] |
[] |
TAGS
#transformers #PAN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2010.01073 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
Pixel Attention Network (PAN)
=============================
PAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Efficient Image Super-Resolution Using Pixel Attention by Zhao et al. (2020) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
The PAN model proposes a lightweight convolutional neural network for image super resolution. Pixel attention (PA) is similar to channel attention and spatial attention in formulation. PA however produces 3D attention maps instead of a 1D attention vector or a 2D map. This attention scheme introduces fewer additional parameters but generates better SR results.
The model is very lightweight with the model being just 260k to 270k parameters (~1mb).
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #PAN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-2010.01073 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #has_space #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
null |
transformers
|
# Residual Channel Attention Networks (RCAN)
RCAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Image Super-Resolution Using Very Deep Residual Channel Attention Networks](https://arxiv.org/abs/1807.02758) by Zhang et al. (2018) and first released in [this repository](https://github.com/yulunzhang/RCAN).
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.

## Model description
Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.
This model also applies the balanced attention (BAM) method invented by [Wang et al. (2021)](https://arxiv.org/abs/2104.07566) to further improve the results.
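For intuition, here is a minimal PyTorch sketch of the channel attention mechanism described above (global pooling squeezes each channel to a scalar, then a small bottleneck produces per-channel weights that rescale the features); it is illustrative only and not the library's exact implementation:
```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Illustrative channel attention: per-channel rescaling from pooled statistics."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.body = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),                        # squeeze each channel to 1x1
            nn.Conv2d(channels, channels // reduction, 1),  # bottleneck down
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),  # bottleneck up
            nn.Sigmoid(),                                   # per-channel weights in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x * self.body(x)  # rescale channel-wise features

features = torch.randn(1, 64, 48, 48)  # assumed feature map size for the example
out = ChannelAttention(64)(features)
```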
## Intended uses & limitations
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import RcanModel, ImageLoader
from PIL import Image
import requests
url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)
model = RcanModel.from_pretrained('eugenesiow/rcan-bam', scale=2) # scale 2, 3 and 4 models available
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png') # save the output 2x scaled image to `./scaled_2x.png`
ImageLoader.save_compare(inputs, preds, './scaled_2x_compare.png') # save an output comparing the super-image with a bicubic scaling
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Upscale_Images_with_Pretrained_super_image_Models.ipynb "Open in Colab")
## Training data
The models for 2x, 3x and 4x image super resolution were pretrained on [DIV2K](https://huggingface.co/datasets/eugenesiow/Div2k), a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
## Training procedure
### Preprocessing
We follow the pre-processing and training method of [Wang et al.](https://arxiv.org/abs/2104.07566).
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface [datasets](https://huggingface.co/datasets?filter=task_ids:other-other-image-super-resolution) library to download the data:
```bash
pip install datasets
```
The following code gets the data and preprocesses/augments the data.
```python
from datasets import load_dataset
from super_image.data import EvalDataset, TrainDataset, augment_five_crop
augmented_dataset = load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='train')\
.map(augment_five_crop, batched=True, desc="Augmenting Dataset") # download and augment the data with the five_crop method
train_dataset = TrainDataset(augmented_dataset) # prepare the train dataset for loading PyTorch DataLoader
eval_dataset = EvalDataset(load_dataset('eugenesiow/Div2k', 'bicubic_x4', split='validation')) # prepare the eval dataset for the PyTorch DataLoader
```
### Pretraining
The model was trained on GPU. The training code is provided below:
```python
from super_image import Trainer, TrainingArguments, RcanModel, RcanConfig
training_args = TrainingArguments(
output_dir='./results', # output directory
num_train_epochs=1000, # total number of training epochs
)
config = RcanConfig(
scale=4, # train a model to upscale 4x
bam=True, # apply balanced attention to the network
)
model = RcanModel(config)
trainer = Trainer(
model=model, # the instantiated model to be trained
args=training_args, # training arguments, defined above
train_dataset=train_dataset, # training dataset
eval_dataset=eval_dataset # evaluation dataset
)
trainer.train()
```
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Train_super_image_Models.ipynb "Open in Colab")
## Evaluation results
The evaluation metrics include [PSNR](https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio#Quality_estimation_with_PSNR) and [SSIM](https://en.wikipedia.org/wiki/Structural_similarity#Algorithm).
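For reference, PSNR is derived from the mean squared error between the ground-truth HR image and the super-resolved output. The snippet below is a minimal NumPy sketch of that computation (an illustration only, not the exact evaluation code used by the `super_image` library):
```python
import numpy as np

def psnr(hr: np.ndarray, sr: np.ndarray, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB between an HR image and a super-resolved image."""
    mse = np.mean((hr.astype(np.float64) - sr.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_val ** 2) / mse)
```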
Evaluation datasets include:
- Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5)
- Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14)
- BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100)
- Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100)
The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline.
|Dataset |Scale |Bicubic |rcan-bam |
|--- |--- |--- |--- |
|Set5 |2x |33.64/0.9292 |**** |
|Set5 |3x |30.39/0.8678 |**** |
|Set5 |4x |28.42/0.8101 |**30.8/0.8701** |
|Set14 |2x |30.22/0.8683 |**** |
|Set14 |3x |27.53/0.7737 |**** |
|Set14 |4x |25.99/0.7023 |**27.91/0.7648** |
|BSD100 |2x |29.55/0.8425 |**** |
|BSD100 |3x |27.20/0.7382 |**** |
|BSD100 |4x |25.96/0.6672 |**27.91/0.7477** |
|Urban100 |2x |26.66/0.8408 |**** |
|Urban100 |3x | |**** |
|Urban100 |4x |23.14/0.6573 |**24.75/0.7346** |

You can find a notebook to easily run evaluation on pretrained models below:
[](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
## BibTeX entry and citation info
```bibtex
@misc{wang2021bam,
title={BAM: A Lightweight and Efficient Balanced Attention Mechanism for Single Image Super Resolution},
author={Fanyi Wang and Haotian Hu and Cheng Shen},
year={2021},
eprint={2104.07566},
archivePrefix={arXiv},
primaryClass={eess.IV}
}
```
```bibtex
@misc{zhang2018image,
title={Image Super-Resolution Using Very Deep Residual Channel Attention Networks},
author={Yulun Zhang and Kunpeng Li and Kai Li and Lichen Wang and Bineng Zhong and Yun Fu},
year={2018},
eprint={1807.02758},
archivePrefix={arXiv},
primaryClass={cs.CV}
}
```
|
{"license": "apache-2.0", "tags": ["super-image", "image-super-resolution"], "datasets": ["eugenesiow/Div2k", "eugenesiow/Set5", "eugenesiow/Set14", "eugenesiow/BSD100", "eugenesiow/Urban100"], "metrics": ["pnsr", "ssim"]}
|
eugenesiow/rcan-bam
| null |
[
"transformers",
"RCAN",
"super-image",
"image-super-resolution",
"dataset:eugenesiow/Div2k",
"dataset:eugenesiow/Set5",
"dataset:eugenesiow/Set14",
"dataset:eugenesiow/BSD100",
"dataset:eugenesiow/Urban100",
"arxiv:1807.02758",
"arxiv:2104.07566",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1807.02758",
"2104.07566"
] |
[] |
TAGS
#transformers #RCAN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1807.02758 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #region-us
|
Residual Channel Attention Networks (RCAN)
==========================================
RCAN model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper Image Super-Resolution Using Very Deep Residual Channel Attention Networks by Zhang et al. (2018) and first released in this repository.
The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling and model upscaling.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4
Model description
-----------------
Convolutional neural network (CNN) depth is of crucial importance for image super-resolution (SR). However, we observe that deeper networks for image SR are more difficult to train. The low-resolution inputs and features contain abundant low-frequency information, which is treated equally across channels, hence hindering the representational ability of CNNs. To solve these problems, we propose the very deep residual channel attention networks (RCAN). Specifically, we propose a residual in residual (RIR) structure to form very deep network, which consists of several residual groups with long skip connections. Each residual group contains some residual blocks with short skip connections. Meanwhile, RIR allows abundant low-frequency information to be bypassed through multiple skip connections, making the main network focus on learning high-frequency information. Furthermore, we propose a channel attention mechanism to adaptively rescale channel-wise features by considering interdependencies among channels. Extensive experiments show that our RCAN achieves better accuracy and visual improvements against state-of-the-art methods.
This model also applies the balanced attention (BAM) method invented by Wang et al. (2021) to further improve the results.
Intended uses & limitations
---------------------------
You can use the pre-trained models for upscaling your images 2x, 3x and 4x. You can also use the trainer to train a model on your own dataset.
### How to use
The model can be used with the super\_image library:
Here is how to use a pre-trained model to upscale your image:

Training data
-------------
The models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).
Training procedure
------------------
### Preprocessing
We follow the pre-processing and training method of Wang et al..
Low Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.
During training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.
Data augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.
We need the huggingface datasets library to download the data:
The following code gets the data and preprocesses/augments the data.
### Pretraining
The model was trained on GPU. The training code is provided below:

Evaluation results
------------------
The evaluation metrics include PSNR and SSIM.
Evaluation datasets include:
* Set5 - Bevilacqua et al. (2012)
* Set14 - Zeyde et al. (2010)
* BSD100 - Martin et al. (2001)
* Urban100 - Huang et al. (2015)
The results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.
!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2
You can find a notebook to easily run evaluation on pretrained models below:

BibTeX entry and citation info
------------------------------
|
[
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
[
"TAGS\n#transformers #RCAN #super-image #image-super-resolution #dataset-eugenesiow/Div2k #dataset-eugenesiow/Set5 #dataset-eugenesiow/Set14 #dataset-eugenesiow/BSD100 #dataset-eugenesiow/Urban100 #arxiv-1807.02758 #arxiv-2104.07566 #license-apache-2.0 #endpoints_compatible #region-us \n",
"### How to use\n\n\nThe model can be used with the super\\_image library:\n\n\nHere is how to use a pre-trained model to upscale your image:\n\n\n\n\n\nTraining data\n-------------\n\n\nThe models for 2x, 3x and 4x image super resolution were pretrained on DIV2K, a dataset of 800 high-quality (2K resolution) images for training, augmented to 4000 images and uses a dev set of 100 validation images (images numbered 801 to 900).\n\n\nTraining procedure\n------------------",
"### Preprocessing\n\n\nWe follow the pre-processing and training method of Wang et al..\nLow Resolution (LR) images are created by using bicubic interpolation as the resizing method to reduce the size of the High Resolution (HR) images by x2, x3 and x4 times.\nDuring training, RGB patches with size of 64×64 from the LR input are used together with their corresponding HR patches.\nData augmentation is applied to the training set in the pre-processing stage where five images are created from the four corners and center of the original image.\n\n\nWe need the huggingface datasets library to download the data:\n\n\nThe following code gets the data and preprocesses/augments the data.",
"### Pretraining\n\n\nThe model was trained on GPU. The training code is provided below:\n\n\n\n\n\nEvaluation results\n------------------\n\n\nThe evaluation metrics include PSNR and SSIM.\n\n\nEvaluation datasets include:\n\n\n* Set5 - Bevilacqua et al. (2012)\n* Set14 - Zeyde et al. (2010)\n* BSD100 - Martin et al. (2001)\n* Urban100 - Huang et al. (2015)\n\n\nThe results columns below are represented below as 'PSNR/SSIM'. They are compared against a Bicubic baseline.\n\n\n\n!Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2\n\n\nYou can find a notebook to easily run evaluation on pretrained models below:\n\n\n\n\n\nBibTeX entry and citation info\n------------------------------"
] |
feature-extraction
|
transformers
|
Korean Mental Health BERT
This is a BERT model obtained by MLM fine-tuning kcBERT on the dataset below. We judged this dataset to be helpful for addressing mental health problems and used it for domain adaptation; the model can later be used for mental-health-related emotion and state classification and for building a chatbot on top of such a classifier.
We also plan to continue domain-adaptive pretraining (DAPT) on a larger dataset that is scheduled to be released later.
datasets from AIhub
Wellness Dialogue Script Dataset 1 & 2 (about 29,000 examples after deduplication)
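As a minimal usage sketch, the checkpoint can be loaded with the `transformers` library and used as a feature extractor for downstream mental-health classification. The example sentence and the mean-pooling step below are illustrative choices, not part of this release:
```python
import torch
from transformers import AutoTokenizer, AutoModel

model_name = "eunjin/koMHBERT-kcbert-based-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModel.from_pretrained(model_name)

inputs = tokenizer("요즘 잠을 잘 못 자고 계속 불안해요.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Mean-pool the token embeddings into a single sentence vector.
sentence_embedding = outputs.last_hidden_state.mean(dim=1)
print(sentence_embedding.shape)  # torch.Size([1, hidden_size])
```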
@inproceedings{lee2020kcbert, title={KcBERT: Korean Comments BERT}, author={Lee, Junbum}, booktitle={Proceedings of the 32nd Annual Conference on Human and Cognitive Language Technology}, pages={437--440}, year={2020} }
|
{}
|
eunjin/koMHBERT-kcbert-based-v1
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us
|
Korean Mental Health BERT
This is a BERT model obtained by MLM fine-tuning kcBERT on the dataset below. We judged this dataset to be helpful for addressing mental health problems and used it for domain adaptation; the model can later be used for mental-health-related emotion and state classification and for building a chatbot on top of such a classifier.
We also plan to continue domain-adaptive pretraining (DAPT) on a larger dataset that is scheduled to be released later.
datasets from AIhub
Wellness Dialogue Script Dataset 1 & 2 (about 29,000 examples after deduplication)
@inproceedings{lee2020kcbert, title={KcBERT: Korean Comments BERT}, author={Lee, Junbum}, booktitle={Proceedings of the 32nd Annual Conference on Human and Cognitive Language Technology}, pages={437--440}, year={2020} }
|
[] |
[
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us \n"
] |
feature-extraction
|
transformers
|
Korean Mental Health BERT -v2
This is a BERT model obtained by MLM fine-tuning the kcbert-base BERT released on Hugging Face on a dataset crawled from the Psychiatric News (정신건강의학신문). In a setting where mental-health utterance data cannot be collected, we offer this crawled corpus as a substitute. The model can later be used for mental-health-related emotion and state classification and for building a chatbot on top of such a classifier.
Psychiatric News (정신건강의학신문): http://www.psychiatricnews.net
|
{}
|
eunjin/koMHBERT-kcbert-based-v2
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us
|
Korean Mental Health BERT -v2
This is a BERT model obtained by MLM fine-tuning the kcbert-base BERT released on Hugging Face on a dataset crawled from the Psychiatric News (정신건강의학신문). In a setting where mental-health utterance data cannot be collected, we offer this crawled corpus as a substitute. The model can later be used for mental-health-related emotion and state classification and for building a chatbot on top of such a classifier.
Psychiatric News (정신건강의학신문): URL
|
[] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n"
] |
feature-extraction
|
transformers
|
Korean Mental Health BERT
This is a BERT model obtained by MLM fine-tuning the KR-Medium BERT released on Hugging Face on the dataset below. We judged this dataset to be helpful for addressing mental health problems and used it for domain adaptation; the model can later be used for mental-health-related emotion and state classification and for building a chatbot on top of such a classifier. We also plan to continue domain-adaptive pretraining (DAPT) on a larger dataset that is scheduled to be released later.
datasets from AIhub
Wellness Dialogue Script Dataset 1 & 2 (about 29,000 examples after deduplication)
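Since the intended downstream use is emotion/state classification, one way to build on this checkpoint is to add a sequence-classification head and fine-tune it on labeled data. The sketch below is illustrative: `num_labels=5` is a placeholder, and it assumes the repository ships its tokenizer files (otherwise, load the tokenizer of the base KR-Medium model):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "eunjin/koMHBERT-krbert-based-v1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# A randomly initialized classification head is placed on top of the domain-adapted encoder;
# num_labels=5 is a placeholder for however many emotion/state classes your data defines.
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=5)

inputs = tokenizer("오늘은 기분이 한결 나아졌어요.", return_tensors="pt")
logits = model(**inputs).logits
print(logits.shape)  # torch.Size([1, 5])
```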
|
{}
|
eunjin/koMHBERT-krbert-based-v1
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us
|
Korean Mental Health BERT
This is a BERT model obtained by MLM fine-tuning the KR-Medium BERT released on Hugging Face on the dataset below. We judged this dataset to be helpful for addressing mental health problems and used it for domain adaptation; the model can later be used for mental-health-related emotion and state classification and for building a chatbot on top of such a classifier. We also plan to continue domain-adaptive pretraining (DAPT) on a larger dataset that is scheduled to be released later.
datasets from AIhub
Wellness Dialogue Script Dataset 1 & 2 (about 29,000 examples after deduplication)
|
[] |
[
"TAGS\n#transformers #pytorch #jax #bert #feature-extraction #endpoints_compatible #region-us \n"
] |
feature-extraction
|
transformers
|
Korean Mental Health BERT -v2
This is a BERT model obtained by MLM fine-tuning the KR-Medium BERT released on Hugging Face on a dataset crawled from the Psychiatric News (정신건강의학신문).
In a setting where mental-health utterance data cannot be collected, we offer this crawled corpus as a substitute.
The model can later be used for mental-health-related emotion and state classification and for building a chatbot on top of such a classifier.
Psychiatric News (정신건강의학신문): http://www.psychiatricnews.net
|
{}
|
eunjin/koMHBERT-krbert-based-v2
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us
|
Korean Mental Health BERT -v2
This is a BERT model obtained by MLM fine-tuning the KR-Medium BERT released on Hugging Face on a dataset crawled from the Psychiatric News (정신건강의학신문).
In a setting where mental-health utterance data cannot be collected, we offer this crawled corpus as a substitute.
The model can later be used for mental-health-related emotion and state classification and for building a chatbot on top of such a classifier.
Psychiatric News (정신건강의학신문): URL
|
[] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #endpoints_compatible #region-us \n"
] |
text-generation
|
transformers
|
* This model was obtained by fine-tuning skt/kogpt2-base-v2 on wellness and everyday-conversation chatbot data.
* It can be used directly in similar mental-health counseling domains (see the loading sketch below).
* Please see the GitHub repository! https://github.com/eunjiinkim/WellnessChatbot
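A minimal loading sketch with `transformers` is shown below. It is illustrative only: the prompt format the chatbot actually expects (including any special user/system tokens) is documented in the GitHub repository above, and the tokenizer is assumed to be bundled with this checkpoint.
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "eunjin/kogpt2-finetuned-wellness"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "요즘 너무 무기력해요."  # "I feel so listless these days." (illustrative input)
input_ids = tokenizer.encode(prompt, return_tensors="pt")
output_ids = model.generate(input_ids, max_length=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```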
|
{}
|
eunjin/kogpt2-finetuned-wellness
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
* This model was obtained by fine-tuning skt/kogpt2-base-v2 on wellness and everyday-conversation chatbot data.
* It can be used directly in similar mental-health counseling domains.
* Please see the GitHub repository! URL
|
[] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 417310788
- CO2 Emissions (in grams): 6.826886567147602
## Validation Metrics
- Loss: 0.20949310064315796
- Accuracy: 0.9578392621870883
- Precision: 0.9476190476190476
- Recall: 0.9045454545454545
- AUC: 0.9714032720526227
- F1: 0.9255813953488372
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/evandrodiniz/autonlp-api-boamente-417310788
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("evandrodiniz/autonlp-api-boamente-417310788", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("evandrodiniz/autonlp-api-boamente-417310788", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "unk", "tags": "autonlp", "datasets": ["evandrodiniz/autonlp-data-api-boamente"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 6.826886567147602}
|
evandrodiniz/autonlp-api-boamente-417310788
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:evandrodiniz/autonlp-data-api-boamente",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #unk #dataset-evandrodiniz/autonlp-data-api-boamente #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 417310788
- CO2 Emissions (in grams): 6.826886567147602
## Validation Metrics
- Loss: 0.20949310064315796
- Accuracy: 0.9578392621870883
- Precision: 0.9476190476190476
- Recall: 0.9045454545454545
- AUC: 0.9714032720526227
- F1: 0.9255813953488372
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 417310788\n- CO2 Emissions (in grams): 6.826886567147602",
"## Validation Metrics\n\n- Loss: 0.20949310064315796\n- Accuracy: 0.9578392621870883\n- Precision: 0.9476190476190476\n- Recall: 0.9045454545454545\n- AUC: 0.9714032720526227\n- F1: 0.9255813953488372",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #unk #dataset-evandrodiniz/autonlp-data-api-boamente #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 417310788\n- CO2 Emissions (in grams): 6.826886567147602",
"## Validation Metrics\n\n- Loss: 0.20949310064315796\n- Accuracy: 0.9578392621870883\n- Precision: 0.9476190476190476\n- Recall: 0.9045454545454545\n- AUC: 0.9714032720526227\n- F1: 0.9255813953488372",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
text-classification
|
transformers
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 417310793
- CO2 Emissions (in grams): 9.446754273734577
## Validation Metrics
- Loss: 0.25755178928375244
- Accuracy: 0.9407114624505929
- Precision: 0.8600823045267489
- Recall: 0.95
- AUC: 0.9732501264968797
- F1: 0.9028077753779697
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoNLP"}' https://api-inference.huggingface.co/models/evandrodiniz/autonlp-api-boamente-417310793
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("evandrodiniz/autonlp-api-boamente-417310793", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("evandrodiniz/autonlp-api-boamente-417310793", use_auth_token=True)
inputs = tokenizer("I love AutoNLP", return_tensors="pt")
outputs = model(**inputs)
```
|
{"language": "unk", "tags": "autonlp", "datasets": ["evandrodiniz/autonlp-data-api-boamente"], "widget": [{"text": "I love AutoNLP \ud83e\udd17"}], "co2_eq_emissions": 9.446754273734577}
|
evandrodiniz/autonlp-api-boamente-417310793
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"autonlp",
"unk",
"dataset:evandrodiniz/autonlp-data-api-boamente",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"unk"
] |
TAGS
#transformers #pytorch #bert #text-classification #autonlp #unk #dataset-evandrodiniz/autonlp-data-api-boamente #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us
|
# Model Trained Using AutoNLP
- Problem type: Binary Classification
- Model ID: 417310793
- CO2 Emissions (in grams): 9.446754273734577
## Validation Metrics
- Loss: 0.25755178928375244
- Accuracy: 0.9407114624505929
- Precision: 0.8600823045267489
- Recall: 0.95
- AUC: 0.9732501264968797
- F1: 0.9028077753779697
## Usage
You can use cURL to access this model:
Or Python API:
|
[
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 417310793\n- CO2 Emissions (in grams): 9.446754273734577",
"## Validation Metrics\n\n- Loss: 0.25755178928375244\n- Accuracy: 0.9407114624505929\n- Precision: 0.8600823045267489\n- Recall: 0.95\n- AUC: 0.9732501264968797\n- F1: 0.9028077753779697",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #autonlp #unk #dataset-evandrodiniz/autonlp-data-api-boamente #co2_eq_emissions #autotrain_compatible #endpoints_compatible #region-us \n",
"# Model Trained Using AutoNLP\n\n- Problem type: Binary Classification\n- Model ID: 417310793\n- CO2 Emissions (in grams): 9.446754273734577",
"## Validation Metrics\n\n- Loss: 0.25755178928375244\n- Accuracy: 0.9407114624505929\n- Precision: 0.8600823045267489\n- Recall: 0.95\n- AUC: 0.9732501264968797\n- F1: 0.9028077753779697",
"## Usage\n\nYou can use cURL to access this model:\n\n\n\nOr Python API:"
] |
token-classification
|
spacy
|
UD v2.5 benchmarking pipeline for UD_Afrikaans-AfriBooms
| Feature | Description |
| --- | --- |
| **Name** | `af_udv25_afrikaansafribooms_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
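A minimal usage sketch with spaCy, assuming the pipeline package from this repository has already been installed (e.g. via the wheel file listed in the repository's files); the example sentence is illustrative:
```python
import spacy

# Requires the installed pipeline package; install the wheel from this repository first.
nlp = spacy.load("af_udv25_afrikaansafribooms_trf")

doc = nlp("Die kat slaap op die mat.")
for token in doc:
    print(token.text, token.pos_, token.dep_, token.lemma_)
```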
### Label Scheme
<details>
<summary>View label scheme (455 labels for 6 components)</summary>
| Component | Labels |
| --- | --- |
| **`experimental_char_ner_tokenizer`** | `TOKEN` |
| **`senter`** | `I`, `S` |
| **`tagger`** | `AOA`, `AOP`, `ASA`, `ASP`, `AVA`, `AVP`, `BO`, `BS`, `BV`, `KN`, `KO`, `LB`, `LO`, `NA`, `NEE`, `NM`, `NME`, `NSE`, `NSED`, `NSM`, `PA`, `PB`, `PDHEB`, `PDHEDP`, `PDHENP`, `PDHEW`, `PDMB`, `PDMP`, `PDMW`, `PDOENP`, `PDOEW`, `PDVEB`, `PDVEDP`, `PDVENP`, `PDVEW`, `PEEB`, `PEEDP`, `PEENP`, `PEMB`, `PEMP`, `PEMW`, `PO`, `PTEB`, `PTEDP`, `PTENP`, `PTEW`, `PTMP`, `PV`, `PW`, `RA`, `RK`, `RL`, `RO`, `RS`, `RSF`, `RV`, `RWD`, `SVS`, `THAB`, `THAO`, `THBB`, `THBO`, `THNB`, `THPB`, `THPO`, `TRAB`, `TRAO`, `TRBB`, `UPB`, `UPD`, `UPI`, `UPO`, `UPS`, `UPV`, `UPW`, `UXD`, `VTHOG`, `VTHOK`, `VTHOO`, `VTHOV`, `VTHSG`, `VTHSO`, `VTUOA`, `VTUOM`, `VTUOP`, `VUOT`, `VVHOG`, `VVHOK`, `VVHOO`, `VVUOM`, `VVUOP`, `ZE`, `ZM`, `ZPL`, `ZPR` |
| **`morphologizer`** | `Definite=Def\|POS=DET\|PronType=Art`, `Number=Sing\|POS=NOUN`, `AdpType=Prep\|POS=ADP`, `AdjType=Attr\|Case=Nom\|Degree=Pos\|POS=ADJ`, `Number=Plur\|POS=NOUN`, `POS=AUX\|Tense=Pres\|VerbForm=Fin,Inf\|VerbType=Cop`, `Definite=Ind\|POS=DET\|PronType=Art`, `POS=NUM`, `POS=PART\|PartType=Inf`, `POS=VERB\|Subcat=Tran\|Tense=Pres\|VerbForm=Fin,Inf`, `POS=PRON\|PronType=Rel`, `POS=AUX\|Tense=Pres\|VerbForm=Fin,Inf\|VerbType=Pas`, `POS=PUNCT`, `POS=CCONJ`, `POS=SCONJ`, `POS=VERB\|Subcat=Intr\|Tense=Pres\|VerbForm=Fin,Inf`, `POS=VERB\|Subcat=Intr\|Tense=Past\|VerbForm=Part`, `POS=AUX\|Tense=Past\|VerbForm=Fin\|VerbType=Pas`, `Degree=Pos\|POS=ADV`, `POS=AUX\|Tense=Pres\|VerbForm=Fin,Inf\|VerbType=Mod`, `POS=DET\|PronType=Ind`, `POS=X`, `Number=Sing\|POS=PROPN`, `POS=PRON\|PronType=Ind`, `POS=PART\|PartType=Neg`, `POS=VERB\|Subcat=Tran\|Tense=Past\|VerbForm=Part`, `AdjType=Pred\|Case=Nom\|Degree=Pos\|POS=ADJ`, `POS=DET\|PronType=Dem`, `Degree=Cmp\|POS=ADV`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=SYM`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `POS=PART\|PartType=Gen`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Degree=Sup\|POS=ADV`, `Degree=Dim\|Number=Sing\|POS=NOUN`, `Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=PRON\|PronType=Int`, `Number=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `AdjType=Attr\|Case=Nom\|Degree=Sup\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `AdjType=Pred\|Case=Nom\|Degree=Cmp\|POS=ADJ`, `POS=VERB\|Subcat=Prep\|Tense=Pres\|VerbForm=Fin,Inf`, `POS=AUX\|Tense=Pres\|VerbForm=Fin,Inf\|VerbType=Aux`, `Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=PRON\|PronType=Rcp`, `POS=AUX\|Tense=Past\|VerbForm=Fin\|VerbType=Mod`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=AUX\|Tense=Past\|VerbForm=Fin\|VerbType=Cop`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `AdjType=Attr\|Case=Nom\|Degree=Cmp\|POS=ADJ`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `AdjType=Pred\|Case=Nom\|Degree=Sup\|POS=ADJ` |
| **`parser`** | `ROOT`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `punct`, `xcomp` |
| **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `4`, `7`, `8`, `10`, `12`, `14`, `16`, `18`, `21`, `24`, `26`, `28`, `31`, `32`, `34`, `37`, `39`, `40`, `42`, `44`, `46`, `47`, `49`, `51`, `53`, `54`, `56`, `57`, `58`, `59`, `61`, `64`, `66`, `68`, `69`, `72`, `74`, `75`, `77`, `78`, `81`, `83`, `84`, `85`, `86`, `87`, `90`, `92`, `94`, `96`, `99`, `101`, `103`, `105`, `108`, `110`, `113`, `116`, `117`, `118`, `121`, `123`, `124`, `125`, `127`, `128`, `129`, `133`, `136`, `138`, `141`, `143`, `145`, `147`, `151`, `153`, `154`, `156`, `158`, `159`, `160`, `162`, `164`, `165`, `167`, `168`, `170`, `172`, `174`, `176`, `178`, `179`, `180`, `181`, `183`, `185`, `189`, `190`, `191`, `192`, `194`, `195`, `197`, `198`, `201`, `202`, `203`, `204`, `206`, `207`, `209`, `213`, `214`, `216`, `217`, `218`, `220`, `221`, `222`, `223`, `225`, `226`, `228`, `229`, `231`, `233`, `234`, `236`, `238`, `240`, `241`, `244`, `247`, `248`, `249`, `250`, `252`, `253`, `255`, `256`, `257`, `258`, `261`, `262`, `263`, `265`, `267`, `269`, `270`, `271`, `273`, `275`, `276`, `278`, `279`, `281`, `283`, `285`, `287`, `289`, `291`, `294`, `296`, `297`, `298`, `299`, `300`, `301`, `302`, `303`, `305`, `306`, `307`, `309`, `310`, `311`, `313`, `314`, `315`, `317`, `320`, `321`, `323`, `325`, `326`, `327`, `328`, `329`, `330`, `332`, `333`, `335`, `336`, `337`, `338`, `339`, `340`, `341`, `343`, `344`, `347`, `348`, `349`, `351`, `353`, `355`, `357`, `359`, `360`, `361`, `362`, `365`, `366`, `367`, `369`, `371`, `373`, `374`, `375`, `377`, `379`, `381`, `383`, `386`, `388`, `390`, `392`, `393`, `395`, `397`, `398`, `400`, `401`, `402`, `403`, `405`, `406`, `408`, `409`, `411`, `412`, `414`, `417`, `215`, `418`, `419`, `420`, `421`, `422`, `424`, `425`, `426`, `427`, `429`, `431`, `432`, `433`, `434`, `436`, `438`, `439`, `440`, `442`, `443`, `444`, `447`, `449`, `450`, `452` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_F` | 99.92 |
| `TOKEN_P` | 99.89 |
| `TOKEN_R` | 99.94 |
| `TOKEN_ACC` | 100.00 |
| `SENTS_F` | 100.00 |
| `SENTS_P` | 100.00 |
| `SENTS_R` | 100.00 |
| `TAG_ACC` | 96.01 |
| `POS_ACC` | 98.52 |
| `MORPH_ACC` | 97.52 |
| `DEP_UAS` | 90.78 |
| `DEP_LAS` | 87.50 |
| `LEMMA_ACC` | 97.87 |
|
{"language": ["af"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
|
explosion/af_udv25_afrikaansafribooms_trf
| null |
[
"spacy",
"token-classification",
"af",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"af"
] |
TAGS
#spacy #token-classification #af #license-cc-by-sa-4.0 #model-index #region-us
|
UD v2.5 benchmarking pipeline for UD\_Afrikaans-AfriBooms
### Label Scheme
View label scheme (455 labels for 6 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (455 labels for 6 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #token-classification #af #license-cc-by-sa-4.0 #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (455 labels for 6 components)",
"### Accuracy"
] |
token-classification
|
spacy
|
UD v2.5 benchmarking pipeline for UD_Danish-DDT
| Feature | Description |
| --- | --- |
| **Name** | `da_udv25_danishddt_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (1316 labels for 6 components)</summary>
| Component | Labels |
| --- | --- |
| **`experimental_char_ner_tokenizer`** | `TOKEN` |
| **`senter`** | `I`, `S` |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` |
| **`morphologizer`** | `AdpType=Prep\|POS=ADP`, `Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=PROPN`, `Definite=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `POS=CCONJ`, `Definite=Ind\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=ADJ`, `POS=PRON\|PartType=Inf`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Definite=Ind\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Pos\|POS=ADV`, `Definite=Def\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=PRON\|PronType=Dem`, `NumType=Card\|POS=NUM`, `Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `NumType=Ord\|POS=ADJ`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|VerbForm=Inf\|Voice=Act`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `POS=ADP\|PartType=Inf`, `Degree=Pos\|POS=ADJ`, `Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Sing\|POS=NOUN`, `POS=AUX\|VerbForm=Inf\|Voice=Act`, `Definite=Ind\|Degree=Pos\|Gender=Com\|Number=Sing\|POS=ADJ`, `Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `POS=PART\|PartType=Inf`, `Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Com\|POS=PRON\|PronType=Ind`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Imp\|POS=VERB`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Definite=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=X`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Def\|Gender=Com\|Number=Plur\|POS=NOUN`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Sing\|POS=NOUN`, `Degree=Cmp\|POS=ADV`, `POS=ADV\|PartType=Inf`, `Degree=Sup\|POS=ADV`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|POS=PROPN`, `POS=ADP`, 
`Degree=Cmp\|Number=Plur\|POS=ADJ`, `Definite=Def\|Degree=Sup\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|Number=Sing\|POS=ADJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Gender=Com\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Gen\|Degree=Cmp\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=INTJ`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Acc\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Definite=Def\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Com\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `POS=SYM`, `Case=Nom\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Degree=Sup\|POS=ADJ`, `Number=Plur\|POS=DET\|PronType=Ind\|Style=Arch`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Dem`, `Foreign=Yes\|POS=X`, `POS=DET\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Gender=Neut\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|POS=PRON\|PronType=Int,Rel`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Dem`, `Abbr=Yes\|POS=X`, `Case=Gen\|Definite=Ind\|Gender=Com\|Number=Plur\|POS=NOUN`, `Definite=Def\|Degree=Abs\|POS=ADJ`, `Definite=Ind\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Definite=Ind\|POS=NOUN`, `Gender=Com\|Number=Plur\|POS=NOUN`, `Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Com\|POS=PRON\|PronType=Int,Rel`, `Case=Nom\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Degree=Abs\|POS=ADV`, `POS=VERB\|VerbForm=Ger`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Def\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs\|Style=Form`, `Case=Gen\|Definite=Def\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Com\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Gender=Com\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=VERB\|Tense=Pres`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Ind`, `Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=PRON\|Person=2\|Polite=Form\|Poss=Yes\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Pres\|VerbForm=Part`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, 
`Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Com\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Definite=Ind\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Mood=Imp\|POS=AUX`, `Gender=Com\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Number[psor]=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Definite=Def\|Gender=Com\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Com\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|POS=NOUN`, `Number[psor]=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=DET\|PronType=Dem`, `Definite=Def\|Number=Plur\|POS=NOUN` |
| **`parser`** | `ROOT`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `dep`, `det`, `discourse`, `expl`, `fixed`, `flat`, `goeswith`, `iobj`, `list`, `mark`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:loc`, `obl:tmod`, `punct`, `vocative`, `xcomp` |
| **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `4`, `7`, `9`, `11`, `13`, `15`, `17`, `19`, `21`, `23`, `27`, `31`, `33`, `35`, `37`, `39`, `42`, `44`, `45`, `5`, `47`, `49`, `51`, `53`, `55`, `57`, `59`, `63`, `67`, `69`, `73`, `75`, `77`, `79`, `81`, `83`, `85`, `87`, `89`, `91`, `93`, `95`, `97`, `101`, `103`, `104`, `106`, `109`, `113`, `115`, `116`, `117`, `118`, `119`, `122`, `124`, `127`, `130`, `133`, `134`, `135`, `138`, `140`, `141`, `144`, `146`, `148`, `149`, `151`, `153`, `154`, `156`, `157`, `158`, `159`, `160`, `164`, `166`, `169`, `172`, `175`, `177`, `179`, `181`, `183`, `185`, `188`, `6`, `190`, `192`, `195`, `197`, `199`, `201`, `203`, `205`, `207`, `209`, `212`, `214`, `216`, `217`, `220`, `221`, `222`, `224`, `227`, `228`, `229`, `230`, `232`, `234`, `236`, `238`, `239`, `241`, `243`, `244`, `247`, `248`, `249`, `250`, `252`, `253`, `254`, `255`, `257`, `258`, `262`, `264`, `270`, `274`, `277`, `278`, `280`, `282`, `284`, `286`, `289`, `290`, `292`, `293`, `294`, `295`, `296`, `297`, `298`, `301`, `302`, `304`, `305`, `306`, `308`, `310`, `312`, `314`, `315`, `317`, `319`, `323`, `324`, `326`, `328`, `330`, `332`, `334`, `336`, `339`, `341`, `342`, `344`, `345`, `346`, `348`, `350`, `353`, `356`, `357`, `359`, `362`, `363`, `365`, `366`, `368`, `369`, `370`, `372`, `374`, `375`, `376`, `378`, `380`, `381`, `385`, `387`, `388`, `392`, `394`, `398`, `401`, `402`, `403`, `405`, `406`, `407`, `408`, `409`, `410`, `411`, `414`, `415`, `416`, `419`, `422`, `423`, `426`, `430`, `431`, `432`, `433`, `436`, `437`, `438`, `439`, `440`, `441`, `442`, `443`, `445`, `446`, `448`, `449`, `450`, `451`, `452`, `453`, `456`, `457`, `460`, `462`, `468`, `469`, `471`, `472`, `473`, `474`, `476`, `478`, `480`, `481`, `484`, `485`, `486`, `488`, `489`, `491`, `492`, `493`, `494`, `495`, `496`, `498`, `500`, `502`, `505`, `507`, `508`, `510`, `511`, `512`, `514`, `515`, `517`, `519`, `521`, `522`, `524`, `525`, `528`, `530`, `532`, `533`, `535`, `536`, `537`, `539`, `542`, `543`, `546`, `547`, `550`, `551`, `553`, `554`, `556`, `557`, `558`, `561`, `562`, `563`, `564`, `567`, `569`, `570`, `573`, `575`, `576`, `577`, `578`, `579`, `580`, `582`, `583`, `584`, `585`, `587`, `588`, `590`, `591`, `593`, `597`, `598`, `600`, `601`, `602`, `603`, `605`, `606`, `607`, `608`, `609`, `610`, `612`, `614`, `617`, `618`, `621`, `623`, `625`, `626`, `627`, `628`, `629`, `630`, `631`, `633`, `634`, `635`, `636`, `638`, `639`, `640`, `641`, `642`, `643`, `645`, `646`, `647`, `649`, `650`, `651`, `653`, `656`, `657`, `659`, `660`, `661`, `662`, `664`, `665`, `667`, `670`, `671`, `672`, `674`, `675`, `676`, `677`, `678`, `679`, `680`, `681`, `683`, `685`, `686`, `688`, `689`, `690`, `691`, `692`, `693`, `694`, `696`, `697`, `698`, `699`, `701`, `702`, `703`, `704`, `705`, `706`, `707`, `709`, `711`, `714`, `715`, `717`, `720`, `721`, `722`, `723`, `725`, `728`, `730`, `731`, `732`, `734`, `736`, `738`, `740`, `742`, `746`, `747`, `748`, `750`, `752`, `753`, `754`, `758`, `759`, `763`, `764`, `766`, `768`, `769`, `773`, `775`, `776`, `778`, `779`, `780`, `781`, `782`, `785`, `788`, `789`, `790`, `791`, `795`, `796`, `797`, `798`, `800`, `801`, `803`, `805`, `806`, `807`, `808`, `810`, `812`, `813`, `815`, `816`, `818`, `821`, `822`, `823`, `825`, `827`, `830`, `832`, `836`, `837`, `838`, `840`, `841`, `844`, `846`, `848`, `850`, `851`, `852`, `854`, `856`, `858`, `860`, `861`, `863`, `864`, `865`, `866`, `867`, `868`, `870`, `872`, `873`, `874`, `875`, `880`, `882`, `884`, `885`, `886`, `887`, `889`, 
`891`, `892`, `893`, `894`, `895`, `896`, `898`, `902`, `903`, `905`, `907`, `908`, `909`, `911`, `912`, `913`, `914`, `915`, `917`, `918`, `919`, `920`, `922`, `923`, `924`, `926`, `927`, `928`, `929`, `931`, `934`, `935`, `936`, `938`, `939`, `940`, `941`, `942`, `944`, `945`, `947`, `949`, `951`, `952`, `954`, `955`, `956`, `958`, `960`, `961`, `962`, `969`, `970`, `974`, `975`, `977`, `978`, `979`, `980`, `981`, `983`, `984`, `987`, `988`, `989`, `993`, `995`, `998`, `1000`, `1001`, `1002`, `1004`, `1007`, `1011`, `1012`, `1014`, `1017`, `1018`, `1020`, `1021`, `1022`, `1023`, `1025`, `1026`, `1027`, `1029`, `1030`, `1031`, `1032`, `1033`, `1034`, `1036`, `1037`, `1038`, `1040`, `1042`, `1044`, `1045`, `1048`, `1050`, `1051`, `1053`, `1054`, `1056`, `1057`, `1058`, `1059`, `1060`, `1061`, `1062`, `1064`, `1066`, `1067`, `1069`, `1070`, `1072`, `1073`, `1076`, `1078`, `1080`, `1081`, `1085`, `1086`, `1087`, `1088`, `1089`, `1090`, `1092`, `1093`, `1094`, `1096`, `1097`, `1098`, `1100`, `1101`, `1102`, `1106`, `1109`, `1110`, `1111`, `1113`, `1114`, `1116`, `1117`, `1119`, `1120`, `1122`, `1123`, `1125`, `1127`, `1128`, `1131`, `1132`, `1133`, `1134`, `1135`, `1136`, `1137`, `1138`, `1141`, `831`, `1142`, `1143`, `1144`, `1146`, `1148`, `1150`, `1152`, `1153`, `1155`, `1157`, `1158`, `1160`, `1161`, `1162`, `1163`, `1168`, `1170`, `1171`, `1174`, `1175`, `1176`, `1178`, `1181`, `1182`, `1183`, `1185`, `1186`, `1189`, `1191`, `1192`, `1193`, `1194`, `1195`, `1196`, `1198`, `1199`, `1201`, `1203`, `1204`, `1205`, `1206`, `1207`, `1208`, `1209`, `1210`, `1211`, `1212`, `1213`, `1214`, `1215`, `1218`, `1219`, `1220`, `1222`, `1223`, `1224`, `1225`, `1226`, `1227`, `1229`, `1231`, `1232`, `1235`, `1236`, `1238`, `1239`, `1242`, `1244`, `1247`, `1248`, `1249`, `1250`, `1251`, `1253`, `1255`, `1257`, `1258`, `1259`, `1261`, `1263`, `1265`, `1266`, `1267`, `1269`, `1271`, `1272`, `1273`, `1274`, `1276`, `1277`, `1278`, `1280`, `1281`, `1282`, `1283`, `1285`, `1286`, `1287`, `1288`, `1289`, `1291`, `1293`, `1294`, `1295`, `1297`, `1298`, `1299`, `1300`, `1303`, `1305`, `1307`, `1309`, `1310`, `1311`, `1312`, `1315`, `1316`, `1318`, `1321`, `1322`, `1323`, `1324`, `1325`, `1326`, `1327`, `1329`, `1330`, `1331`, `1332`, `1333`, `1334`, `1335`, `1336`, `1337`, `1338`, `1339`, `1341`, `1342`, `1343`, `1344`, `1345`, `1346`, `1347`, `1348`, `1349`, `1351`, `1352`, `1353`, `1354`, `1355`, `1357`, `1358`, `1359`, `1360`, `1362`, `1364`, `1365`, `1367`, `1368`, `1369`, `1370`, `1371`, `1372`, `1374`, `1376`, `1377`, `1379`, `1380`, `1382`, `1383`, `1384`, `1386`, `1387`, `1389`, `1390`, `1391`, `1392`, `1394`, `1396`, `1398`, `1399`, `1400`, `1401`, `1403`, `1404`, `1405`, `1406`, `1407`, `1408`, `1409`, `1410`, `1147`, `1411`, `1413`, `1414`, `1415`, `1418`, `1420`, `1421`, `1422`, `1423`, `1426`, `1427`, `1428`, `1430`, `1431`, `1433`, `1438`, `1439`, `1440`, `1441`, `1442`, `1444`, `1446`, `1448`, `1449`, `1453`, `1454`, `1456`, `1457`, `1459`, `1463`, `1465`, `1466`, `1468`, `1469`, `1470`, `1472`, `1476`, `1478`, `1479`, `1480`, `1481`, `1482`, `1483`, `1485`, `1486`, `1487`, `1488`, `1490`, `1491`, `1493`, `1494`, `1496`, `1498`, `1500`, `1502`, `1503`, `1504`, `1505`, `1506`, `1508`, `1509`, `1511`, `1512`, `1513`, `1514`, `1516`, `1518`, `1519`, `1521`, `1522`, `1524`, `1525`, `1527`, `1533`, `1534`, `1535`, `1536`, `1538`, `1540`, `1541`, `1544`, `1545`, `1547`, `1548`, `1549`, `1550`, `1551`, `1552`, `1556`, `1557`, `1559`, `1560`, `1561`, `1562`, `1563`, `1564`, `1568`, `1569`, `1571`, `1572`, 
`1574`, `1577`, `1578`, `1579`, `1580`, `1581`, `1583`, `1585`, `1586`, `1587`, `1588`, `1589`, `1590`, `1591`, `1594`, `1595`, `1596`, `1597`, `1598`, `1599`, `1602`, `1603`, `1605`, `1606`, `1608`, `1610`, `1612`, `1613`, `1614`, `1616`, `1618`, `1619`, `1620`, `1621`, `1622`, `1623`, `1626`, `1627`, `1629`, `1630`, `1631`, `1632`, `1634`, `1636`, `1637`, `1638`, `1639`, `1640`, `1641`, `1642`, `1644`, `1645`, `1647`, `1649`, `1651`, `1653`, `1656`, `1657`, `1658`, `1659`, `1660`, `1661`, `1663`, `1665`, `1666`, `1667`, `1668`, `1670`, `1673`, `1674`, `1676`, `1677`, `1678`, `1679`, `1680`, `1681`, `1684`, `1685`, `1687`, `1688`, `1689`, `1690`, `1692`, `1693`, `1643`, `1694`, `1695`, `1696`, `1697`, `1699`, `1701`, `1702`, `1704`, `1706`, `1708`, `1710`, `1711`, `1712`, `1714`, `1715`, `1717`, `1719`, `1720`, `1721`, `1722`, `1723`, `1724`, `1725`, `1726`, `1727`, `1728`, `1729`, `1730`, `1732`, `1734`, `1735`, `1737`, `1739`, `1741`, `1742`, `1743`, `1745`, `1747`, `1749`, `1750`, `1751`, `1753`, `1754`, `1756`, `1758`, `1759`, `1760`, `1761`, `1762`, `1764`, `1766`, `1768`, `1769`, `1770`, `1771`, `1772`, `1773`, `1774` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_F` | 99.96 |
| `TOKEN_P` | 99.95 |
| `TOKEN_R` | 99.96 |
| `TOKEN_ACC` | 100.00 |
| `SENTS_F` | 96.89 |
| `SENTS_P` | 97.15 |
| `SENTS_R` | 96.63 |
| `TAG_ACC` | 98.49 |
| `POS_ACC` | 98.48 |
| `MORPH_ACC` | 98.20 |
| `DEP_UAS` | 89.67 |
| `DEP_LAS` | 87.29 |
| `LEMMA_ACC` | 97.55 |
|
{"language": ["da"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
|
explosion/da_udv25_danishddt_trf
| null |
[
"spacy",
"token-classification",
"da",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"da"
] |
TAGS
#spacy #token-classification #da #license-cc-by-sa-4.0 #model-index #region-us
|
UD v2.5 benchmarking pipeline for UD\_Danish-DDT
### Label Scheme
View label scheme (1316 labels for 6 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (1316 labels for 6 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #token-classification #da #license-cc-by-sa-4.0 #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (1316 labels for 6 components)",
"### Accuracy"
] |
token-classification
|
spacy
|
UD v2.5 benchmarking pipeline for UD_German-HDT
| Feature | Description |
| --- | --- |
| **Name** | `de_udv25_germanhdt_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (62832 labels for 6 components)</summary>
| Component | Labels |
| --- | --- |
| **`experimental_char_ner_tokenizer`** | `TOKEN` |
| **`senter`** | `I`, `S` |
| **`tagger`** | `$(`, `$,`, `$.`, `ADJA`, `ADJD`, `ADV`, `APPO`, `APPR`, `APPRART`, `APZR`, `ART`, `CARD`, `FM`, `ITJ`, `KOKOM`, `KON`, `KOUI`, `KOUS`, `NE`, `NN`, `PDAT`, `PDS`, `PIAT`, `PIDAT`, `PIS`, `PPER`, `PPOSAT`, `PPOSS`, `PRELAT`, `PRELS`, `PRF`, `PROAV`, `PTKA`, `PTKANT`, `PTKNEG`, `PTKVZ`, `PTKZU`, `PWAT`, `PWAV`, `PWS`, `TRUNC`, `VAFIN`, `VAIMP`, `VAINF`, `VAPP`, `VMFIN`, `VMINF`, `VMPP`, `VVFIN`, `VVIMP`, `VVINF`, `VVIZU`, `VVPP`, `XY` |
| **`morphologizer`** | `AdpType=Prep\|Case=Dat\|POS=ADP`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=NOUN\|Person=3`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|Number=Sing\|POS=PROPN\|Person=3`, `Foreign=Yes\|POS=X\|Person=3`, `POS=PUNCT\|PunctType=Comm`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Gen\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=NOUN\|Person=3`, `Gender=Neut\|Number=Sing\|POS=NOUN\|Person=3`, `AdpType=Prep\|POS=ADP`, `Gender=Neut\|Number=Plur\|POS=NOUN\|Person=3`, `POS=CCONJ`, `POS=PUNCT\|PunctType=Peri`, `NumType=Card\|Number=Plur\|POS=NUM\|Person=3`, `Gender=Fem\|Number=Plur\|POS=NOUN\|Person=3`, `AdpType=Prep\|Case=Dat\|POS=ADP\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN\|Person=3`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `POS=PUNCT\|PunctType=Brck`, `POS=PROPN\|Person=3`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `POS=ADV`, `POS=SCONJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=VERB\|VerbForm=Inf`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN\|Person=3`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Number=Sing\|POS=PROPN\|Person=3`, `Degree=Cmp\|POS=ADJ\|Variant=Short`, `POS=ADP\|PartType=Vbp`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `AdpType=Prep\|Case=Acc\|POS=ADP`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PART\|Polarity=Neg`, `Degree=Cmp\|POS=ADV`, `ConjType=Comp\|POS=CCONJ`, `Degree=Pos\|POS=ADJ\|Variant=Short`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Number=Sing\|POS=PROPN\|Person=3`, `Case=Nom\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Acc\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Aspect=Perf\|POS=VERB\|VerbForm=Part`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Acc\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Acc\|Number=Plur\|POS=DET\|Person=3`, `Degree=Sup\|POS=ADJ\|Variant=Short`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Hyph=Yes\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, 
`Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN\|Person=3`, `POS=PART\|PartType=Inf`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=NOUN\|Person=3`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `POS=AUX\|VerbForm=Inf`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=AUX\|VerbForm=Inf\|VerbType=Mod`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `AdpType=Prep\|Case=Dat\|Gender=Fem\|POS=ADP\|PronType=Art`, `Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN\|Person=3`, `POS=ADJ`, `Degree=Cmp\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Art`, `POS=ADV\|PronType=Int`, `Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=DET\|Person=3`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|VerbType=Mod`, `Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Gen\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `Number=Plur\|POS=NOUN\|Person=3`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|VerbType=Mod`, `Gender=Fem\|Number=Sing\|POS=PROPN\|Person=3`, `Degree=Pos\|POS=ADV`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Degree=Cmp\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `AdpType=Prep\|Case=Gen\|POS=ADP`, `Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `AdpType=Post\|Case=Dat\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3`, 
`Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Aspect=Perf\|POS=AUX\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Number=Sing\|POS=NOUN\|Person=3`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Degree=Pos\|Number=Plur\|POS=NOUN\|Person=3`, `Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Degree=Sup\|POS=ADV`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Degree=Sup\|Number=Plur\|POS=ADJ\|Person=3`, `Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `NumType=Card\|Number=Sing\|POS=NUM\|Person=3`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=DET\|Person=3`, `Case=Dat\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Number=Plur\|POS=PROPN\|Person=3`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=ADJ\|Person=3`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `AdpType=Prep\|Case=Acc\|Gender=Neut\|POS=ADP\|PronType=Art`, `Case=Gen\|Number=Sing\|POS=PROPN\|Person=3`, `Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Number=Plur\|POS=ADJ\|Person=3`, `POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN\|Person=3`, 
`Case=Nom\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Dat\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=3`, `POS=ADJ\|Person=3`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Dem`, `AdpType=Circ\|POS=ADP`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `AdpType=Prep\|Case=Nom\|POS=ADP`, `Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=DET\|PronType=Rel`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3`, `Case=Dat\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, 
`Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Gen\|Number=Plur\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Foreign=Yes\|POS=X`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Dat\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Gen\|POS=PROPN\|Person=3`, `Case=Dat\|Number=Plur\|POS=DET\|Person=3`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=ADJ\|Person=3`, `POS=DET`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=X`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=ADJ`, `AdpType=Post\|Case=Acc\|POS=ADP`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Gen\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Degree=Sup\|Number=Plur\|POS=ADJ`, `POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Dat\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Acc\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person=3`, `NumType=Card\|POS=NUM`, 
`Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Gen\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=ADJ\|Person=3`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NOUN\|Person=3`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ\|Person=3`, `Degree=Pos\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Acc\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=3`, `Degree=Sup\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Plur\|POS=DET\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Acc\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN\|Person=3`, 
`Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NOUN\|Person=3`, `POS=ADJ\|Variant=Short`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Foreign=Yes\|Number=Sing\|POS=X`, `Case=Nom\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Aspect=Perf\|POS=AUX\|VerbForm=Part\|VerbType=Mod`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Gender=Masc\|POS=NOUN\|Person=3`, `Case=Acc\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Sing\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Gen\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Nom\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `POS=INTJ\|PartType=Res`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Foreign=Yes\|Gender=Neut\|Number=Sing\|POS=X\|Person=3`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=DET\|PronType=Int`, `Case=Acc\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Degree=Pos\|Gender=Neut\|Number=Sing\|POS=NOUN\|Person=3`, `Gender=Neut\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Nom\|POS=NOUN\|Person=3`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Dat\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Rel`, `Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, 
`Case=Acc\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM\|Person=3`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN\|Person=3`, `POS=PROPN`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|VerbType=Mod`, `Case=Acc\|POS=NOUN\|Person=3`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Acc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Nom\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN\|Person=3`, `Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Gen\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Number=Plur\|POS=DET\|Person=3`, `Case=Nom\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Hyph=Yes\|Number=Plur\|POS=NOUN\|Person=3`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|POS=PROPN\|Person=3`, `Case=Gen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Pos\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Degree=Pos\|Number=Sing\|POS=ADJ\|Person=3`, `POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Acc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Gen\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Dat\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|POS=PRON\|PronType=Ind,Neg,Tot`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Case=Nom\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|VerbType=Mod`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN\|Person=3`, `Case=Dat\|Degree=Sup\|Number=Plur\|POS=ADJ`, `POS=PRON\|PronType=Int`, `Degree=Pos\|Number=Plur\|POS=ADJ\|Person=3`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Hyph=Yes\|POS=NOUN\|Person=3`, `Degree=Pos\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Dat\|NumType=Card\|Number=Plur\|POS=NUM\|Person=3`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `POS=INTJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Dat\|Degree=Pos\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|POS=SCONJ`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Number=Sing\|POS=DET\|Person=3\|PronType=Art`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN\|Person=3`, `AdpType=Post\|Case=Gen\|POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Int`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=ADV`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Case=Acc\|POS=PROPN\|Person=3`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=DET\|PronType=Ind,Neg,Tot`, `Degree=Pos\|POS=ADJ\|Person=3`, 
`Case=Acc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|POS=PROPN\|Person=3`, `Case=Nom\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN\|Person=3`, `AdpType=Prep\|Case=Acc\|Gender=Fem\|POS=ADP\|PronType=Art`, `Degree=Pos\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Case=Nom\|POS=PRON\|PronType=Rel`, `Case=Acc\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN\|Person=3`, `AdpType=Prep\|Case=Dat\|Gender=Neut\|POS=ADP\|PronType=Art`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Int`, `Case=Dat\|POS=NOUN\|Person=3`, `Degree=Pos\|POS=VERB\|VerbForm=Inf`, `Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Gender=Masc\|Number=Sing\|POS=ADJ\|Person=3\|Variant=Short`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN\|Person=3`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Neut\|Number=Sing\|POS=SCONJ\|Person=3`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|VerbForm=Fin`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Mood=Ind\|POS=VERB\|Person=1\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=NOUN\|Person=3`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `Number=Sing\|POS=DET\|PronType=Art`, `Case=Nom\|POS=DET\|PronType=Art`, `Degree=Pos\|Number=Plur\|POS=PRON\|Person=3\|PronType=Ind,Neg,Tot`, `AdpType=Prep\|POS=ADP\|PronType=Art`, `Number=Sing\|POS=PRON\|PronType=Ind,Neg,Tot`, `Degree=Sup\|Number=Plur\|POS=DET\|Person=3`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|PronType=Ind,Neg,Tot`, `Number=Sing\|POS=DET`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin\|VerbType=Mod`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|VerbType=Mod`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, 
`Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=ADJ\|Person=3`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=NOUN\|Person=3`, `AdpType=Prep\|Case=Dat\|Gender=Masc\|POS=ADP\|PronType=Art`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Person=3\|PronType=Dem`, `Degree=Pos\|Gender=Neut\|POS=ADJ`, `Gender=Fem\|POS=ADJ`, `Degree=Pos\|Gender=Fem\|POS=ADJ`, `Gender=Masc\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin\|VerbType=Mod`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|VerbType=Mod`, `POS=DET\|Person=3`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin\|VerbType=Mod`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound:prt`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `expl`, `expl:pv`, `flat`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `parataxis`, `punct`, `reparandum`, `vocative`, `xcomp` |
| **`experimental_edit_tree_lemmatizer`** | -- |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_F` | 100.00 |
| `TOKEN_P` | 100.00 |
| `TOKEN_R` | 100.00 |
| `TOKEN_ACC` | 100.00 |
| `SENTS_F` | 99.75 |
| `SENTS_P` | 99.74 |
| `SENTS_R` | 99.76 |
| `TAG_ACC` | 97.84 |
| `POS_ACC` | 97.82 |
| `MORPH_ACC` | 78.11 |
| `DEP_UAS` | 97.28 |
| `DEP_LAS` | 95.88 |
| `LEMMA_ACC` | 92.04 |
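The components whose label schemes and scores are listed above can also be inspected at runtime. A minimal sketch, assuming the packaged `de_udv25_germanhdt_trf` model and `spacy-experimental` (which provides the `experimental_*` components) are installed in the current environment:
```python
import spacy

# Assumes the packaged pipeline and spacy-experimental are installed.
nlp = spacy.load("de_udv25_germanhdt_trf")

# Inspect the label inventories shown in the tables above.
morphologizer = nlp.get_pipe("morphologizer")
parser = nlp.get_pipe("parser")
print(len(morphologizer.labels), "morphological analyses")
print(sorted(parser.labels))

# Per-token annotations from the tagger, morphologizer, parser and lemmatizer.
doc = nlp("Die Universität Hamburg wurde 1919 gegründet.")
for token in doc:
    print(token.text, token.pos_, token.morph, token.dep_, token.lemma_)
```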
|
{"language": ["de"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
|
explosion/de_udv25_germanhdt_trf
| null |
[
"spacy",
"token-classification",
"de",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"de"
] |
TAGS
#spacy #token-classification #de #license-cc-by-sa-4.0 #model-index #region-us
|
UD v2.5 benchmarking pipeline for UD\_German-HDT
### Label Scheme
View label scheme (62832 labels for 6 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (62832 labels for 6 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #token-classification #de #license-cc-by-sa-4.0 #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (62832 labels for 6 components)",
"### Accuracy"
] |
text-classification
|
spacy
|
# Welcome to Healthsea ✨
Create better access to health with machine learning and natural language processing. This is the trained healthsea pipeline for analyzing user reviews of supplements by extracting their effects on health. This pipeline features a trained NER model and a custom Text Classification model with Clause Segmentation and Blinding capabilities.
> Read more in the [blog post](https://explosion.ai/blog/healthsea) and visit the [healthsea repository](https://github.com/explosion/healthsea) for all training workflows, custom components and training data.
| Feature | Description |
| --- | --- |
| **Name** | `en_healthsea` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.2.0,<3.3.0` |
| **Default Pipeline** | `sentencizer`, `tok2vec`, `ner`, `benepar`, `segmentation`, `clausecat`, `aggregation` |
| **Components** | `sentencizer`, `tok2vec`, `ner`, `benepar`, `segmentation`, `clausecat`, `aggregation` |
| **Vectors** | 684830 keys, 684830 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | MIT |
| **Author** | [Explosion](https://explosion.ai) |
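A minimal usage sketch, assuming the `en_healthsea` package (and its dependencies such as benepar) has been installed in the current environment; the example review is made up, and the custom `segmentation`, `clausecat` and `aggregation` components attach their aggregated results through `Doc` extensions documented in the healthsea repository:
```python
import spacy

# Assumes the packaged pipeline is installed locally.
nlp = spacy.load("en_healthsea")

# A made-up supplement review, used purely for illustration.
doc = nlp("This magnesium supplement really helped with my insomnia.")

# Health-related entities found by the NER component (labels: BENEFIT, CONDITION).
for ent in doc.ents:
    print(ent.text, ent.label_)
```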
### Label Scheme
<details>
<summary>View label scheme (6 labels for 2 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `BENEFIT`, `CONDITION` |
| **`clausecat`** | `POSITIVE`, `NEUTRAL`, `NEGATIVE`, `ANAMNESIS` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 80.34 |
| `ENTS_P` | 80.77 |
| `ENTS_R` | 79.92 |
| `CATS_SCORE` | 74.87 |
| `CATS_MICRO_P` | 82.17 |
| `CATS_MICRO_R` | 80.85 |
| `CATS_MICRO_F` | 81.51 |
| `CATS_MACRO_P` | 78.01 |
| `CATS_MACRO_R` | 72.41 |
| `CATS_MACRO_F` | 74.87 |
| `CATS_MACRO_AUC` | 92.76 |
| `CATS_LOSS` | 297.22 |
|
{"language": ["en"], "tags": ["spacy", "token-classification", "text-classification"]}
|
explosion/en_healthsea
| null |
[
"spacy",
"token-classification",
"text-classification",
"en",
"model-index",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#spacy #token-classification #text-classification #en #model-index #has_space #region-us
|
Welcome to Healthsea
====================
Create better access to health with machine learning and natural language processing. This is the trained healthsea pipeline for analyzing user reviews of supplements by extracting their effects on health. This pipeline features a trained NER model and a custom Text Classification model with Clause Segmentation and Blinding capabilities.
>
> Read more in the blog post and visit the healthsea repository for all training workflows, custom components and training data.
>
>
>
### Label Scheme
View label scheme (6 labels for 2 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (6 labels for 2 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #token-classification #text-classification #en #model-index #has_space #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (6 labels for 2 components)",
"### Accuracy"
] |
text-classification
|
spacy
|
# 🪐 spaCy Project: Categorization of emotions in Reddit posts (Text Classification)
This project uses spaCy to train a text classifier on the [GoEmotions dataset](https://github.com/google-research/google-research/tree/master/goemotions).
| Feature | Description |
| --- | --- |
| **Name** | `en_textcat_goemotions` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.1.1,<3.2.0` |
| **Default Pipeline** | `transformer`, `textcat_multilabel` |
| **Components** | `transformer`, `textcat_multilabel` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [GoEmotions dataset](https://github.com/google-research/google-research/tree/master/goemotions) |
| **License** | `MIT` |
| **Author** | [Explosion](https://explosion.ai) |
> The dataset that this model is trained on has known flaws described [here](https://github.com/google-research/google-research/tree/master/goemotions#disclaimer) as well as label errors resulting from [annotator disagreement](https://www.youtube.com/watch?v=khZ5-AN-n2Y). Anyone using this model should be aware of these limitations of the dataset.
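A minimal usage sketch, assuming the `en_textcat_goemotions` package is installed locally; the `textcat_multilabel` component writes one independent score per emotion label to `doc.cats`:
```python
import spacy

# Assumes the packaged pipeline is installed in the current environment.
nlp = spacy.load("en_textcat_goemotions")

doc = nlp("I can't believe how well this turned out, thank you so much!")

# Print the five highest-scoring emotion labels.
for label, score in sorted(doc.cats.items(), key=lambda kv: kv[1], reverse=True)[:5]:
    print(f"{label}: {score:.3f}")
```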
### Label Scheme
<details>
<summary>View label scheme (28 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`textcat_multilabel`** | `admiration`, `amusement`, `anger`, `annoyance`, `approval`, `caring`, `confusion`, `curiosity`, `desire`, `disappointment`, `disapproval`, `disgust`, `embarrassment`, `excitement`, `fear`, `gratitude`, `grief`, `joy`, `love`, `nervousness`, `optimism`, `pride`, `realization`, `relief`, `remorse`, `sadness`, `surprise`, `neutral` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 90.22 |
| `CATS_MICRO_P` | 66.67 |
| `CATS_MICRO_R` | 47.81 |
| `CATS_MICRO_F` | 55.68 |
| `CATS_MACRO_P` | 55.00 |
| `CATS_MACRO_R` | 41.93 |
| `CATS_MACRO_F` | 46.29 |
| `CATS_MACRO_AUC` | 90.22 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TRANSFORMER_LOSS` | 83.51 |
| `TEXTCAT_MULTILABEL_LOSS` | 4549.84 |
|
{"language": ["en"], "license": "mit", "tags": ["spacy", "text-classification"], "model-index": [{"name": "en_textcat_goemotions", "results": []}]}
|
explosion/en_textcat_goemotions
| null |
[
"spacy",
"text-classification",
"en",
"license:mit",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#spacy #text-classification #en #license-mit #region-us
|
spaCy Project: Categorization of emotions in Reddit posts (Text Classification) This project uses spaCy to train a text classifier on the GoEmotions dataset
============================================================================================================================================================
>
> The dataset that this model is trained on has known flaws described here as well as label errors resulting from annotator disagreement. Anyone using this model should be aware of these limitations of the dataset.
>
>
>
### Label Scheme
View label scheme (28 labels for 1 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (28 labels for 1 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #text-classification #en #license-mit #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (28 labels for 1 components)",
"### Accuracy"
] |
token-classification
|
spacy
|
UD v2.5 benchmarking pipeline for UD_English-EWT
| Feature | Description |
| --- | --- |
| **Name** | `en_udv25_englishewt_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
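A minimal sketch of running the pipeline, assuming the packaged model and `spacy-experimental` (which provides the `experimental_char_ner_tokenizer` and `experimental_edit_tree_lemmatizer` components) are installed:
```python
import spacy

# Assumes the packaged pipeline and spacy-experimental are installed.
nlp = spacy.load("en_udv25_englishewt_trf")

doc = nlp("The quick brown fox jumps over the lazy dog.")

# Tagger, morphologizer, parser and lemmatizer annotations per token.
for token in doc:
    print(token.text, token.tag_, token.pos_, token.morph, token.dep_, token.lemma_)
```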
### Label Scheme
<details>
<summary>View label scheme (1760 labels for 6 components)</summary>
| Component | Labels |
| --- | --- |
| **`experimental_char_ner_tokenizer`** | `TOKEN` |
| **`senter`** | `I`, `S` |
| **`tagger`** | `$`, `''`, `,`, `-LRB-`, `-RRB-`, `.`, `:`, `ADD`, `AFX`, `CC`, `CD`, `DT`, `EX`, `FW`, `GW`, `HYPH`, `IN`, `JJ`, `JJR`, `JJS`, `LS`, `MD`, `NFP`, `NN`, `NNP`, `NNPS`, `NNS`, `PDT`, `POS`, `PRP`, `PRP$`, `RB`, `RBR`, `RBS`, `RP`, `SYM`, `TO`, `UH`, `VB`, `VBD`, `VBG`, `VBN`, `VBP`, `VBZ`, `WDT`, `WP`, `WP$`, `WRB`, `XX`, ```` |
| **`morphologizer`** | `Number=Sing\|POS=PROPN`, `POS=PUNCT`, `Degree=Pos\|POS=ADJ`, `Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Definite=Def\|POS=DET\|PronType=Art`, `Number=Sing\|POS=NOUN`, `POS=ADP`, `Number=Sing\|POS=DET\|PronType=Dem`, `Definite=Ind\|POS=DET\|PronType=Art`, `POS=AUX\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `POS=VERB\|VerbForm=Ger`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `POS=PART`, `POS=VERB\|VerbForm=Inf`, `POS=SCONJ`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `NumType=Card\|POS=NUM`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=AUX\|VerbForm=Ger`, `POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=ADV`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Plur\|POS=PROPN`, `Degree=Pos\|NumType=Ord\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=CCONJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `POS=PRON\|PronType=Rel`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PRON`, `Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=DET`, `Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Degree=Pos\|POS=ADV`, `Degree=Cmp\|POS=ADV`, `Number=Sing\|POS=PRON`, `Degree=Cmp\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=ADV\|PronType=Dem`, `POS=ADV\|PronType=Int`, `Number=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Imp\|POS=VERB\|VerbForm=Fin`, `Degree=Sup\|POS=ADJ`, `POS=PRON\|PronType=Int`, `NumType=Mult\|POS=ADV`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `POS=DET\|PronType=Int`, `POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Number=Plur\|POS=DET\|PronType=Dem`, `POS=PRON\|Poss=Yes\|PronType=Int`, `Case=Acc\|POS=PRON\|Person=2\|PronType=Prs`, `POS=X`, `POS=PRON\|PronType=Dem`, `Number=Sing\|POS=PROPN\|Typo=Yes`, `POS=ADV\|PronType=Rel`, `Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Degree=Sup\|POS=ADV`, `POS=INTJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Foreign=Yes\|POS=X`, `POS=SYM`, `Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Imp\|POS=AUX\|VerbForm=Fin`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, 
`Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Abbr=Yes\|POS=CCONJ`, `POS=SCONJ\|Typo=Yes`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=SYM`, `POS=DET\|Typo=Yes`, `Degree=Pos\|POS=PROPN`, `Abbr=Yes\|POS=ADP`, `POS=ADP\|Typo=Yes`, `Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs\|Reflex=Yes`, `POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs\|Typo=Yes`, `Abbr=Yes\|POS=VERB\|Tense=Pres\|VerbForm=Part`, `Abbr=Yes\|POS=PART`, `POS=AUX\|Typo=Yes\|VerbForm=Fin`, `Degree=Pos\|POS=ADJ\|Typo=Yes`, `POS=VERB\|Tense=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Number=Sing\|POS=NOUN\|Typo=Yes`, `Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Abbr=Yes\|Number=Sing\|POS=NOUN`, `Degree=Pos\|POS=NOUN`, `POS=CCONJ\|Typo=Yes`, `Number=Sing\|POS=X`, `Abbr=Yes\|POS=SCONJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|POS=AUX\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `POS=ADV\|Typo=Yes`, `Mood=Ind\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Number=Sing\|POS=NUM`, `POS=PRON\|Poss=Yes\|PronType=Rel`, `Abbr=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Abbr=Yes\|POS=INTJ`, `Abbr=Yes\|POS=VERB\|VerbForm=Inf`, `Abbr=Yes\|Number=Sing\|POS=PRON`, `Abbr=Yes\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Prs`, `Abbr=Yes\|POS=PRON\|PronType=Int`, `Abbr=Yes\|POS=AUX\|VerbForm=Fin`, `Abbr=Yes\|POS=ADV`, `Abbr=Yes\|Number=Plur\|POS=NOUN`, `Abbr=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `POS=ADJ`, `Number=Plur\|POS=NOUN\|Typo=Yes`, `POS=DET\|PronType=Rel\|Typo=Yes`, `POS=PART\|Typo=Yes`, `Abbr=Yes\|POS=DET`, `POS=DET\|PronType=Dem`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Typo=Yes`, `Degree=Pos\|NumType=Ord\|POS=ADV`, `POS=NOUN`, `Number=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs\|Typo=Yes`, `POS=PRON\|Typo=Yes`, `Number=Plur\|POS=VERB`, `POS=VERB\|Typo=Yes\|VerbForm=Inf`, `Mood=Ind\|POS=VERB\|Tense=Past\|Typo=Yes\|VerbForm=Fin`, `Mood=Imp\|POS=AUX\|VerbForm=Inf`, `Abbr=Yes\|Mood=Imp\|POS=VERB\|VerbForm=Fin`, `Abbr=Yes\|Case=Nom\|POS=PRON\|Person=2\|PronType=Prs`, `POS=VERB\|Tense=Past\|Typo=Yes\|VerbForm=Part`, `Mood=Ind\|POS=AUX\|Tense=Past\|Typo=Yes\|VerbForm=Fin`, `Mood=Ind\|POS=VERB\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `POS=VERB\|Typo=Yes\|VerbForm=Ger`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|Typo=Yes\|VerbForm=Fin`, `Abbr=Yes\|POS=PRON`, `Abbr=Yes\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Typo=Yes`, `Abbr=Yes\|Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `cc:preconj`, `ccomp`, `compound`, `compound:prt`, `conj`, `cop`, `csubj`, `dep`, `det`, `det:predet`, `discourse`, `expl`, `fixed`, `flat`, `flat:foreign`, `goeswith`, `iobj`, `list`, `mark`, `nmod`, `nmod:npmod`, `nmod:poss`, `nmod:tmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `obl:npmod`, `obl:tmod`, `orphan`, `parataxis`, `punct`, `reparandum`, `vocative`, `xcomp` |
| **`experimental_edit_tree_lemmatizer`** | `0`, `2`, `4`, `6`, `8`, `10`, `12`, `13`, `15`, `17`, `19`, `21`, `23`, `26`, `28`, `29`, `30`, `32`, `34`, `36`, `39`, `42`, `43`, `45`, `47`, `49`, `51`, `53`, `55`, `57`, `59`, `61`, `62`, `64`, `67`, `69`, `71`, `73`, `75`, `77`, `79`, `81`, `83`, `85`, `87`, `1`, `89`, `90`, `92`, `94`, `95`, `97`, `99`, `101`, `105`, `106`, `108`, `110`, `111`, `112`, `113`, `115`, `117`, `119`, `121`, `122`, `124`, `125`, `126`, `127`, `128`, `129`, `130`, `132`, `133`, `136`, `137`, `138`, `139`, `142`, `143`, `145`, `150`, `153`, `156`, `157`, `159`, `162`, `163`, `164`, `167`, `169`, `171`, `174`, `176`, `177`, `179`, `182`, `184`, `187`, `189`, `191`, `193`, `194`, `197`, `198`, `201`, `203`, `204`, `208`, `210`, `211`, `213`, `214`, `215`, `217`, `220`, `221`, `224`, `225`, `227`, `229`, `231`, `233`, `235`, `236`, `239`, `241`, `242`, `244`, `246`, `247`, `248`, `249`, `250`, `251`, `252`, `254`, `256`, `258`, `259`, `261`, `263`, `264`, `265`, `266`, `269`, `270`, `272`, `273`, `274`, `276`, `277`, `278`, `281`, `283`, `72`, `285`, `287`, `288`, `291`, `292`, `293`, `296`, `297`, `298`, `299`, `300`, `301`, `302`, `303`, `304`, `305`, `306`, `307`, `308`, `309`, `310`, `311`, `315`, `316`, `317`, `318`, `319`, `320`, `322`, `88`, `324`, `327`, `328`, `332`, `336`, `337`, `338`, `340`, `341`, `342`, `343`, `344`, `347`, `349`, `350`, `351`, `352`, `353`, `354`, `356`, `357`, `358`, `360`, `361`, `362`, `363`, `364`, `365`, `366`, `367`, `369`, `373`, `375`, `376`, `377`, `378`, `379`, `144`, `381`, `383`, `384`, `386`, `387`, `389`, `390`, `393`, `394`, `396`, `397`, `398`, `399`, `402`, `405`, `407`, `408`, `410`, `411`, `412`, `413`, `414`, `416`, `418`, `419`, `421`, `422`, `423`, `424`, `426`, `428`, `429`, `430`, `432`, `434`, `436`, `437`, `438`, `441`, `442`, `443`, `444`, `445`, `446`, `447`, `260`, `448`, `452`, `453`, `454`, `455`, `456`, `457`, `458`, `460`, `461`, `462`, `463`, `464`, `465`, `466`, `467`, `409`, `468`, `469`, `470`, `471`, `472`, `473`, `476`, `477`, `481`, `484`, `486`, `487`, `488`, `491`, `492`, `493`, `494`, `495`, `496`, `497`, `498`, `499`, `500`, `503`, `504`, `506`, `507`, `508`, `509`, `511`, `512`, `513`, `514`, `515`, `516`, `517`, `518`, `519`, `107`, `520`, `521`, `522`, `523`, `524`, `525`, `526`, `527`, `528`, `529`, `531`, `533`, `534`, `537`, `538`, `542`, `543`, `544`, `545`, `546`, `547`, `548`, `549`, `550`, `553`, `554`, `557`, `558`, `560`, `561`, `564`, `565`, `566`, `567`, `568`, `569`, `570`, `571`, `572`, `573`, `574`, `575`, `576`, `577`, `578`, `579`, `580`, `581`, `582`, `583`, `584`, `586`, `587`, `588`, `589`, `590`, `591`, `592`, `594`, `595`, `76`, `596`, `597`, `598`, `600`, `601`, `602`, `149`, `603`, `604`, `605`, `606`, `607`, `608`, `609`, `490`, `610`, `611`, `96`, `255`, `614`, `617`, `619`, `620`, `621`, `622`, `623`, `624`, `626`, `627`, `628`, `630`, `632`, `633`, `635`, `638`, `639`, `640`, `641`, `644`, `647`, `650`, `654`, `657`, `659`, `173`, `661`, `662`, `663`, `664`, `668`, `669`, `670`, `671`, `673`, `676`, `677`, `678`, `680`, `682`, `158`, `91`, `683`, `684`, `685`, `686`, `687`, `688`, `689`, `690`, `691`, `692`, `693`, `695`, `697`, `699`, `700`, `701`, `183`, `702`, `703`, `704`, `706`, `707`, `709`, `711`, `713`, `485`, `714`, `716`, `717`, `718`, `719`, `720`, `721`, `722`, `723`, `724`, `726`, `727`, `728`, `729`, `730`, `731`, `732`, `733`, `734`, `735`, `736`, `737`, `738`, `739`, `741`, `742`, `744`, `745`, `746`, `748`, `749`, `752`, `753`, `754`, `755`, 
`756`, `757`, `759`, `760`, `762`, `763`, `764`, `765`, `768`, `769`, `772`, `774`, `775`, `776`, `777`, `781`, `782`, `783`, `784`, `785`, `786`, `787`, `788`, `789`, `78`, `791`, `794`, `795`, `796`, `798`, `800`, `801`, `802`, `803`, `804`, `805`, `806`, `807`, `808`, `809`, `810`, `811`, `812`, `813`, `814`, `815`, `816`, `817`, `818`, `819`, `820`, `822`, `823`, `824`, `825`, `826`, `827`, `828`, `829`, `830`, `131`, `831`, `631`, `832`, `833`, `834`, `838`, `839`, `841`, `842`, `843`, `844`, `845`, `846`, `847`, `849`, `792`, `850`, `851`, `852`, `853`, `856`, `857`, `858`, `859`, `860`, `861`, `862`, `864`, `865`, `715`, `866`, `867`, `868`, `869`, `870`, `871`, `872`, `873`, `877`, `878`, `879`, `881`, `882`, `883`, `885`, `886`, `887`, `888`, `848`, `889`, `890`, `891`, `892`, `893`, `894`, `895`, `896`, `900`, `901`, `902`, `903`, `905`, `907`, `908`, `911`, `912`, `913`, `914`, `918`, `919`, `920`, `923`, `924`, `925`, `926`, `927`, `928`, `929`, `930`, `931`, `932`, `933`, `52`, `934`, `935`, `937`, `939`, `941`, `943`, `944`, `945`, `946`, `947`, `950`, `951`, `952`, `954`, `955`, `956`, `957`, `961`, `962`, `963`, `964`, `965`, `966`, `967`, `968`, `969`, `970`, `971`, `972`, `973`, `974`, `975`, `976`, `977`, `374`, `978`, `979`, `980`, `982`, `983`, `986`, `987`, `988`, `989`, `990`, `991`, `992`, `993`, `994`, `995`, `996`, `998`, `1000`, `1001`, `1002`, `1003`, `1004`, `1005`, `1006`, `1007`, `1008`, `1009`, `1012`, `1016`, `1020`, `1021`, `1023`, `1024`, `1025`, `1031`, `1032`, `1033`, `1034`, `1035`, `1036`, `1037`, `1038`, `1039`, `1041`, `1042`, `1043`, `1044`, `1045`, `1046`, `1047`, `1048`, `1049`, `1050`, `1051`, `1052`, `1053`, `1054`, `1055`, `1056`, `1057`, `1058`, `1059`, `1060`, `1061`, `1062`, `1063`, `1064`, `1065`, `642`, `1066`, `1067`, `1068`, `1069`, `1071`, `1072`, `1073`, `1074`, `1079`, `1080`, `1081`, `1082`, `1083`, `1085`, `1087`, `1088`, `1089`, `1090`, `559`, `1092`, `1093`, `1094`, `1096`, `1097`, `1098`, `1101`, `1102`, `1103`, `1104`, `1105`, `1106`, `1107`, `1109`, `1110`, `1112`, `1113`, `1114`, `1115`, `1116`, `1117`, `1118`, `1119`, `1122`, `1123`, `1124`, `1126`, `1127`, `1128`, `1129`, `1130`, `1132`, `1134`, `1137`, `1138`, `1140`, `1141`, `1142`, `1143`, `1144`, `1145`, `1146`, `1147`, `1150`, `1152`, `1161`, `1162`, `1163`, `1164`, `1165`, `1169`, `1170`, `1172`, `1173`, `1174`, `1175`, `1176`, `1177`, `1178`, `1181`, `1182`, `1183`, `1186`, `1187`, `1188`, `1190`, `1191`, `1192`, `1111`, `1193`, `1194`, `1195`, `1196`, `1198`, `1200`, `1201`, `1202`, `1203`, `1204`, `1208`, `1211`, `1213`, `1215`, `1216`, `1217`, `1218`, `1219`, `1221`, `1222`, `1223`, `1224`, `1225`, `1226`, `1227`, `1230`, `1231`, `1232`, `1234`, `1235`, `1249`, `1250`, `1252`, `1253`, `1254`, `1255`, `1257`, `1258`, `1260`, `1262`, `1263`, `1264`, `1265`, `1266`, `1267`, `1269`, `1272`, `7`, `1274`, `1276`, `1277`, `1278`, `1280`, `1282`, `1283`, `1284`, `1285`, `1286`, `1287`, `1289`, `1290`, `1291`, `1293`, `1295`, `1298`, `1302`, `1303`, `1311`, `1312`, `1313`, `1314`, `1316`, `1318`, `1317`, `1320`, `1322`, `1323`, `192`, `1324`, `1326`, `1327`, `234`, `1329`, `1330`, `1331`, `1332`, `747`, `1333`, `1334`, `1335`, `1336`, `1337`, `1339`, `1340`, `1341`, `1342`, `1344`, `1346`, `1350`, `1351`, `1352`, `1355`, `1357`, `1358`, `1360`, `1361`, `1362`, `1363`, `1364`, `1365`, `1367`, `1369`, `1370`, `1371`, `1372`, `1373`, `1374`, `1375`, `1376`, `1378`, `1380`, `1382`, `1384`, `1385`, `1386`, `1389`, `1390`, `1391`, `1392`, `1393`, `1394`, `1395`, `1396`, `1397`, 
`1399`, `1401`, `1402`, `1403`, `1404`, `1405`, `1406`, `1407`, `1408`, `1409`, `1410`, `1411`, `1412`, `1413`, `1414`, `1416`, `1418`, `1419`, `1420`, `1421`, `1422`, `188`, `1423`, `1424`, `1425`, `1426`, `1428`, `1429`, `1430`, `1431`, `1432`, `1433`, `1434`, `1435`, `148`, `1436`, `1439`, `1440`, `1441`, `1442`, `1443`, `1444`, `1445`, `1446`, `1447`, `1448`, `1449`, `1450`, `1451`, `1452`, `1453`, `1454`, `1455`, `1456`, `1457`, `1458`, `1459`, `1460`, `1461`, `1462`, `1463`, `1464`, `1466`, `1467`, `1468`, `1469`, `1470`, `1471`, `1472`, `1474`, `1475`, `1478`, `1481`, `1484`, `1486`, `1488`, `1489`, `1473`, `1490`, `1492`, `1493`, `1494`, `1495`, `1496`, `1497`, `1498`, `1499`, `1500`, `1501`, `1502`, `1503`, `1504`, `1505`, `44`, `1506`, `1511`, `1513`, `1515`, `1517`, `1518`, `1522`, `1523`, `1525`, `1528`, `1530`, `1531`, `1532`, `1534`, `1536`, `1537`, `1538`, `1539`, `1540`, `1541`, `1543`, `1546`, `1547`, `1548`, `1549`, `1551`, `1552`, `1555`, `1556`, `1557`, `1558`, `1559`, `1560`, `1561`, `1562`, `1563`, `1564`, `1565`, `1566`, `1567`, `1568`, `1569`, `1570`, `1571`, `1572`, `1573`, `1574`, `1575`, `1576`, `1577`, `1578`, `1579`, `1580`, `1581`, `1582`, `1583`, `1584`, `1585`, `1586`, `1588`, `1590`, `1591`, `1592`, `1594`, `1597`, `1598`, `1599`, `1601`, `168`, `1602`, `1603`, `1605`, `1607`, `1608`, `1611`, `1612`, `1613`, `1614`, `1615`, `1616`, `1617`, `1618`, `1619`, `1620`, `1621`, `1622`, `1623`, `1624`, `1625`, `1626`, `1627`, `1628`, `1629`, `1630`, `1632`, `1554`, `1633`, `1634`, `1635`, `1636`, `1637`, `1638`, `1639`, `1642`, `1647`, `1648`, `1649`, `1651`, `1653`, `1654`, `1655`, `1657`, `1658`, `1659`, `1660`, `1661`, `1662`, `1663`, `1664`, `1665`, `1666`, `1667`, `1668`, `1669`, `1670`, `1671`, `1672`, `1673`, `1674`, `1675`, `1676`, `1677`, `1678`, `1679`, `1680`, `1681`, `1682`, `1683`, `1684`, `1685`, `1686`, `1687`, `1688`, `1689`, `1690`, `1691`, `1692`, `1693`, `1694`, `1695`, `1696`, `1697`, `1698`, `1699`, `1700`, `1701`, `1702`, `1704`, `1705`, `1706`, `1707`, `1708`, `1709`, `1710`, `1711`, `1712`, `1713`, `1714`, `1715`, `1716`, `1717`, `1718`, `1719`, `1720`, `1721`, `1722`, `1723`, `1724`, `1725`, `1726`, `1727`, `1730`, `1732`, `1734`, `1735`, `1736`, `1737`, `1738`, `1740`, `1742`, `1743`, `1744`, `1745`, `1746`, `1747`, `1748`, `1749`, `1750`, `1751`, `1754`, `1755`, `1756`, `1758`, `1760`, `1761`, `1762`, `1763`, `1766`, `1767`, `1768`, `1769`, `1770`, `1772`, `1775`, `1778`, `1779`, `1784`, `1787`, `1788`, `1789`, `1790`, `1791`, `1793`, `1795`, `1796`, `1798`, `1800`, `1804`, `1805`, `1806`, `1807`, `1808`, `1809`, `1810`, `1811`, `1812`, `1813`, `1814`, `1815`, `1816`, `1818`, `1821`, `1822`, `1823`, `1824`, `1825`, `1826`, `1827`, `1828`, `1831`, `1832`, `1833`, `1834`, `1835`, `1836`, `1837`, `1838`, `1839`, `1840`, `1841`, `1842`, `1843`, `1844`, `1846`, `1847`, `1848`, `1849`, `1850`, `1851`, `1852`, `1853`, `1855`, `1857`, `1858`, `1859`, `1860`, `1861`, `1862`, `1863`, `1866`, `1867`, `1868`, `1869`, `1872`, `1873`, `1876`, `1877`, `1878`, `1879`, `1880`, `1881`, `1883`, `1884`, `1886`, `1887`, `1888`, `1893`, `1752`, `1896`, `1897`, `1899`, `1900`, `1901`, `1906`, `1907`, `1908`, `1910`, `1911`, `1912`, `1913`, `1916`, `1917`, `1918`, `1919`, `1920`, `1922`, `1923`, `1925`, `1926`, `1927`, `1928`, `1929`, `1930`, `1931`, `1932`, `1933`, `1120`, `1934`, `1935`, `1936`, `1937`, `1938`, `1939`, `1940`, `1941`, `1942`, `1943`, `1944`, `1945`, `1946`, `1947`, `1948`, `1949`, `1950`, `1951`, `1952`, `1953`, `1954`, `1955`, `1956`, `1957`, 
`1958`, `1959`, `1961`, `1962`, `1963`, `1964`, `1965`, `1966`, `1967`, `1968`, `1969`, `1970`, `1971`, `1972`, `1973`, `1974`, `1975`, `1976`, `1977`, `1978`, `1979`, `1982`, `1985`, `1987`, `1988`, `1989`, `1990`, `1992`, `1994`, `1995`, `1996`, `1997`, `1998`, `1999`, `2000`, `2003`, `2006`, `152`, `2007`, `2009`, `2010`, `2011`, `2012`, `2013`, `2014`, `2015`, `2016`, `2017`, `2019`, `2020`, `2021`, `2022`, `2023`, `2024`, `2025`, `2026`, `2029`, `2030`, `2031`, `2032`, `2033`, `2034`, `2035`, `2037`, `2038`, `2039`, `2040`, `2041`, `2042`, `2043`, `2044`, `2045`, `2047` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_F` | 99.15 |
| `TOKEN_P` | 99.18 |
| `TOKEN_R` | 99.11 |
| `TOKEN_ACC` | 99.83 |
| `SENTS_F` | 90.62 |
| `SENTS_P` | 90.99 |
| `SENTS_R` | 90.26 |
| `TAG_ACC` | 96.36 |
| `POS_ACC` | 96.94 |
| `MORPH_ACC` | 96.91 |
| `DEP_UAS` | 91.90 |
| `DEP_LAS` | 89.42 |
| `LEMMA_ACC` | 97.36 |
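The scores above come from running the full pipeline end to end. As a minimal, illustrative usage sketch (not part of the original card): it assumes the `en_udv25_englishewt_trf` package has been installed from this repository and that `spacy-experimental`, which provides the experimental tokenizer and lemmatizer components used by these UD benchmarking pipelines, is available.

```python
import spacy

# Hypothetical sketch: assumes the en_udv25_englishewt_trf package has been
# installed (e.g. with pip from this repository) together with spacy-experimental.
nlp = spacy.load("en_udv25_englishewt_trf")

doc = nlp("The quick brown fox jumps over the lazy dog.")
for token in doc:
    # POS tags, dependency labels and lemmas follow the UD_English-EWT scheme.
    print(token.text, token.pos_, token.tag_, token.dep_, token.lemma_)
```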
|
{"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
|
explosion/en_udv25_englishewt_trf
| null |
[
"spacy",
"token-classification",
"en",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#spacy #token-classification #en #license-cc-by-sa-4.0 #model-index #region-us
|
UD v2.5 benchmarking pipeline for UD\_English-EWT
### Label Scheme
View label scheme (1760 labels for 6 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (1760 labels for 6 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #token-classification #en #license-cc-by-sa-4.0 #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (1760 labels for 6 components)",
"### Accuracy"
] |
token-classification
|
spacy
|
UD v2.5 benchmarking pipeline for UD_Spanish-AnCora
| Feature | Description |
| --- | --- |
| **Name** | `es_udv25_spanishancora_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `GNU GPL 3.0` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (2060 labels for 6 components)</summary>
| Component | Labels |
| --- | --- |
| **`experimental_char_ner_tokenizer`** | `TOKEN` |
| **`senter`** | `I`, `S` |
| **`tagger`** | `ADJ`, `ADP`, `ADV`, `AUX`, `AUX_PRON`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `PUNCT_VERB_PRON_PUNCT`, `SCONJ`, `SYM`, `VERB`, `VERB_PRON`, `X` |
| **`morphologizer`** | `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `AdpType=Preppron\|POS=ADP`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `AdpType=Prep\|POS=ADP`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PROPN`, `Case=Acc,Dat\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=VERB\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=PRON\|PronType=Int,Rel`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `POS=SCONJ`, `POS=NOUN`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Number=Plur\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=PUNCT\|PunctType=Peri`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=PUNCT\|PunctType=Comm`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Ind`, `POS=ADV`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Tot`, `POS=PRON\|PronType=Ind`, `POS=ADV\|Polarity=Neg`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|PronType=Int,Rel`, `POS=PUNCT\|PunctType=Quot`, `POS=PUNCT`, `Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Brck`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Brck`, `NumForm=Digit\|NumType=Card\|POS=NUM`, `NumType=Card\|POS=NUM`, `POS=VERB\|VerbForm=Ger`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|POS=ADV`, `POS=AUX\|VerbForm=Inf`, `Number=Plur\|POS=DET\|PronType=Ind`, `Number=Plur\|POS=DET\|PronType=Dem`, `Degree=Cmp\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Case=Acc,Dat\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `AdvType=Tim\|POS=NOUN`, `AdpType=Prep\|POS=ADV`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Ind`, `NumType=Card\|Number=Plur\|POS=NUM`, `AdpType=Preppron\|POS=ADV`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, 
`NumForm=Digit\|POS=NOUN`, `Number=Sing\|POS=PRON\|PronType=Dem`, `AdpType=Preppron\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=DET\|PronType=Tot`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Degree=Cmp\|Number=Plur\|POS=ADJ`, `POS=AUX\|VerbForm=Ger`, `Gender=Fem\|POS=NOUN`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `AdvType=Tim\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Number=Sing\|POS=PRON\|PronType=Int,Rel`, `POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Acc,Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Case=Acc,Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Acc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `POS=PART`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin`, `NumForm=Digit\|POS=SYM`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin`, `NumForm=Digit\|NumType=Frac\|POS=NUM`, `Gender=Fem\|Number=Sing\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, 
`Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `POS=PUNCT\|PunctType=Colo`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Number=Sing\|POS=PRON\|PronType=Neg`, `POS=PUNCT\|PunctType=Semi`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `POS=INTJ`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `AdpType=Prep\|POS=ADJ`, `Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=PUNCT\|PunctType=Dash`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Gender=Masc\|POS=NOUN`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc,Dat\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Ger`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=NOUN\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Acc,Dat\|Number=Plur\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc,Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, 
`Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `POS=DET\|PronType=Ind`, `POS=DET\|PronType=Int,Rel`, `AdvType=Tim\|POS=ADV`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Qest`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Qest`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Degree=Abs\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `POS=PUNCT\|PunctSide=Ini\|PunctType=Excl`, `POS=PUNCT\|PunctSide=Fin\|PunctType=Excl`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin`, `Case=Acc,Dat\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Tot`, `Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Dat\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Degree=Abs\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `Case=Acc,Dat\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Fin`, `Definite=Ind\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `POS=SCONJ\|PronType=Int,Rel`, `Case=Acc\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Sing\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `NumType=Card\|Number=Sing\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Case=Acc,Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=DET\|PronType=Dem`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=SYM`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2,3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, 
`Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Ind`, `Case=Acc,Nom\|Number=Sing\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Int,Rel`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Acc,Dat\|Number=Plur\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin`, `NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Dem`, `Degree=Abs\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|Poss=Yes\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Acc,Nom\|Number=Plur\|POS=PRON\|Person=2\|Polite=Form\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Sing\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Com\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Pre\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PrepCase=Pre\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Number=Sing\|POS=NOUN\|VerbForm=Fin`, `Case=Acc,Dat\|Mood=Imp\|Number=Plur,Sing\|POS=VERB\|Person=1,2\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Case=Acc,Dat\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Tot`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `Number=Sing\|POS=VERB\|VerbForm=Fin`, `POS=VERB\|VerbForm=Fin`, `Degree=Abs\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Degree=Abs\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc,Dat\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Ger`, `Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `AdpType=Prep\|Degree=Cmp\|POS=ADV`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=DET\|PronType=Dem`, `Definite=Ind\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Art`, `Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Number=Plur\|POS=PRON\|PronType=Dem`, 
`Case=Acc,Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Gender=Masc\|Number=Sing\|POS=AUX\|VerbForm=Fin`, `Case=Acc,Dat\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Inf`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc,Dat\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1,3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Ind`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Plur\|POS=PRON\|Person=2\|PrepCase=Npr\|PronType=Prs`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc,Dat\|Number=Sing\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|PunctType=Quot\|VerbForm=Inf`, `Case=Com\|POS=PRON\|Person=3\|PrepCase=Pre\|PronType=Prs\|Reflex=Yes`, `NumForm=Digit\|NumType=Frac\|POS=SYM`, `Case=Dat\|Number=Sing\|POS=AUX\|Person=3\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Acc,Dat\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=1\|PrepCase=Npr\|PronType=Prs`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Ind`, `Case=Acc,Dat\|Number=Plur\|POS=VERB\|Person=2\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Number=Sing\|POS=PRON\|PronType=Tot`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Plur\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Ger`, `NumType=Card\|Number=Plur\|POS=DET\|PronType=Ind`, `POS=PRON\|PronType=Dem`, `POS=AUX\|VerbForm=Fin`, `Gender=Fem\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Int,Rel`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `AdvType=Tim\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Gender=Fem\|NumType=Card\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Ind`, `Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Sing\|POS=NOUN\|VerbForm=Part`, 
`Case=Acc\|Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Ind`, `Case=Acc,Dat\|Number=Sing\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|VerbForm=Ger`, `Case=Acc\|Gender=Masc\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=1,3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Com\|Number=Sing\|POS=PRON\|Person=1\|PrepCase=Pre\|PronType=Prs`, `POS=X`, `Case=Com\|POS=PRON\|Person=3\|PronType=Prs\|Reflex=Yes`, `POS=ADP`, `Case=Acc\|Gender=Masc\|Mood=Imp\|Number=Plur,Sing\|POS=VERB\|Person=1,3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Case=Acc,Dat\|Number=Sing\|POS=AUX\|Person=1\|PrepCase=Npr\|PronType=Prs\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=2\|Poss=Yes\|PronType=Ind`, `Case=Dat\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|PronType=Prs\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc,Dat\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2,3\|PrepCase=Npr\|PronType=Prs\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc,Dat\|Number=Plur\|POS=VERB\|Person=1\|PrepCase=Npr\|PronType=Prs\|Reflex=Yes\|VerbForm=Ger`, `Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=NOUN\|PunctType=Comm`, `Degree=Cmp\|POS=ADJ`, `Gender=Masc\|POS=ADJ`, `Degree=Abs\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=PRON\|PronType=Ind`, `POS=PRON\|PronType=Neg`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Ind`, `Number=Sing\|POS=DET\|PronType=Int,Rel` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `dep`, `det`, `expl:pass`, `fixed`, `flat`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `xcomp` |
| **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `5`, `6`, `8`, `10`, `14`, `16`, `18`, `20`, `22`, `24`, `25`, `27`, `29`, `33`, `36`, `38`, `40`, `42`, `45`, `48`, `50`, `54`, `57`, `59`, `60`, `62`, `64`, `66`, `68`, `71`, `73`, `75`, `77`, `81`, `83`, `85`, `87`, `88`, `91`, `93`, `95`, `97`, `99`, `100`, `102`, `104`, `106`, `108`, `110`, `112`, `114`, `115`, `117`, `119`, `120`, `122`, `49`, `125`, `126`, `128`, `130`, `134`, `138`, `140`, `143`, `145`, `146`, `148`, `150`, `151`, `153`, `156`, `158`, `160`, `162`, `164`, `167`, `170`, `171`, `173`, `177`, `178`, `179`, `181`, `182`, `184`, `186`, `187`, `188`, `191`, `193`, `195`, `198`, `201`, `202`, `13`, `204`, `206`, `208`, `210`, `214`, `216`, `218`, `221`, `223`, `224`, `226`, `228`, `230`, `232`, `234`, `235`, `237`, `239`, `241`, `242`, `244`, `248`, `250`, `254`, `257`, `258`, `260`, `261`, `262`, `264`, `265`, `266`, `267`, `269`, `271`, `273`, `277`, `278`, `280`, `284`, `286`, `288`, `289`, `290`, `291`, `293`, `296`, `298`, `300`, `302`, `304`, `306`, `308`, `309`, `313`, `315`, `319`, `321`, `322`, `323`, `324`, `325`, `327`, `328`, `330`, `332`, `336`, `338`, `339`, `341`, `342`, `343`, `345`, `347`, `348`, `350`, `351`, `352`, `354`, `355`, `357`, `359`, `361`, `363`, `365`, `367`, `370`, `372`, `375`, `377`, `379`, `382`, `385`, `389`, `391`, `393`, `395`, `397`, `398`, `400`, `402`, `404`, `408`, `410`, `413`, `415`, `416`, `418`, `419`, `420`, `422`, `424`, `427`, `429`, `431`, `433`, `434`, `435`, `436`, `438`, `440`, `441`, `443`, `445`, `447`, `448`, `450`, `451`, `452`, `454`, `456`, `457`, `458`, `460`, `462`, `463`, `465`, `466`, `468`, `470`, `473`, `477`, `478`, `480`, `481`, `483`, `485`, `489`, `491`, `492`, `494`, `496`, `498`, `500`, `501`, `504`, `505`, `506`, `507`, `509`, `511`, `514`, `516`, `519`, `521`, `522`, `524`, `526`, `528`, `532`, `535`, `538`, `541`, `543`, `545`, `546`, `548`, `550`, `554`, `555`, `557`, `559`, `560`, `561`, `562`, `564`, `565`, `567`, `569`, `571`, `572`, `573`, `575`, `576`, `579`, `582`, `584`, `586`, `589`, `590`, `591`, `592`, `595`, `596`, `597`, `599`, `600`, `601`, `603`, `606`, `607`, `608`, `610`, `615`, `617`, `618`, `622`, `624`, `625`, `626`, `627`, `629`, `631`, `633`, `585`, `634`, `636`, `637`, `638`, `639`, `643`, `644`, `646`, `647`, `648`, `650`, `651`, `653`, `654`, `657`, `658`, `660`, `662`, `663`, `667`, `669`, `671`, `673`, `674`, `678`, `680`, `683`, `684`, `685`, `686`, `688`, `689`, `692`, `693`, `695`, `696`, `697`, `699`, `701`, `702`, `704`, `707`, `709`, `711`, `712`, `714`, `715`, `717`, `718`, `719`, `720`, `722`, `725`, `728`, `730`, `732`, `733`, `734`, `735`, `736`, `738`, `739`, `740`, `741`, `743`, `745`, `748`, `750`, `752`, `753`, `755`, `756`, `759`, `760`, `763`, `764`, `765`, `766`, `768`, `770`, `772`, `773`, `774`, `775`, `776`, `778`, `779`, `780`, `783`, `785`, `786`, `788`, `791`, `793`, `795`, `797`, `798`, `800`, `803`, `804`, `805`, `807`, `808`, `810`, `813`, `816`, `819`, `821`, `823`, `824`, `825`, `826`, `829`, `832`, `833`, `836`, `129`, `837`, `838`, `839`, `843`, `845`, `846`, `848`, `849`, `851`, `852`, `853`, `855`, `856`, `857`, `858`, `862`, `864`, `866`, `868`, `869`, `873`, `875`, `877`, `878`, `879`, `882`, `884`, `886`, `888`, `890`, `891`, `892`, `893`, `895`, `897`, `898`, `900`, `902`, `904`, `906`, `907`, `909`, `910`, `912`, `914`, `915`, `916`, `918`, `920`, `921`, `923`, `924`, `926`, `928`, `930`, `931`, `933`, `935`, `936`, `937`, `939`, `940`, `943`, `944`, `945`, `946`, `947`, `949`, `951`, 
`952`, `953`, `955`, `956`, `957`, `0`, `959`, `961`, `963`, `965`, `966`, `968`, `969`, `970`, `972`, `973`, `975`, `976`, `978`, `979`, `980`, `982`, `983`, `984`, `986`, `987`, `989`, `990`, `993`, `995`, `996`, `997`, `1000`, `1003`, `1004`, `1006`, `1007`, `1008`, `1010`, `1012`, `1013`, `1014`, `1015`, `1017`, `1018`, `1021`, `1025`, `1027`, `1029`, `1030`, `1032`, `1034`, `1035`, `1036`, `1038`, `1039`, `1041`, `1043`, `1044`, `1045`, `1046`, `1047`, `1049`, `1050`, `1052`, `1053`, `1054`, `1055`, `1056`, `1057`, `1058`, `1060`, `1061`, `1063`, `1065`, `1067`, `1069`, `1070`, `1072`, `1075`, `1076`, `1077`, `1078`, `1079`, `1080`, `1081`, `1082`, `1085`, `1086`, `1088`, `1090`, `1091`, `1092`, `1093`, `1094`, `1096`, `1097`, `1100`, `1101`, `1103`, `1104`, `1106`, `1108`, `1109`, `1111`, `1112`, `1114`, `1115`, `1116`, `598`, `26`, `1117`, `1118`, `1119`, `1121`, `1122`, `1123`, `1124`, `1125`, `1127`, `1128`, `1130`, `1132`, `1133`, `1135`, `1137`, `1139`, `1140`, `1141`, `1142`, `1144`, `1147`, `1151`, `1152`, `1153`, `1155`, `1157`, `1160`, `1162`, `1163`, `1165`, `1166`, `1170`, `1171`, `1173`, `1175`, `1177`, `1179`, `1180`, `1183`, `1185`, `1186`, `1188`, `1189`, `1191`, `1192`, `1193`, `1196`, `65`, `1197`, `1198`, `1202`, `1204`, `1206`, `1208`, `1209`, `1210`, `1213`, `1214`, `1215`, `1218`, `1220`, `1221`, `1223`, `1225`, `1226`, `1228`, `1230`, `1232`, `1233`, `1235`, `1236`, `1237`, `1238`, `1241`, `1242`, `1243`, `1244`, `1248`, `1253`, `1254`, `1256`, `1259`, `1260`, `1262`, `1264`, `1265`, `1266`, `1267`, `1269`, `1272`, `1273`, `1274`, `1275`, `1277`, `1280`, `1283`, `1286`, `1289`, `1291`, `1293`, `1294`, `1295`, `1296`, `1297`, `1298`, `1300`, `1301`, `1303`, `1307`, `1309`, `1311`, `1312`, `1316`, `1317`, `1318`, `1319`, `1321`, `1322`, `1323`, `1324`, `1325`, `1326`, `1327`, `1329`, `1330`, `1331`, `1332`, `1333`, `1334`, `1335`, `1336`, `1338`, `1339`, `1341`, `1342`, `1344`, `1346`, `1347`, `1348`, `1349`, `1350`, `1351`, `1352`, `1354`, `1356`, `1357`, `1359`, `1360`, `1361`, `1363`, `1364`, `1365`, `1369`, `1370`, `1371`, `1372`, `1373`, `1377`, `1378`, `1379`, `1381`, `1382`, `1383`, `1385`, `1386`, `1388`, `1389`, `1390`, `1391`, `1392`, `1394`, `1395`, `1396`, `1398`, `1399`, `1400`, `1402`, `1403`, `1406`, `1408`, `1409`, `1410`, `1413`, `1415`, `1416`, `1417`, `1418`, `1419`, `1421`, `1422`, `1423`, `1425`, `1427`, `1428`, `1431`, `1432`, `1433`, `1434`, `1435`, `1437`, `1438`, `1441`, `1442`, `1443`, `1445`, `1446`, `1447`, `1448`, `1449`, `1450`, `1452`, `1453`, `1454`, `1455`, `1457`, `1458`, `1460`, `1462`, `1463`, `1464`, `1467`, `1468`, `1469`, `1470`, `1472`, `1477`, `1479`, `1481`, `1484`, `1486`, `1488`, `1489`, `1492`, `1494`, `1495`, `1496`, `1498`, `1500`, `1501`, `1503`, `1504`, `1505`, `1507`, `1509`, `1510`, `1512`, `1513`, `1514`, `1516`, `1518`, `1519`, `1520`, `1523`, `1525`, `1526`, `1527`, `1529`, `1531`, `1532`, `1533`, `1535`, `1536`, `1537`, `1538`, `1540`, `1541`, `1542`, `1544`, `1546`, `1547`, `1548`, `124`, `1549`, `1551`, `1553`, `1555`, `1557`, `1560`, `1561`, `1563`, `1564`, `1565`, `1569`, `1571`, `1572`, `1573`, `1574`, `1575`, `1577`, `1579`, `1581`, `1582`, `1583`, `1585`, `1588`, `1589`, `1590`, `1591`, `1592`, `1595`, `1596`, `1597`, `1598`, `1599`, `1600`, `1601`, `1603`, `1605`, `1609`, `1611`, `1613`, `1614`, `1618`, `1619`, `1622`, `1624`, `1626`, `1628`, `1630`, `1631`, `1634`, `1636`, `1637`, `1638`, `1640`, `1642`, `1643`, `1644`, `1645`, `1646`, `1648`, `1649`, `1650`, `1651`, `1652`, `1653`, `1654`, `1656`, 
`1658`, `1660`, `1662`, `1665`, `1667`, `1668`, `1669`, `1671`, `1672`, `1673`, `1674`, `1675`, `1676`, `1678`, `1680`, `1681`, `1682`, `1683`, `1684`, `1685`, `1686`, `1688`, `1689`, `1690`, `1691`, `1692`, `1694`, `1696`, `1697`, `1698`, `1700`, `1701`, `1702`, `1703`, `1704`, `1706`, `1708`, `1709`, `1710`, `1711`, `1712`, `1713`, `1714`, `1715`, `1717`, `1718`, `1719`, `1721`, `1722`, `1724`, `1725`, `1726`, `1728`, `1729`, `1730`, `1731`, `1732`, `1733`, `1735`, `1737`, `1739`, `1741`, `1743`, `1744`, `1745`, `1747`, `1749`, `1750`, `1752`, `1753`, `1756`, `1758`, `1760`, `1761`, `1762`, `1764`, `1765`, `1767`, `1769`, `1772`, `1773`, `1774`, `1775`, `1777`, `1778`, `1781`, `1783`, `1784`, `1786`, `1790`, `1791`, `1792`, `1793`, `1795`, `1796`, `1798`, `1799`, `1801`, `1802`, `1804`, `1805`, `1806`, `1807`, `1809`, `1810`, `1811`, `1814`, `1816`, `1817`, `1818`, `1819`, `1820`, `1822`, `1824`, `1826`, `1827`, `1829`, `1831`, `1832`, `1834`, `1836`, `1838`, `1840`, `1842`, `1843`, `1844`, `1845`, `1847`, `1848`, `1850`, `1851`, `1853`, `1854`, `1856`, `1859`, `1860`, `1861`, `1863`, `1865`, `1866`, `1868`, `1869`, `1870`, `1871`, `1873`, `1875`, `1877`, `1879`, `1881`, `1883`, `1884`, `1887`, `1889`, `1890`, `1892`, `1893`, `1894`, `1895`, `1897`, `1899`, `1902`, `1903`, `1904`, `1906`, `1907`, `1909`, `1910`, `1912`, `1913`, `1914`, `1916`, `1917`, `1918`, `1920`, `1921`, `1923`, `1926`, `1927`, `1928`, `1929`, `1930`, `1931`, `1932`, `1933`, `1934`, `1935`, `1937`, `1938`, `1939`, `1942`, `1943`, `1944`, `1945`, `1946`, `1947`, `1948`, `1949`, `1950`, `1952`, `1953`, `1955`, `1956`, `1957`, `1958`, `1959`, `1961`, `1964`, `1967`, `1969`, `1971`, `1972`, `1974`, `1975`, `1977`, `1978`, `1979`, `1980`, `1981`, `1922`, `1982`, `1983`, `1984`, `1986`, `1988`, `1989`, `1990`, `1992`, `1993`, `1994`, `1995`, `1998`, `1999`, `2000`, `2003`, `2006`, `2007`, `2008`, `2009`, `2011`, `2013`, `2015`, `2016`, `2017`, `2018`, `2020`, `2023`, `2027`, `2028`, `2030`, `2031`, `2032`, `2033`, `2034`, `2035`, `2036`, `2039`, `2042`, `2043`, `2045`, `2047`, `2050`, `2052`, `2053`, `2054`, `2055`, `2056`, `2057`, `2061`, `2062`, `2063`, `2064`, `2065`, `2066`, `2067`, `2068`, `2069`, `2070`, `2073`, `2074`, `2075`, `2076`, `2078`, `2079`, `2080`, `2081`, `2082`, `2083`, `2084`, `2089`, `2090`, `2092`, `2093`, `2094`, `2095`, `2096`, `2098`, `2099`, `2100`, `2101`, `2103`, `2104`, `2106`, `2108`, `2109`, `2110`, `2113`, `2116`, `2119`, `2121`, `2124`, `2125`, `2126`, `2127`, `2128`, `2129`, `2132`, `2133`, `2134`, `2136`, `2137`, `2138`, `2139`, `2140`, `2141`, `2142`, `2143`, `2145`, `2146`, `2147`, `2148`, `2149`, `2150`, `2151`, `2152`, `2153`, `2154`, `2155`, `2157`, `2159`, `2160`, `2161`, `2162`, `2163`, `2164`, `2166`, `2167`, `2169`, `2172`, `2173`, `2174`, `2175`, `2178`, `2180`, `2181`, `2184`, `2186`, `2189`, `2190`, `2191`, `2192`, `2194`, `2195`, `2197`, `2199`, `2200`, `2202`, `2203`, `2204`, `2205`, `2210`, `2211`, `2212`, `2214`, `2215`, `2216`, `2217`, `2218`, `2219`, `2220`, `2221`, `2222`, `2223`, `2225`, `2227`, `2228`, `2229`, `2230`, `2231`, `2232`, `2233`, `2234`, `2235`, `2238`, `2239`, `2240`, `2241`, `2242`, `2243`, `2244`, `2245`, `2246`, `2250`, `2252`, `2254`, `2255`, `2256`, `2257`, `2258`, `2259`, `2260`, `2262`, `2264`, `2265`, `2266`, `2267`, `2268`, `2269`, `2270`, `2271`, `2272`, `2273`, `2274`, `2275`, `2276`, `2277`, `2278`, `2279`, `2280`, `2281`, `2283`, `2284`, `2285`, `2286`, `2287`, `2288`, `2289`, `2290`, `2291`, `2293`, `2294`, `2295`, `2296`, `2297`, `2298`, 
`2299`, `2301`, `2303`, `2304`, `2305`, `2306`, `2307`, `2308`, `2309`, `2310`, `2312`, `2313`, `2314`, `2315`, `2317`, `2319`, `2320`, `2321`, `2322`, `2324`, `2325`, `2326`, `2328`, `2329`, `2330`, `2331`, `2332`, `2333`, `2334`, `2335`, `2336`, `2337`, `2338`, `2339`, `2341`, `2342`, `2346`, `2347`, `2352`, `2353`, `2356`, `2358`, `2359`, `2360`, `2361`, `2362`, `2364`, `2365`, `2366`, `2368`, `2371`, `2372`, `2374`, `2375`, `2376`, `2377`, `2378`, `2379`, `2380`, `2382`, `2383`, `2384`, `2386`, `2387`, `2388`, `2389`, `2391`, `2394`, `2395`, `2396`, `2398`, `2399`, `2400`, `2401`, `2403`, `2404`, `2406`, `2409`, `2410`, `2411`, `2415`, `2418`, `2419`, `2420`, `2421`, `2422`, `2423`, `2424`, `2425`, `2427`, `430`, `2428`, `2429`, `2430`, `2431`, `2432`, `2433`, `2434`, `2435`, `2436`, `2437`, `2438`, `2439`, `2440`, `2441`, `2442`, `2444`, `2445`, `2446`, `2447`, `2448`, `2449`, `2450`, `2451`, `2452`, `2453`, `2454`, `2456`, `2457`, `2458`, `2460`, `2461`, `2462`, `2463`, `2464`, `2465`, `2466`, `2467`, `2468`, `2469`, `2472`, `2474`, `2475`, `2476`, `2479`, `2480`, `2481`, `2482`, `2483`, `2484`, `2486`, `2487`, `2488`, `2490`, `2491`, `2493`, `2494`, `2495`, `2496`, `2497`, `2499`, `2500`, `2501`, `2502`, `2503`, `2504`, `2505`, `2506`, `2507`, `2508`, `2509`, `2510`, `2511`, `2512`, `2514`, `2515`, `2516`, `2517`, `2518`, `2519`, `2520`, `2521`, `2522`, `2523`, `2524`, `2525`, `2527`, `2528`, `2529`, `2530`, `2531`, `2532`, `2533`, `2535`, `2536`, `2537`, `2538`, `2539`, `2540`, `2541`, `2542`, `2543`, `2544`, `2545`, `2546`, `2547`, `2548`, `2550`, `2552`, `2554`, `2555`, `2556`, `2557`, `2558`, `2559`, `2560`, `2561`, `2562`, `2563`, `2566`, `2567`, `2568`, `2569`, `2570`, `2572`, `2574`, `2576`, `2577`, `2578`, `2580`, `2582`, `2583`, `2584`, `2585` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_F` | 99.98 |
| `TOKEN_P` | 99.98 |
| `TOKEN_R` | 99.99 |
| `TOKEN_ACC` | 100.00 |
| `SENTS_F` | 97.99 |
| `SENTS_P` | 97.43 |
| `SENTS_R` | 98.55 |
| `TAG_ACC` | 98.92 |
| `POS_ACC` | 99.03 |
| `MORPH_ACC` | 97.96 |
| `DEP_UAS` | 93.99 |
| `DEP_LAS` | 91.95 |
| `LEMMA_ACC` | 98.93 |
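As with the other UD v2.5 pipelines, the model can be loaded by its package name once installed. The sketch below (illustrative, assuming the `es_udv25_spanishancora_trf` package and `spacy-experimental` are installed) prints the UD morphological features assigned by the `morphologizer`, drawn from the AnCora feature inventory in the Label Scheme above.

```python
import spacy

# Hypothetical sketch: assumes es_udv25_spanishancora_trf and spacy-experimental
# are installed in the current environment.
nlp = spacy.load("es_udv25_spanishancora_trf")

doc = nlp("Los niños juegan en el parque todos los días.")
for token in doc:
    # token.morph holds UD features such as Gender, Number and Mood,
    # taken from the morphologizer label set shown in the Label Scheme.
    print(f"{token.text}\t{token.pos_}\t{token.morph}\t{token.lemma_}")
```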
|
{"language": ["es"], "license": "gpl-3.0", "tags": ["spacy", "token-classification"]}
|
explosion/es_udv25_spanishancora_trf
| null |
[
"spacy",
"token-classification",
"es",
"license:gpl-3.0",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"es"
] |
TAGS
#spacy #token-classification #es #license-gpl-3.0 #model-index #region-us
|
UD v2.5 benchmarking pipeline for UD\_Spanish-AnCora
### Label Scheme
View label scheme (2060 labels for 6 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (2060 labels for 6 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #token-classification #es #license-gpl-3.0 #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (2060 labels for 6 components)",
"### Accuracy"
] |
token-classification
|
spacy
|
UD v2.5 benchmarking pipeline for UD_Finnish-TDT
| Feature | Description |
| --- | --- |
| **Name** | `fi_udv25_finnishtdt_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
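A short usage sketch (illustrative only, assuming the `fi_udv25_finnishtdt_trf` package and `spacy-experimental` are installed) showing how the components from the feature table above appear at runtime:

```python
import spacy

# Hypothetical sketch: assumes fi_udv25_finnishtdt_trf and spacy-experimental
# are installed; the printed component names should match the table above.
nlp = spacy.load("fi_udv25_finnishtdt_trf")
print(nlp.pipe_names)

doc = nlp("Helsinki on Suomen pääkaupunki.")
print([(t.text, t.lemma_, t.pos_) for t in doc])
```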
### Label Scheme
<details>
<summary>View label scheme (12912 labels for 6 components)</summary>
| Component | Labels |
| --- | --- |
| **`experimental_char_ner_tokenizer`** | `TOKEN` |
| **`senter`** | `I`, `S` |
| **`tagger`** | `A`, `Adj`, `Adp`, `Adv`, `Adv_V`, `C`, `C_V`, `Foreign`, `Interj`, `N`, `Num`, `Pron`, `Punct`, `Symb`, `V`, `V_Pron` |
| **`morphologizer`** | `Case=Nom\|Number=Sing\|POS=NOUN`, `NumType=Ord\|POS=ADJ`, `Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=U\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `POS=ADV`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Case=Par\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Par\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Case=Nom\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `POS=SCONJ`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Case=Abl\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Kin\|POS=ADV`, `Case=Gen\|Number=Plur\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ela\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `POS=ADJ`, `Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ins\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PROPN`, `Case=Par\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ill\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=All\|Derivation=U\|Number=Sing\|POS=NOUN`, `AdpType=Post\|POS=ADP`, `Case=Nom\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Abl\|Number=Sing\|POS=NOUN`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Ine\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Par\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Gen\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Par\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Tra\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=ADJ`, 
`Case=Par\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `InfForm=1\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `InfForm=1\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Derivation=Sti\|POS=ADV`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Plur\|POS=NOUN`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ine\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ill\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=All\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=All\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Plur\|POS=NOUN`, `Case=Ela\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kaan\|Number=Sing\|POS=NOUN`, `Foreign=Yes\|POS=X`, `Clitic=Ka\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Connegative=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Tra\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Clitic=Kin\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Derivation=Vs\|Number=Sing\|POS=NOUN`, 
`Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Derivation=Ja\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ine\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON`, `Case=Nom\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Clitic=Ko\|Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Ade\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Derivation=Ttain\|POS=ADV`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|InfForm=2\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Case=All\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|Number=Plur\|POS=NOUN`, `Case=Com\|POS=NOUN\|Person[psor]=3`, `Case=Com\|POS=PRON\|Person[psor]=3\|PronType=Ind`, `Number[psor]=Sing\|POS=ADV\|Person[psor]=1`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Int`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Ade\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Connegative=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Ill\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=SCONJ\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `AdpType=Post\|POS=ADP\|Person[psor]=3`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ill\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Abbr=Yes\|Case=Ine\|Number=Sing\|POS=NOUN`, `Case=Ine\|InfForm=2\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, 
`Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Plur\|POS=NOUN`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Par\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=PROPN\|Style=Coll`, `Abbr=Yes\|Case=Par\|Number=Sing\|POS=NOUN`, `Case=Ess\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ess\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PROPN`, `Case=Par\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Ine\|InfForm=2\|Number=Sing\|POS=VERB\|Person[psor]=3\|VerbForm=Inf\|Voice=Act`, `NumType=Card\|POS=NUM`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ill\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|InfForm=2\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Ela\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Ade\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ade\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ess\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Dem`, `Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Clitic=Ko\|Number=Sing\|POS=SCONJ\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Dem`, `Connegative=Yes\|Mood=Cnd\|POS=AUX\|VerbForm=Fin`, `Case=Ela\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Par\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=Llinen,Vs\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Gen\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `POS=SYM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rel`, `Clitic=Ka\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Clitic=Kaan\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ess\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=SCONJ\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Kaan\|POS=ADV`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, 
`Case=Gen\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ade\|Derivation=U\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|POS=ADV`, `Case=Ine\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=All\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `POS=ADV\|Typo=Yes`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ela\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ela\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=All\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Ela\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=All\|Clitic=Kin\|Number=Sing\|POS=PROPN`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Derivation=Vs\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `AdpType=Prep\|POS=ADP`, `Case=Par\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Vs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Style=Coll`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `POS=INTJ`, `Case=Nom\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ess\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Case=Ine\|InfForm=3\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Ela\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ela\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Par\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, 
`Case=Ela\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Ela\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Abl\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Abl\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Tra\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Abe\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Tra\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Typo=Yes`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Nom\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=All\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=All\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Abbr=Yes\|Case=Nom\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Clitic=Kin\|Mood=Cnd\|POS=AUX\|VerbForm=Fin\|Voice=Pass`, `Clitic=Han\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Derivation=U\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Ind`, `Abbr=Yes\|Case=Gen\|Number=Sing\|POS=PROPN`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Derivation=Sti\|POS=ADV\|Typo=Yes`, `Case=All\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Tar\|Number=Sing\|POS=NOUN`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Derivation=Minen\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Clitic=Kin\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Par\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Ade\|Number=Sing\|POS=PROPN`, 
`Case=Nom\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ess\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Clitic=Ka\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=U\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=NOUN\|Style=Coll`, `Case=Ill\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Nom\|Clitic=Kaan\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Par\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=VERB\|Person=0\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Case=Ela\|Number=Sing\|POS=PROPN`, `Case=Nom\|Clitic=Pa\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=ADJ\|Typo=Yes`, `POS=ADV\|Style=Coll`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Tra\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Par\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=NOUN\|Style=Coll`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem\|Typo=Yes`, `Case=Ine\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Par\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Dem`, `Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|NumType=Card\|Number=Plur\|POS=NUM\|Typo=Yes`, `Clitic=Ko\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Case=Ill\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ela\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Abbr=Yes\|Case=Abl\|Number=Sing\|POS=PROPN`, `Case=Abl\|Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, 
`Case=Nom\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Case=Ade\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Case=Ade\|Degree=Cmp\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Par\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|NumType=Card\|Number=Plur\|POS=NUM\|Typo=Yes`, `Case=Ess\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Han\|POS=ADV`, `Case=Par\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Nom\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Derivation=Llinen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=NOUN`, `Case=Abl\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=All\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Par\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ine\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Par\|Derivation=Ton,Vs\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Ela\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=All\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Gen\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Par\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Tra\|InfForm=1\|Number=Sing\|POS=VERB\|Person[psor]=3\|VerbForm=Inf\|Voice=Act`, `Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=All\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|Case=Ade\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Nom\|Clitic=Kin\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|InfForm=2\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Nom\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=SCONJ\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, 
`Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=All\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Derivation=U\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ess\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Gen\|Derivation=Minen\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Clitic=Ko\|Number=Sing\|POS=VERB\|Person=0\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|InfForm=3\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Clitic=Han\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Ess\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ela\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Sing\|POS=PRON\|Reflex=Yes`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Clitic=Kaan\|Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin`, `Degree=Sup\|Derivation=Sti\|POS=ADV`, `Case=Ine\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Tra\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Par\|Derivation=Inen,Vs\|Number=Plur\|POS=NOUN`, `Case=Abl\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=All\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Ess\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Par\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|PronType=Rcp`, `Case=Par\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Ill\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=All\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Tra\|Clitic=Kin\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ill\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Ess\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Tra\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Clitic=Kin\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ade\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Par\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Ess\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Clitic=Kin\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Com\|Derivation=U\|POS=NOUN\|Person[psor]=3`, 
`Case=Par\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=All\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Clitic=Kaan\|Connegative=Yes\|Mood=Cnd\|POS=VERB\|VerbForm=Fin\|Voice=Pass`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Nom\|Derivation=Inen,Vs\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ine\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Abe\|InfForm=3\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Plur\|POS=PRON\|PronType=Rel\|Typo=Yes`, `Degree=Cmp\|Derivation=Sti\|POS=ADV`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Nom\|Derivation=Ja,Tar\|Number=Sing\|POS=NOUN`, `Case=Par\|Degree=Sup\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Ess\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Clitic=Kin\|InfForm=1\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ela\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=Ine\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Clitic=Ko\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=PRON\|Person[psor]=3\|PronType=Rcp`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ill\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Clitic=Ko\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ela\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Abe\|Clitic=Kaan\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Par\|Clitic=Kaan\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Connegative=Yes\|Mood=Cnd\|POS=VERB\|VerbForm=Fin`, `Clitic=Ko\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Abl\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Abl\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Abl\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Gen\|Derivation=U\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, 
`Case=Ela\|Degree=Sup\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Pres\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Ela\|Derivation=U\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Ess\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Clitic=Ko\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Tra\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Clitic=Kaan\|Connegative=Yes\|Mood=Cnd\|POS=VERB\|VerbForm=Fin`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ill\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Gen\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Ine\|Clitic=Han\|Number=Sing\|POS=NOUN`, `Mood=Pot\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|InfForm=3\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=All\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ine\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=All\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Clitic=Ko\|POS=CCONJ`, `Number=Sing\|POS=VERB\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `POS=NUM`, `Case=Par\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Degree=Cmp\|POS=ADV`, `Case=Ine\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|Case=Ela\|Number=Sing\|POS=NOUN`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=All\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Par\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ill\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ade\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Abbr=Yes\|POS=NOUN`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Person[psor]=3\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ela\|Derivation=Inen,Vs\|Number=Plur\|POS=NOUN`, `Case=Ine\|InfForm=2\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Par\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ade\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Ade\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Clitic=Han\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Plur\|POS=PRON\|Person[psor]=3\|PronType=Rcp`, `Case=Ela\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Par\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Tra\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Abl\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Abe\|Number=Sing\|POS=NOUN`, `Case=Par\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, 
`Case=Ess\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Ess\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Derivation=Inen\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Ela\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=All\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ela\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=All\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=All\|Number=Sing\|POS=PROPN`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ade\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Ind`, `Mood=Pot\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Abbr=Yes\|Case=Gen\|Number=Sing\|POS=NOUN`, `Case=Gen\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ine\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Ade\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|POS=INTJ`, `Case=Ade\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ela\|Derivation=Llinen,Vs\|Number=Plur\|POS=NOUN`, `Case=Ade\|Number=Plur\|POS=PRON\|PronType=Dem`, `Clitic=Pa\|POS=ADV`, `Case=Nom\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Mood=Pot\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Ela\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Ess\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=0\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Abbr=Yes\|Case=Gen\|Number=Plur\|POS=NOUN`, `Case=Ill\|Number=Plur\|POS=PROPN`, `Case=Gen\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=All\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Clitic=Pa,S\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko\|Connegative=Yes\|Mood=Cnd\|POS=VERB\|VerbForm=Fin\|Voice=Pass`, 
`Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Par\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=NOUN`, `Case=Ill\|POS=PRON\|PronType=Rel`, `Clitic=Kin\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Abbr=Yes\|Case=All\|Number=Sing\|POS=NOUN`, `Case=Abl\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Clitic=Ko\|POS=SCONJ`, `Case=Ess\|Number=Sing\|POS=PRON\|PronType=Dem\|Style=Coll`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Case=Acc\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Case=Nom\|Number=Plur\|POS=PRON\|Reflex=Yes`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Dem\|Style=Coll`, `Case=Gen\|Degree=Cmp\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|Style=Coll\|VerbForm=Part\|Voice=Act`, `Case=All\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ\|Style=Coll`, `Case=Ade\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ\|Style=Coll`, `Abbr=Yes\|Case=Par\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Connegative=Yes\|Mood=Cnd\|POS=AUX\|Style=Coll\|VerbForm=Fin`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=AUX\|PartForm=Past\|Style=Coll\|VerbForm=Part\|Voice=Act`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Ill\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Ill\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM\|Style=Coll`, `Case=Ill\|Clitic=Kaan\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Number[psor]=Plur\|POS=ADV\|Person[psor]=1`, `Abbr=Yes\|Case=Ine\|Number=Sing\|POS=PROPN`, `Case=Ela\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Dem\|Style=Coll`, `Case=All\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Abl\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Par\|Degree=Cmp\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Par\|Derivation=Llinen\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Clitic=Kin\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `AdpType=Post\|Number[psor]=Plur\|POS=ADP\|Person[psor]=1\|Style=Coll`, 
`Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Dem\|Style=Coll`, `Case=All\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Abl\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Kaan\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Par\|Number=Plur\|POS=PROPN`, `Case=Par\|Degree=Cmp\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Clitic=Kaan\|Connegative=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Case=Gen\|Number=Plur\|POS=NOUN\|Style=Coll`, `Clitic=Ka\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Ill\|InfForm=3\|Number=Sing\|POS=VERB\|Style=Coll\|VerbForm=Inf\|Voice=Act`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Abbr=Yes\|Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|Case=Ela\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Com\|Number=Plur\|POS=PROPN\|Person[psor]=3`, `Case=Ess\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Par\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Abl\|Number=Sing\|POS=PRON\|Reflex=Yes`, `Clitic=Kin\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Clitic=Kaan\|NumType=Card\|Number=Sing\|POS=NUM`, `InfForm=1\|Number=Sing\|POS=VERB\|Style=Coll\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Clitic=Kaan\|Number=Plur\|POS=NOUN`, `Abbr=Yes\|Case=Nom\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Par\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Cmp\|Number=Sing\|POS=ADJ\|Style=Coll`, `POS=INTJ\|Style=Coll`, `Case=Ill\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Case=Par\|Derivation=U\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Mood=Ind\|POS=VERB\|Tense=Past\|Typo=Yes\|VerbForm=Fin\|Voice=Pass`, `Case=Par\|NumType=Ord\|Number=Sing\|POS=ADJ\|Style=Coll`, `Number=Plur\|POS=SCONJ\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Kaan\|Connegative=Yes\|Mood=Ind\|POS=AUX\|Style=Coll\|Tense=Pres\|VerbForm=Fin`, `Case=Tra\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Com\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Ess\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Tra\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Derivation=Lainen\|Number=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Abl\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ill\|Derivation=U\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Tra\|Number=Plur\|POS=NOUN`, `Case=Ine\|InfForm=2\|Number=Sing\|Number[psor]=Plur\|POS=AUX\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Par\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Style=Coll`, `Case=Ade\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, 
`Case=Ade\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ill\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ill\|NumType=Ord\|Number=Sing\|POS=ADJ`, `AdpType=Post\|POS=ADP\|Typo=Yes`, `Case=Ill\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Par\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ine\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Tra\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ess\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ine\|InfForm=2\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=2\|VerbForm=Inf\|Voice=Act`, `Case=Tra\|Degree=Cmp\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Case=Ade\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=2\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Par\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ine\|InfForm=3\|Number=Sing\|POS=VERB\|Typo=Yes\|VerbForm=Inf\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Par\|Clitic=Kin\|Derivation=U\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|InfForm=2\|POS=VERB\|VerbForm=Inf\|Voice=Pass`, `Case=Ill\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Tra\|InfForm=1\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=2\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Derivation=Minen\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Tra\|InfForm=1\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Case=Ela\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Gen\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ill\|InfForm=3\|Number=Sing\|POS=VERB\|Typo=Yes\|VerbForm=Inf\|Voice=Act`, `Clitic=Pa,S\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Clitic=Pa,S\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=NOUN\|Style=Coll`, `Connegative=Yes\|Mood=Cnd\|POS=VERB\|Style=Coll\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Style=Coll`, `Clitic=Kin\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Ela\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=NOUN\|Person[psor]=3\|Style=Coll`, `Case=Abl\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Derivation=U\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ela\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Case=Ela\|Number=Plur\|POS=PRON\|Person[psor]=3\|PronType=Rcp`, `Case=Ela\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Ade\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|Style=Coll`, 
`Connegative=Yes\|Mood=Pot\|POS=AUX\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|PronType=Prs\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko,S\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Clitic=Ko\|POS=ADV`, `Case=Par\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Par\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=Abl\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Clitic=Ko\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=NUM`, `POS=NOUN\|Typo=Yes`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ade\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=2\|VerbForm=Part\|Voice=Act`, `Case=Ine\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ine\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Clitic=Kin\|POS=SCONJ`, `Case=Nom\|Clitic=Kin\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Derivation=Minen\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Neg\|VerbForm=Part\|Voice=Act`, `Clitic=Kaan\|Derivation=Sti\|POS=ADV`, `Case=Ill\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Ade\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Case=Par\|Clitic=Kin\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Gen\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Ess\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Ess\|Number=Sing\|POS=PROPN`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Number=Plur\|POS=AUX\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Number=Sing\|POS=CCONJ\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Case=Par\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Case=Gen\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Number=Plur\|POS=VERB\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Past\|Person[psor]=1\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|NumType=Ord\|Number=Sing\|POS=ADJ`, 
`Clitic=Ka\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Cmp\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Gen\|NumType=Ord\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Int`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Par\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=2\|VerbForm=Part\|Voice=Act`, `Case=Par\|Number=Plur\|POS=ADJ`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Tra\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Clitic=Kin\|InfForm=1\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Clitic=Kin\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Ade\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Degree=Cmp\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Ill\|Degree=Sup\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Clitic=Ko\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Han\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ade\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Cnd\|POS=AUX\|VerbForm=Fin\|Voice=Pass`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Ine\|Degree=Pos\|Number=Plur\|Number[psor]=Plur\|POS=ADJ\|Person[psor]=1`, `Case=Par\|Degree=Sup\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ine\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Derivation=Llinen,Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|PartForm=Past\|Person[psor]=1\|VerbForm=Part\|Voice=Pass`, `Case=Tra\|InfForm=1\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Degree=Pos\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Pass`, `Case=Ade\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=All\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, 
`Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=All\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|Person[psor]=3\|PronType=Ind`, `Case=Abl\|Number=Plur\|POS=PRON\|PronType=Ind`, `Clitic=Kaan\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Derivation=Llinen,Vs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Derivation=Minen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ade\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Derivation=U\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Degree=Sup\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Tra\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Tra\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=VERB\|PartForm=Neg\|VerbForm=Part\|Voice=Act`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Ade\|Number=Plur\|POS=PROPN`, `Case=Ess\|Number=Sing\|POS=PRON\|PronType=Rel`, `Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Mood=Cnd\|POS=VERB\|VerbForm=Fin\|Voice=Pass`, `POS=NOUN`, `Case=All\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ela\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Abe\|Number=Plur\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Clitic=Kin\|Derivation=Sti\|POS=ADV`, `Case=Ine\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Pass`, `Case=Ess\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Ela\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ade\|Number=Sing\|POS=PRON`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Clitic=Ka\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|PronType=Ind`, `Case=Gen\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Ill\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ill\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ins\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Ela\|Derivation=Inen,Vs\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=All\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Par\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Derivation=Ton,Vs\|Number=Plur\|POS=NOUN`, `Case=Ela\|Derivation=Ton,Vs\|Number=Plur\|POS=NOUN`, `Case=Par\|Derivation=Ton,Vs\|Number=Plur\|POS=NOUN`, `Case=Ade\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Pass`, `Case=Par\|Degree=Cmp\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, 
`Case=Gen\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|PartForm=Agt\|Person[psor]=2\|VerbForm=Part\|Voice=Act`, `Clitic=Kin\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Ill\|Degree=Cmp\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Number=Sing\|POS=ADV\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Mood=Ind\|POS=AUX\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Tra\|InfForm=1\|Number=Sing\|Number[psor]=Plur\|POS=AUX\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Clitic=Kin\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Ess\|Degree=Pos\|Number=Plur\|POS=AUX\|PartForm=Pres\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ine\|InfForm=2\|Number=Sing\|POS=VERB\|Typo=Yes\|VerbForm=Inf\|Voice=Act`, `Case=Abl\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ela\|Derivation=Minen\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Ade\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Case=Ine\|Derivation=Minen\|Number=Plur\|POS=NOUN`, `Case=Ade\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Par\|Number=Plur\|POS=PRON\|Person[psor]=3\|PronType=Rcp`, `Case=Ine\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Tra\|Degree=Pos\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|PartForm=Pres\|Person[psor]=2\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Derivation=Minen\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Degree=Pos\|POS=ADJ`, `Case=Ela\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Ine\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ela\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PROPN`, `Case=Ill\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Ill\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Ela\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=All\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Ind`, 
`Case=Tra\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ess\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Mood=Pot\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `POS=ADV\|Person[psor]=3`, `Case=Par\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Mood=Pot\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Com\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=All\|Derivation=Ja\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Derivation=Vs\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ess\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Ess\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Clitic=Han\|Degree=Pos\|Number=Sing\|POS=NOUN`, `Case=Ade\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=All\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=All\|Derivation=Ja\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Sup\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Nom\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Abbr=Yes\|Case=Ill\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Nom\|Clitic=Ko\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Abl\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Clitic=Pa\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Par\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Abbr=Yes\|Case=Ela\|Number=Plur\|POS=NOUN`, `Case=All\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Ine\|Clitic=Kaan\|Number=Sing\|POS=NUM\|PronType=Ind`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Par\|Clitic=Kaan\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Ade\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Clitic=Ko\|Mood=Cnd\|Number=Plur\|POS=AUX\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Clitic=Kaan\|Number=Sing\|POS=NOUN`, `Case=Ine\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Degree=Sup\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ill\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Clitic=Ko\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Par\|Clitic=Kaan\|Number=Sing\|POS=NOUN`, `Case=Ine\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Tra\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Tra\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ade\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2\|Style=Coll`, `Case=Ill\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, 
`Case=Ade\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Derivation=Ton,Vs\|Number=Sing\|POS=NOUN`, `Case=Nom\|Clitic=Han\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Plur\|POS=PROPN`, `Case=Gen\|Clitic=Kin\|Number=Sing\|POS=PROPN`, `Case=Par\|Derivation=Ja\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Gen\|Derivation=Ton,Vs\|Number=Sing\|POS=NOUN`, `AdpType=Post\|Clitic=Kin\|POS=ADP`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Nom\|Number=Plur\|POS=PRON`, `Case=Par\|Derivation=U\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Abl\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PROPN`, `Case=All\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Case=Abe\|Clitic=Kaan\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=All\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Derivation=Llinen,Vs\|Number=Plur\|POS=NOUN`, `Case=All\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ela\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ade\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Par\|Clitic=Kin\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Ill\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=All\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ine\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Ade\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=All\|Derivation=Lainen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Gen\|Derivation=Lainen\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Abe\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Degree=Sup\|POS=ADV`, `Case=Tra\|Degree=Cmp\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Ill\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ill\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Degree=Pos\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=NUM`, `Number=Plur\|POS=ADV\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Connegative=Yes\|Mood=Cnd\|POS=AUX\|Typo=Yes\|VerbForm=Fin`, `Number=Sing\|POS=ADV\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Typo=Yes`, `Case=Ill\|Derivation=U\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Clitic=Kaan\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Clitic=Kaan\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Abl\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Dem\|Typo=Yes`, `Clitic=Han\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, 
`Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Number[psor]=Sing\|POS=ADV\|Person[psor]=2`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Par\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Ind\|Typo=Yes`, `Case=Nom\|Clitic=Kaan\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Clitic=Kaan\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Abl\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Tra\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ill\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Gen\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person[psor]=1\|PronType=Rcp`, `Case=Ine\|InfForm=2\|Number=Sing\|Number[psor]=Plur\|POS=VERB\|Person[psor]=1\|VerbForm=Inf\|Voice=Act`, `Case=Ela\|Derivation=U\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Gen\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ine\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Gen\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Ine\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Clitic=Kaan\|Number=Plur\|POS=PRON\|PronType=Ind`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=2`, `Case=All\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `AdpType=Post\|Number[psor]=Plur\|POS=ADP\|Person[psor]=2`, `Number=Sing\|POS=CCONJ\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `POS=CCONJ\|Style=Coll`, `Case=Tra\|InfForm=1\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|Person[psor]=1\|Typo=Yes\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Clitic=Kaan\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Number=Sing\|POS=SCONJ\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Tra\|Degree=Cmp\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Gen\|Clitic=Kaan\|Number=Sing\|POS=NOUN`, `Clitic=Kaan\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Degree=Cmp\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ade\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=PRON`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=ADJ\|Typo=Yes`, 
`Case=Ade\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=All\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Kaan\|InfForm=1\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=All\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Ade\|Derivation=U\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ela\|Clitic=Kin\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Kaan\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Par\|Clitic=Kin\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Tra\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1\|Style=Arch`, `Case=Ill\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Number=Plur\|POS=CCONJ\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|POS=PRON`, `Clitic=Han\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Style=Coll\|VerbForm=Part\|Voice=Pass`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Sing\|POS=PRON\|Person[psor]=3\|PronType=Rcp`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Connegative=Yes\|Mood=Ind\|POS=AUX\|Style=Coll\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Pa\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Clitic=Pa\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Derivation=Ja\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Ess\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Nom\|Derivation=U\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ine\|InfForm=2\|Number=Sing\|POS=AUX\|Person[psor]=3\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Tra\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Abe\|InfForm=3\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Clitic=Ko\|InfForm=1\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Nom\|Degree=Sup\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Case=Par\|Clitic=Kaan\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Nom\|Derivation=Inen,Vs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, 
`Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Tra\|InfForm=1\|Number=Sing\|POS=AUX\|Person[psor]=3\|VerbForm=Inf\|Voice=Act`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Int\|Style=Coll`, `Case=Par\|Number=Sing\|POS=NUM`, `Case=Ess\|NumType=Ord\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Gen\|Derivation=Minen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Clitic=Kin\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Past\|Person[psor]=1\|Style=Coll\|VerbForm=Part\|Voice=Pass`, `Case=Tra\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Ine\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Acc\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ess\|Clitic=Kin\|NumType=Card\|Number=Sing\|POS=NUM`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Clitic=Ka\|Number=Sing\|POS=VERB\|Person=0\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Ind\|Style=Coll`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Degree=Sup\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Ine\|Clitic=Kaan\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=NUM`, `Case=All\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Ade\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Ade\|Degree=Sup\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Pa\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Ine\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Typo=Yes`, `Case=Com\|Degree=Pos\|POS=ADJ`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Clitic=Ko\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Clitic=Kin\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ine\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Sup\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Pot\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Derivation=Vs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Clitic=Han\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Number=Plur\|POS=VERB\|Person=0\|Polarity=Neg\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Ill\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Mood=Pot\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `InfForm=1\|Number=Sing\|POS=VERB\|Style=Arch\|VerbForm=Inf\|Voice=Act`, `Case=All\|Degree=Pos\|Derivation=Minen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Case=Nom\|Degree=Sup\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Clitic=S\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Gen\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, 
`Case=Ela\|Number=Sing\|POS=PRON\|PronType=Int\|Typo=Yes`, `Clitic=Han,Pa\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Abe\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Nom\|Derivation=Minen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Clitic=Han\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=All\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `POS=INTJ\|Typo=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|POS=PROPN\|Person[psor]=3`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Tra\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Gen\|Derivation=U\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Han,Ko\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Clitic=Ko\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ela\|Derivation=Ton,Vs\|Number=Sing\|POS=NOUN`, `Clitic=Ko\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Cmp\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Connegative=Yes\|Mood=Imp\|POS=VERB\|VerbForm=Fin`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `POS=SCONJ\|Style=Coll`, `Clitic=Han\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ins\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Clitic=Kaan\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Nom\|Derivation=Minen\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Case=Ess\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `AdpType=Post\|Number[psor]=Sing\|POS=ADP\|Person[psor]=1`, `Case=Nom\|Clitic=Kaan\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Clitic=Pa\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Clitic=Ko\|Number=Sing\|POS=PRON\|PronType=Int`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Style=Coll\|VerbForm=Part\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Number=Sing\|POS=VERB\|Person=2\|Polarity=Neg\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `AdpType=Post\|Number[psor]=Sing\|POS=ADP\|Person[psor]=2\|Style=Coll`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Style=Coll`, `Connegative=Yes\|Mood=Ind\|POS=VERB\|Style=Coll\|Tense=Pres\|VerbForm=Fin`, 
`Clitic=Ko\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Clitic=Kaan\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Case=All\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=All\|Number=Sing\|POS=PRON\|Reflex=Yes`, `Case=Tra\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Clitic=Kaan\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|Clitic=S\|Number=Sing\|POS=PRON\|PronType=Int`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=0\|VerbForm=Fin\|Voice=Act`, `Clitic=Han,Pa\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Clitic=Kaan\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Ine\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Ade\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Case=Acc\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|Style=Coll\|VerbForm=Part\|Voice=Act`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|POS=VERB\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Par\|Number=Sing\|POS=PROPN\|Style=Coll`, `Clitic=Kin\|POS=ADV\|Style=Coll`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Derivation=Sti\|POS=ADV\|Style=Coll`, `Case=Abl\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Clitic=Ko\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2\|Style=Coll`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs\|Style=Coll`, `Case=Nom\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Style=Coll`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=ADJ\|Style=Coll`, `AdpType=Post\|Number[psor]=Sing\|POS=ADP\|Person[psor]=1\|Style=Coll`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Style=Coll`, `Case=All\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Ess\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|Style=Coll\|VerbForm=Part\|Voice=Act`, 
`Case=Ins\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Tra\|Number=Sing\|POS=PROPN`, `Clitic=Pa\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=AUX\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Tra\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Pass`, `Case=All\|Clitic=Han\|Number=Sing\|POS=NOUN`, `Clitic=Han\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Derivation=U\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Ela\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Dem`, `Clitic=Pa\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Clitic=Ko\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Gen\|Number=Sing\|POS=PRON`, `Case=Gen\|Number=Plur\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=Par\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Clitic=Kaan\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Clitic=Kin\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Clitic=Kaan\|Connegative=Yes\|Mood=Cnd\|POS=AUX\|VerbForm=Fin`, `Clitic=S\|POS=ADV`, `Case=Gen\|Clitic=Ko\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=All\|Clitic=Kaan\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Par\|Clitic=Kin\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Clitic=Kin\|Number=Sing\|POS=PROPN`, `Case=Ins\|InfForm=2\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Ine\|Clitic=Kaan\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Number=Plur\|POS=ADV\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Ade\|Clitic=S\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Ade\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Style=Coll`, `Case=Ess\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Par\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Clitic=Han,Ko\|POS=ADV`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `AdpType=Post\|Number[psor]=Sing\|POS=ADP\|Person[psor]=2`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ela\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ela\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|PartForm=Pres\|Person[psor]=2\|VerbForm=Part\|Voice=Act`, `Clitic=Ka\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Derivation=Minen\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Style=Coll`, 
`Case=Nom\|Clitic=Kin\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Clitic=Ko\|Derivation=Sti\|POS=ADV`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=2`, `Case=Nom\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Ess\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Ela\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Ela\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=2\|Reflex=Yes`, `Case=Acc\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ess\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Ill\|Derivation=Ton,Vs\|Number=Sing\|POS=NOUN`, `Case=Par\|Clitic=Kin\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=2\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Clitic=Kin\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Clitic=Kaan\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ela\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Case=Ade\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=1`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Clitic=Ko,S\|POS=ADV\|Style=Coll`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Int\|Style=Coll`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=NOUN\|Style=Coll`, `Clitic=Ko\|POS=ADV\|Style=Coll`, `Case=Nom\|Derivation=U\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Dem\|Style=Coll`, `Connegative=Yes\|Mood=Cnd\|POS=VERB\|Style=Coll\|VerbForm=Fin`, `Case=Par\|Number=Sing\|POS=PRON\|PronType=Ind\|Style=Coll`, `Case=Abl\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2\|Style=Coll`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Plur\|POS=PRON\|PronType=Dem\|Style=Coll`, `Case=Ine\|Degree=Pos\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Ade\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Dem\|Style=Coll`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Dem\|Style=Coll`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Dem\|Style=Coll`, `Case=Nom\|Clitic=Kaan\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Case=Par\|Degree=Cmp\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Gen\|Number=Sing\|POS=PRON\|PronType=Ind\|Typo=Yes`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs\|Style=Coll`, `Case=Nom\|Clitic=Kin\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ade\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Abl\|NumType=Card\|Number=Sing\|POS=NUM`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Ill\|Clitic=Kaan\|Number=Sing\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Ade\|Number=Plur\|POS=PRON\|PronType=Rcp\|Typo=Yes`, `Case=Ade\|Number=Plur\|POS=PRON\|PronType=Rcp`, 
`Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Par\|Derivation=U\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `POS=ADV\|Person[psor]=3\|Typo=Yes`, `Clitic=Pa\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Par\|Clitic=Kaan\|NumType=Card\|Number=Sing\|POS=NUM`, `Clitic=Pa,S\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Dem\|Style=Coll`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Clitic=Han\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Clitic=Han,Ko\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Connegative=Yes\|Mood=Pot\|POS=VERB\|VerbForm=Fin`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Style=Coll\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person[psor]=3\|PronType=Rcp`, `Number[psor]=Plur\|POS=ADV\|Person[psor]=2`, `Mood=Pot\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Abl\|Derivation=U\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ\|Person[psor]=3`, `Clitic=Pa\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Dem\|Style=Coll`, `Case=Ine\|Number=Plur\|POS=PROPN`, `Case=All\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2\|Style=Coll`, `Case=All\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs\|Style=Coll`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=All\|Degree=Pos\|Number=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Nom\|Clitic=Kaan\|Number=Sing\|POS=PROPN\|Style=Coll`, `Case=All\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Ade\|Derivation=Vs\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `POS=VERB\|Person[psor]=3\|VerbForm=Inf\|Voice=Act`, `Clitic=Kaan\|POS=ADV\|Style=Coll`, `Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Clitic=Pa\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ess\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Nom\|Clitic=Kin\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Clitic=Ko\|Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Ine\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Ko\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ins\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Nom\|Clitic=Han,Ko\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Par\|Number=Plur\|POS=PRON\|Person[psor]=3`, `Case=Ade\|Number=Sing\|POS=PROPN\|Style=Coll`, `Case=Ess\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Ela\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Ela\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Style=Coll`, 
`Case=Ill\|Clitic=Kaan\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Clitic=Kin\|Mood=Cnd\|POS=VERB\|VerbForm=Fin\|Voice=Pass`, `Case=Ela\|Clitic=Ko\|Number=Sing\|POS=PROPN`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=2\|VerbForm=Part\|Voice=Act`, `Case=Ade\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Ine\|Derivation=Minen\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Ill\|Number=Sing\|POS=NUM`, `Case=Nom\|Number=Sing\|POS=PRON\|PronType=Ind\|Style=Coll`, `Clitic=S\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=AUX\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=3\|VerbForm=Part\|Voice=Pass`, `Case=Abl\|Derivation=Inen,Vs\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Clitic=Kaan\|Number=Sing\|POS=PRON\|PronType=Dem\|Style=Coll`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Ko,S\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs\|Style=Coll`, `Case=Ill\|Number=Sing\|POS=PRON`, `Case=Ill\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Clitic=Ko,S\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Ela\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Clitic=Kin\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Derivation=Minen\|Number=Plur\|POS=NOUN`, `AdpType=Post\|POS=ADP\|Style=Coll`, `Case=Gen\|Number=Plur\|POS=NUM`, `Case=Ela\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Par\|NumType=Card\|Number=Sing\|POS=NUM\|Style=Coll`, `Case=Gen\|Derivation=Ton\|Number=Plur\|POS=NOUN`, `Case=Nom\|Clitic=Ko\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|PronType=Prs\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Number=Sing\|POS=PRON\|Reflex=Yes`, `Case=All\|Clitic=Pa\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Abl\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ine\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Clitic=Kaan\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Plur\|POS=PRON\|Person[psor]=3`, `Case=Gen\|Clitic=Kin\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Gen\|Clitic=Kaan\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Int`, `Clitic=Han\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Pa\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Abl\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=All\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=2`, `Case=Gen\|Clitic=Han\|Number=Sing\|POS=NOUN\|Person[psor]=3`, 
`Clitic=Ko\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Ine\|Derivation=Inen,Vs\|Number=Plur\|POS=NOUN`, `Case=Ine\|Clitic=Kin\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Nom\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Abl\|Number=Sing\|POS=PRON\|PronType=Prs\|Style=Coll`, `Case=Par\|Clitic=Kin\|Number=Sing\|POS=NOUN\|Style=Coll`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Number=Sing\|POS=SCONJ\|Person=1\|Polarity=Neg\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Abl\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Par\|Clitic=Ko\|Number=Sing\|POS=NOUN`, `Mood=Pot\|Number=Sing\|POS=AUX\|Person=0\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Number=Plur\|POS=PRON\|PronType=Dem\|Style=Coll`, `Case=Ade\|Derivation=U\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Abl\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Clitic=Kin\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Clitic=Han\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Abl\|Number=Sing\|POS=PRON\|PronType=Dem\|Style=Coll`, `Case=Tra\|InfForm=1\|Number=Sing\|Number[psor]=Sing\|POS=AUX\|Person[psor]=2\|VerbForm=Inf\|Voice=Act`, `Clitic=Han\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|Case=Nom\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Clitic=Kaan\|Connegative=Yes\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Ela\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Clitic=Kaan\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Clitic=Kin\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=PRON\|Person[psor]=3`, `Clitic=Pa,S\|Mood=Ind\|POS=VERB\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Clitic=Pa\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|Style=Coll\|VerbForm=Part\|Voice=Pass`, `Case=Ess\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Int`, `Clitic=Han\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes\|Style=Coll`, `Case=Par\|Clitic=Kin\|Degree=Cmp\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ine\|Derivation=Lainen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Ine\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ\|Style=Coll`, `Case=Ine\|Derivation=Vs\|Number=Plur\|POS=NOUN\|Style=Coll`, `Case=Par\|Derivation=Ja\|Number=Sing\|POS=NOUN\|Style=Coll`, `Case=Par\|Derivation=Lainen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Ine\|Number=Sing\|POS=PRON\|PronType=Ind\|Style=Coll`, `Case=Ine\|Derivation=Inen\|Number=Plur\|POS=ADJ\|Style=Coll`, 
`Case=Gen\|Degree=Sup\|Number=Sing\|Number[psor]=Sing\|POS=ADJ\|Person[psor]=1`, `Case=Nom\|Clitic=Kin\|Number=Plur\|POS=NOUN\|Style=Coll`, `Case=Ine\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ela\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Agt\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Sing\|Number[psor]=Sing\|POS=PROPN\|Person[psor]=2`, `Case=Par\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=2\|Reflex=Yes`, `Case=Ade\|Number=Sing\|POS=PRON\|PronType=Rel\|Style=Coll`, `Clitic=Pa,S\|POS=ADV`, `Case=Ess\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Nom\|Clitic=Kin\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Clitic=Kaan\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|Style=Coll\|VerbForm=Part\|Voice=Act`, `Case=All\|Number=Plur\|POS=PROPN`, `Clitic=Ko,S\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Ind\|Style=Coll`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Clitic=Kaan\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ela\|Clitic=Han\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Clitic=Ko,S\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Clitic=Pa,S\|Number=Sing\|POS=VERB\|Person=0\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Clitic=Han,Ko\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Case=Tra\|Number=Sing\|POS=PRON`, `Case=Nom\|Clitic=Kaan\|POS=PRON\|PronType=Ind`, `Clitic=Pa\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Ill\|Number=Sing\|POS=PRON\|PronType=Int\|Style=Coll`, `Connegative=Yes\|Mood=Pot\|POS=AUX\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Clitic=Kaan\|Connegative=Yes\|Mood=Ind\|POS=VERB\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Number=Plur\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=All\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ine\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Gen\|Clitic=Kin\|Degree=Pos\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Mood=Imp\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|VerbForm=Fin\|Voice=Act`, `Connegative=Yes\|Mood=Imp\|POS=VERB\|VerbForm=Fin\|Voice=Pass`, `Clitic=Han\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=All\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Case=All\|Number=Plur\|POS=PRON`, `Case=Nom\|Clitic=Kaan\|Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Pot\|Number=Sing\|POS=AUX\|Person=0\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Degree=Pos\|Number=Plur\|Number[psor]=Plur\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Derivation=Ja\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Ess\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Abbr=Yes\|Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ade\|Degree=Sup\|Number=Plur\|POS=ADJ\|Person[psor]=3`, 
`Clitic=Han\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Com\|Derivation=Ja\|POS=NOUN\|Person[psor]=3`, `Clitic=Pa,S\|Mood=Ind\|POS=AUX\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Case=Abl\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ade\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Ela\|Number=Sing\|POS=PRON\|Person[psor]=3`, `Case=Ade\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Par\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ade\|Clitic=Kin\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Derivation=Lainen\|Number=Sing\|POS=PROPN`, `Case=Nom\|Number=Sing\|POS=ADJ`, `Case=Ins\|Number=Plur\|POS=PROPN`, `Case=Gen\|Clitic=Kin\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=Gen\|POS=PRON\|Person[psor]=3\|PronType=Rcp`, `Case=Acc\|Number=Plur\|POS=PRON\|PronType=Int`, `Case=Ess\|Degree=Pos\|Number=Plur\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Ine\|Clitic=Kin\|Number=Sing\|POS=PROPN`, `Case=Nom\|Degree=Cmp\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Derivation=Inen\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Ela\|Clitic=Kaan\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Ade\|Degree=Pos\|Number=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Derivation=Inen\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Clitic=Kin\|Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Ill\|NumType=Card\|Number=Sing\|POS=NUM\|Style=Coll`, `Case=Ine\|Derivation=U\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ade\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Clitic=Kin\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Case=Tra\|Clitic=Kin\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=Gen\|Number=Sing\|POS=PROPN\|Style=Coll`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=ADJ\|Style=Coll`, `Case=Ine\|Number=Plur\|POS=NOUN\|Style=Coll`, `Clitic=Kaan\|InfForm=1\|Number=Sing\|POS=VERB\|Style=Coll\|VerbForm=Inf\|Voice=Act`, `Case=Ill\|Number=Plur\|POS=NOUN\|Style=Coll`, `Clitic=Han,Ko\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Clitic=Ko,S\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Style=Coll\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=AUX\|PartForm=Pres\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|PronType=Prs\|Style=Coll\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Abl\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ill\|Derivation=Llinen,Vs\|Number=Plur\|POS=NOUN`, `Case=All\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ill\|Derivation=Ton,Vs\|Number=Plur\|POS=NOUN`, `Case=Ela\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|Person[psor]=3\|VerbForm=Part\|Voice=Pass`, `Case=Ela\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ill\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=All\|Number=Plur\|POS=PRON\|Person[psor]=3\|PronType=Rcp`, 
`Case=Ela\|Derivation=Minen\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Mood=Ind\|POS=VERB\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Pass`, `Case=All\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=ADJ\|Typo=Yes`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|POS=ADJ`, `Case=Ess\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Ess\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Case=Par\|Number=Plur\|POS=PRON\|PronType=Rcp`, `Case=Tra\|NumType=Card\|Number=Sing\|POS=NUM`, `Clitic=Pa\|Mood=Ind\|POS=VERB\|Tense=Past\|VerbForm=Fin\|Voice=Pass`, `Case=Ine\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Gen\|Derivation=Inen,Vs\|Number=Plur\|POS=NOUN`, `Case=Gen\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=Gen\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Gen\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ela\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ela\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Tra\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|Case=Tra\|Number=Sing\|POS=NOUN`, `Case=Ins\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ess\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Tra\|Degree=Cmp\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Derivation=Minen\|Number=Plur\|POS=NOUN`, `Abbr=Yes\|Case=All\|Number=Sing\|POS=PROPN`, `Case=All\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Nom\|Derivation=U\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Gen\|NumType=Ord\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=ADJ\|Typo=Yes`, `POS=X`, `Case=Par\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Style=Coll`, `Case=Gen\|Derivation=Inen\|Number=Plur\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=ADJ\|Style=Coll`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Case=Ade\|Clitic=Kin\|Derivation=U\|Number=Sing\|POS=NOUN`, `Case=Nom\|Clitic=Kin\|Derivation=Ja\|Number=Plur\|POS=NOUN`, `Case=All\|Clitic=Kin\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=All\|Clitic=Kin\|Number=Plur\|POS=NOUN`, `Case=Ill\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Gen\|Clitic=Kin\|Degree=Cmp\|Number=Sing\|POS=ADJ`, `Case=Abl\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Pres\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Clitic=Han\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ela\|Clitic=Kaan\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Ade\|Derivation=U\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1\|Typo=Yes`, `Case=Nom\|Clitic=Kin\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Ill\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Par\|NumType=Ord\|POS=ADJ`, `Case=Par\|Degree=Sup\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=All\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=All\|Clitic=Kin\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Tra\|Derivation=U\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Derivation=Ja\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, 
`Case=Ade\|Clitic=Kaan\|Number=Sing\|POS=NOUN`, `Case=Ess\|Clitic=Kin\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Abl\|Degree=Pos\|Derivation=Ja\|Number=Plur\|POS=ADJ`, `Case=Tra\|Number=Sing\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Abbr=Yes\|Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ill\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ela\|Number=Plur\|Number[psor]=Plur\|POS=PRON\|Person[psor]=2`, `Case=Nom\|Degree=Pos\|Number=Plur\|Number[psor]=Plur\|POS=AUX\|PartForm=Pres\|Person[psor]=2\|VerbForm=Part\|Voice=Act`, `Case=Tra\|Number=Sing\|POS=NUM`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Neg\|VerbForm=Part\|Voice=Act`, `Clitic=Kin\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin\|Voice=Act`, `Case=Ela\|Clitic=Kin\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Case=Tra\|Derivation=Vs\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Cmp\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Past\|Person[psor]=1\|VerbForm=Part\|Voice=Act`, `Case=Ins\|InfForm=3\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Clitic=Kin\|Number=Plur\|Number[psor]=Sing\|POS=PRON\|Person[psor]=1\|Reflex=Yes`, `Case=Nom\|Derivation=Ton\|Number=Plur\|POS=NOUN`, `Case=Ade\|Clitic=Han\|Number=Sing\|POS=NOUN`, `Case=Tra\|Degree=Sup\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Degree=Cmp\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Clitic=Kin\|InfForm=2\|Number=Sing\|POS=VERB\|VerbForm=Inf\|Voice=Act`, `Case=Ela\|Number=Plur\|POS=PRON\|Person[psor]=3`, `Case=Gen\|Degree=Sup\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Par\|Clitic=S\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Abl\|Clitic=Pa\|Number=Sing\|POS=PRON\|PronType=Int`, `Clitic=Ko\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Abl\|Number=Sing\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=2`, `Clitic=Pa,S\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Number=Sing\|Number[psor]=Sing\|POS=VERB\|PartForm=Pres\|Person[psor]=2\|VerbForm=Part\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Number=Sing\|POS=PROPN`, `Clitic=Kin\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Pres\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Ade\|Number=Plur\|Number[psor]=Plur\|POS=NOUN\|Person[psor]=1`, `Clitic=Han\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Ess\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN`, `Clitic=Han\|Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Clitic=Han\|Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Clitic=Pa,S\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=0\|Tense=Pres\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Case=Nom\|Clitic=Kin\|Derivation=Inen\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Ade\|Number=Plur\|POS=PRON`, `Case=All\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Case=Par\|Derivation=Lainen,Vs\|Number=Sing\|POS=NOUN`, `AdpType=Post\|Clitic=Kaan\|POS=ADP`, `AdpType=Prep\|POS=ADP\|Person[psor]=3`, 
`Case=Ine\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Ine\|Derivation=Ton,Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Par\|Derivation=Lainen\|Number=Plur\|POS=ADJ\|Style=Coll`, `AdpType=Prep\|Clitic=Kaan\|POS=ADP`, `Case=Nom\|Clitic=Han\|Number=Sing\|POS=PROPN`, `Clitic=Pa\|InfForm=1\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Gen\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Ade\|Clitic=Kaan\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Ine\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Ind`, `Case=Ill\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Case=Ade\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Derivation=U\|Number=Plur\|POS=NOUN`, `Case=Gen\|Clitic=Kaan\|Number=Sing\|POS=PROPN`, `Case=Com\|Clitic=Kin\|Derivation=U\|POS=NOUN\|Person[psor]=3`, `Case=Ade\|Derivation=U\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Gen\|Number=Sing\|Number[psor]=Sing\|POS=PRON\|Person[psor]=2\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ill\|Clitic=Kin\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ade\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=All\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Case=Gen\|Derivation=U\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=All\|Degree=Sup\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Ill\|Number=Plur\|POS=PRON\|Person[psor]=3`, `Case=Ess\|Degree=Pos\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Gen\|Degree=Sup\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Nom\|Degree=Sup\|Derivation=Ton\|Number=Sing\|POS=ADJ`, `Case=Par\|Derivation=Vs\|Number=Plur\|Number[psor]=Sing\|POS=NOUN\|Person[psor]=1`, `Case=Par\|Degree=Cmp\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ess\|Number=Sing\|POS=PRON\|PronType=Ind\|Typo=Yes`, `Case=Ela\|Degree=Cmp\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ill\|Derivation=Lainen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ill\|Degree=Cmp\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Clitic=Han\|Number=Sing\|POS=NOUN`, `Case=Gen\|Degree=Sup\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Case=Ela\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Case=Ess\|Degree=Sup\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Clitic=Kin\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Neg\|VerbForm=Part\|Voice=Act`, `Case=Ela\|Degree=Pos\|Derivation=Llinen\|Number=Plur\|POS=ADJ\|Typo=Yes`, `Case=Ela\|InfForm=3\|Number=Sing\|POS=AUX\|VerbForm=Inf\|Voice=Act`, `Case=Com\|Degree=Pos\|Derivation=Inen\|POS=ADJ`, `Case=Com\|Degree=Pos\|Derivation=Llinen\|POS=ADJ`, `Case=Gen\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Typo=Yes`, `Connegative=Yes\|Mood=Cnd\|POS=VERB\|VerbForm=Fin\|Voice=Pass`, `Case=Nom\|Derivation=Ton,Vs\|Number=Plur\|POS=NOUN`, `Case=Gen\|Derivation=Ja\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Derivation=Minen\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Number=Sing\|POS=NUM`, `Abbr=Yes\|Case=Ill\|Degree=Pos\|Number=Sing\|POS=ADJ`, 
`Case=Par\|Derivation=Inen,Vs\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Ind`, `Case=Tra\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ade\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Derivation=Tar\|Number=Plur\|POS=NOUN`, `Case=Gen\|Derivation=Tar\|Number=Sing\|POS=NOUN`, `Case=Ade\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ess\|Degree=Sup\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Abl\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Clitic=Kaan\|Connegative=Yes\|Mood=Ind\|POS=AUX\|Tense=Pres\|VerbForm=Fin\|Voice=Pass`, `Clitic=Kin\|Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Plur\|POS=PRON`, `Case=Ill\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Case=Nom\|Number=Plur\|POS=PRON\|PronType=Rel\|Typo=Yes`, `Case=Ade\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Ess\|Derivation=Lainen\|Number=Plur\|POS=NOUN`, `Case=Ade\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=Gen\|Derivation=Lainen,Vs\|Number=Sing\|POS=NOUN`, `Case=All\|Derivation=Lainen\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|Degree=Cmp\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ill\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `POS=PROPN\|Typo=Yes`, `Case=All\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Ine\|Number=Sing\|POS=PRON\|Person[psor]=3\|Reflex=Yes`, `Abbr=Yes\|Case=Nom\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ill\|Derivation=Lainen\|Number=Sing\|POS=PROPN`, `Case=Ela\|Degree=Cmp\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Abl\|Number=Plur\|POS=PROPN`, `Case=Ess\|Derivation=Ja\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Abbr=Yes\|Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Sup\|Derivation=Lainen\|Number=Sing\|POS=ADJ`, `Case=Tra\|Degree=Pos\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Nom\|Derivation=Lainen,Vs\|Number=Sing\|POS=NOUN`, `Case=Abl\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Typo=Yes`, `Case=Nom\|Degree=Sup\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Sup\|Number=Sing\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Tra\|Degree=Cmp\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Ela\|Clitic=Kin\|Degree=Sup\|Number=Sing\|POS=ADJ`, `Abbr=Yes\|POS=ADJ`, `Case=Ine\|Degree=Cmp\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Par\|Derivation=Tar\|Number=Sing\|POS=NOUN`, `Case=Ela\|Degree=Sup\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Clitic=Kin\|Degree=Pos\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Ess\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Case=Ela\|Derivation=U\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=All\|Clitic=Kin\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ela\|Derivation=Lainen,Vs\|Number=Sing\|POS=NOUN`, `Case=Nom\|Degree=Pos\|Derivation=Ton\|Number=Sing\|POS=VERB\|PartForm=Neg\|VerbForm=Part\|Voice=Act`, `POS=CCONJ\|Typo=Yes`, `Case=All\|Number=Sing\|POS=PRON`, `Case=Ess\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN`, `Case=Ine\|Derivation=Llinen,Vs\|Number=Plur\|POS=NOUN`, `Case=Gen\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Ine\|POS=SYM`, `Abbr=Yes\|Case=Ess\|Number=Sing\|POS=NOUN`, `Case=Par\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Abbr=Yes\|Case=Nom\|NumType=Card\|Number=Sing\|POS=NUM`, 
`Case=Ade\|Clitic=Kin\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Par\|Degree=Cmp\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Tra\|Derivation=Ja\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Derivation=Minen\|Number=Sing\|POS=NOUN`, `Clitic=Ko\|Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin\|Voice=Act`, `Case=Nom\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Nom\|Derivation=Inen,Vs\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ade\|Clitic=Kaan\|Derivation=Ja\|Number=Sing\|POS=NOUN`, `Abbr=Yes\|POS=PROPN`, `Case=Par\|Derivation=Inen\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Neg\|VerbForm=Part\|Voice=Act`, `Case=Ela\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|Person[psor]=3\|VerbForm=Part\|Voice=Act`, `Case=Ade\|Degree=Sup\|Derivation=Inen\|Number=Plur\|POS=ADJ\|Person[psor]=3`, `Case=Ade\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=All\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=Par\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Clitic=Kin\|Number=Plur\|POS=PROPN`, `Case=Par\|Derivation=Inen\|Number=Plur\|POS=ADJ`, `Case=Ine\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, `Case=Tra\|Derivation=Ja\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Abbr=Yes\|Case=Tra\|Degree=Pos\|Number=Plur\|POS=ADJ`, `Abbr=Yes\|Case=Gen\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ade\|InfForm=3\|Number=Sing\|POS=VERB\|Typo=Yes\|VerbForm=Inf\|Voice=Act`, `Case=Par\|Degree=Sup\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Par\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=All\|NumType=Ord\|POS=ADJ`, `Case=All\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=Par\|NumType=Card\|Number=Sing\|POS=ADJ`, `Case=Ela\|Derivation=Vs\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=PRON\|Reflex=Yes`, `Case=Tra\|Degree=Pos\|Number=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=AUX\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Abbr=Yes\|Case=Ine\|Degree=Pos\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Sup\|Number=Sing\|POS=ADJ\|Person[psor]=3`, `Case=Nom\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Case=Ill\|Number=Plur\|POS=NOUN\|Person[psor]=3\|Typo=Yes`, `Case=Tra\|Derivation=Vs\|Number=Plur\|POS=NOUN`, `Case=Ess\|Derivation=Inen,Vs\|Number=Plur\|POS=NOUN`, `Case=Gen\|Clitic=Kin\|Number=Plur\|POS=PRON`, `Case=Ill\|Clitic=Kin\|Number=Plur\|POS=PRON`, `Case=Ine\|Number=Plur\|POS=PRON\|PronType=Ind\|Style=Coll`, `Case=Ill\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Case=Ade\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|Typo=Yes\|VerbForm=Fin\|Voice=Act`, `Case=Par\|Derivation=Llinen\|Number=Plur\|POS=ADJ`, `Case=Ess\|Derivation=Lainen\|Number=Sing\|POS=NOUN`, `Case=Tra\|Degree=Sup\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Ela\|Number=Sing\|POS=PRON\|PronType=Rcp`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs\|Typo=Yes`, `Case=All\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Pres\|VerbForm=Part\|Voice=Pass`, `Case=Ess\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Tra\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Abbr=Yes\|Case=Gen\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Number=Sing\|POS=PRON\|PronType=Rel`, `Case=Gen\|Degree=Sup\|Derivation=Lainen\|Number=Plur\|POS=ADJ`, 
`Case=Ess\|Derivation=Inen\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Case=Tra\|Degree=Sup\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Gen\|Derivation=Llinen,Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Degree=Pos\|Derivation=Lainen\|POS=ADJ`, `Abbr=Yes\|Case=Par\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Abl\|Degree=Pos\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `Abbr=Yes\|Case=Par\|Number=Sing\|POS=PROPN`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Agt\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Par\|Derivation=Tar\|Number=Plur\|POS=NOUN`, `Case=Nom\|Derivation=Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ela\|Degree=Sup\|Derivation=Ton\|Number=Plur\|POS=ADJ`, `InfForm=1\|Number=Sing\|POS=VERB\|Typo=Yes\|VerbForm=Inf\|Voice=Act`, `Case=Ade\|Degree=Cmp\|Number=Plur\|POS=ADJ`, `Case=All\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Abl\|Number=Plur\|POS=NOUN\|Typo=Yes`, `Case=All\|Number=Sing\|POS=NOUN\|Typo=Yes`, `Case=Ine\|Derivation=Minen\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=All\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Agt\|VerbForm=Part\|Voice=Act`, `Case=All\|Degree=Sup\|Number=Plur\|POS=ADJ`, `Case=Gen\|Derivation=Llinen\|Number=Sing\|POS=ADJ`, `Case=Gen\|Derivation=Ton,Vs\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ill\|Clitic=Kin\|Degree=Pos\|Derivation=Inen\|Number=Sing\|POS=ADJ`, `Case=Nom\|Degree=Pos\|Number=Sing\|POS=PROPN`, `Abbr=Yes\|Case=Nom\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Ess\|Derivation=Ja\|Number=Plur\|POS=NOUN\|Person[psor]=3`, `Case=Ine\|Clitic=Kin\|Number=Plur\|POS=PRON\|PronType=Dem`, `Case=Nom\|Number=Plur\|POS=PROPN\|Person[psor]=3`, `Case=Abl\|Number=Plur\|POS=PRON\|PronType=Rel`, `Case=Par\|POS=SYM`, `Case=Ine\|Derivation=Ton,Vs\|Number=Sing\|POS=NOUN`, `Case=Ine\|Degree=Pos\|Number=Plur\|POS=VERB\|PartForm=Past\|VerbForm=Part\|Voice=Act`, `Case=Ine\|Number=Sing\|POS=PROPN\|Typo=Yes`, `Mood=Pot\|POS=VERB\|Typo=Yes\|VerbForm=Fin\|Voice=Pass`, `Case=Ela\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Derivation=Minen\|Number=Sing\|POS=NOUN\|Person[psor]=3\|Style=Coll`, `Case=Par\|Number=Plur\|POS=NOUN\|Person[psor]=3\|Style=Coll`, `Case=Nom\|Clitic=Kin\|Number=Sing\|POS=NOUN\|Person[psor]=3`, `Case=Tra\|Number=Sing\|POS=PRON\|PronType=Int`, `Case=Par\|Degree=Pos\|Number=Sing\|POS=VERB\|PartForm=Past\|Typo=Yes\|VerbForm=Part\|Voice=Act`, `Case=Tra\|Number=Sing\|POS=PRON\|PronType=Dem`, `Case=Ade\|Number=Sing\|POS=NUM`, `Case=Par\|Derivation=Ton\|Number=Plur\|POS=NOUN`, `Case=Ine\|Degree=Pos\|Derivation=Lainen\|Number=Sing\|POS=ADJ\|Typo=Yes`, `Case=Ine\|Derivation=Ton,Vs\|Number=Plur\|POS=NOUN`, `Case=Gen\|Clitic=Kin\|Degree=Pos\|Number=Plur\|POS=PRON\|PronType=Ind` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `cc:preconj`, `ccomp`, `compound`, `compound:nn`, `compound:prt`, `conj`, `cop`, `cop:own`, `csubj`, `csubj:cop`, `dep`, `det`, `discourse`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `goeswith`, `mark`, `nmod`, `nmod:gobj`, `nmod:gsubj`, `nmod:poss`, `nsubj`, `nsubj:cop`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp`, `xcomp:ds` |
| **`experimental_edit_tree_lemmatizer`** | `3`, `4`, `7`, `10`, `13`, `15`, `19`, `21`, `23`, `25`, `29`, `35`, `40`, `41`, `45`, `48`, `50`, `52`, `55`, `57`, `59`, `61`, `64`, `67`, `71`, `73`, `75`, `77`, `80`, `85`, `86`, `90`, `92`, `94`, `96`, `99`, `101`, `103`, `104`, `106`, `109`, `111`, `112`, `114`, `117`, `120`, `123`, `127`, `130`, `134`, `136`, `138`, `141`, `145`, `147`, `149`, `151`, `153`, `157`, `158`, `160`, `161`, `163`, `166`, `168`, `170`, `173`, `175`, `177`, `179`, `181`, `184`, `187`, `191`, `194`, `198`, `199`, `201`, `202`, `205`, `207`, `210`, `212`, `214`, `217`, `218`, `222`, `224`, `226`, `228`, `230`, `232`, `234`, `236`, `239`, `241`, `243`, `246`, `249`, `251`, `253`, `254`, `256`, `258`, `260`, `261`, `264`, `267`, `269`, `271`, `273`, `274`, `278`, `281`, `282`, `284`, `286`, `289`, `291`, `292`, `294`, `299`, `301`, `304`, `306`, `308`, `310`, `313`, `316`, `317`, `320`, `322`, `327`, `329`, `334`, `336`, `338`, `340`, `344`, `345`, `348`, `350`, `352`, `354`, `357`, `359`, `362`, `363`, `365`, `366`, `367`, `368`, `369`, `370`, `372`, `375`, `377`, `380`, `382`, `385`, `387`, `389`, `390`, `392`, `395`, `397`, `400`, `403`, `406`, `408`, `411`, `413`, `415`, `417`, `419`, `421`, `423`, `425`, `428`, `431`, `433`, `436`, `438`, `440`, `442`, `443`, `446`, `448`, `451`, `453`, `455`, `457`, `459`, `461`, `463`, `466`, `469`, `471`, `473`, `476`, `477`, `481`, `482`, `484`, `488`, `490`, `491`, `495`, `498`, `501`, `503`, `506`, `509`, `513`, `515`, `517`, `519`, `521`, `523`, `526`, `528`, `529`, `531`, `533`, `535`, `537`, `538`, `539`, `542`, `544`, `546`, `548`, `551`, `553`, `556`, `558`, `562`, `564`, `566`, `568`, `570`, `574`, `576`, `578`, `582`, `584`, `586`, `588`, `591`, `593`, `595`, `596`, `598`, `600`, `601`, `602`, `604`, `606`, `608`, `609`, `435`, `610`, `611`, `614`, `616`, `617`, `620`, `622`, `625`, `626`, `628`, `630`, `631`, `633`, `635`, `637`, `638`, `640`, `641`, `643`, `645`, `646`, `650`, `651`, `653`, `655`, `657`, `659`, `660`, `664`, `667`, `671`, `672`, `674`, `677`, `681`, `683`, `684`, `686`, `687`, `689`, `691`, `693`, `695`, `698`, `701`, `703`, `705`, `707`, `710`, `713`, `716`, `720`, `723`, `725`, `726`, `730`, `731`, `734`, `736`, `738`, `739`, `741`, `744`, `748`, `749`, `750`, `752`, `755`, `757`, `759`, `761`, `762`, `763`, `767`, `769`, `772`, `774`, `777`, `780`, `781`, `782`, `784`, `785`, `787`, `788`, `790`, `792`, `793`, `794`, `797`, `799`, `802`, `803`, `805`, `807`, `810`, `388`, `811`, `813`, `815`, `817`, `821`, `823`, `824`, `826`, `828`, `829`, `831`, `832`, `833`, `834`, `836`, `838`, `840`, `842`, `844`, `845`, `847`, `849`, `852`, `855`, `857`, `861`, `863`, `865`, `867`, `868`, `870`, `872`, `875`, `876`, `878`, `879`, `881`, `883`, `886`, `888`, `890`, `891`, `892`, `895`, `896`, `898`, `900`, `903`, `907`, `910`, `912`, `914`, `915`, `917`, `920`, `921`, `924`, `926`, `928`, `930`, `932`, `934`, `937`, `940`, `941`, `943`, `944`, `945`, `946`, `947`, `949`, `952`, `954`, `956`, `960`, `963`, `966`, `969`, `971`, `972`, `974`, `977`, `978`, `981`, `983`, `985`, `987`, `990`, `991`, `993`, `995`, `996`, `999`, `1002`, `1006`, `1008`, `1011`, `1013`, `1016`, `1018`, `1020`, `1022`, `1024`, `1026`, `1028`, `1030`, `1032`, `1034`, `1036`, `1038`, `1040`, `1043`, `1044`, `1046`, `1048`, `1051`, `1054`, `1056`, `1057`, `1060`, `1062`, `1064`, `1066`, `1067`, `1069`, `1071`, `1074`, `1077`, `1078`, `1081`, `1084`, `1086`, `1087`, `1089`, `1091`, `1093`, `1095`, `1096`, `1098`, `1100`, `1101`, `1103`, 
`1104`, `1106`, `1108`, `1111`, `1114`, `1116`, `1118`, `1119`, `1121`, `1123`, `1125`, `1127`, `1128`, `1133`, `1136`, `1139`, `1141`, `1143`, `1146`, `1149`, `1150`, `1151`, `1153`, `1156`, `1157`, `1159`, `1161`, `1163`, `1167`, `1169`, `1171`, `1172`, `1174`, `1176`, `1180`, `1181`, `1184`, `1186`, `1189`, `1190`, `1193`, `1195`, `1197`, `1199`, `1202`, `1204`, `1205`, `1206`, `1207`, `1209`, `1210`, `1212`, `1214`, `1218`, `1220`, `1222`, `1224`, `1225`, `1227`, `1229`, `1230`, `1232`, `1235`, `1236`, `1238`, `1239`, `1241`, `1245`, `1246`, `1248`, `1249`, `1251`, `1252`, `1253`, `1255`, `1256`, `1259`, `1260`, `1262`, `1263`, `1265`, `1268`, `1269`, `1271`, `1272`, `1275`, `1276`, `1277`, `1279`, `1280`, `1283`, `1285`, `1286`, `1289`, `1291`, `1294`, `1295`, `1298`, `1300`, `1302`, `1304`, `1306`, `1308`, `1311`, `1312`, `1313`, `1314`, `1316`, `1317`, `1318`, `1320`, `1322`, `1323`, `1325`, `1327`, `1330`, `1332`, `1334`, `1339`, `1341`, `1344`, `1345`, `1347`, `1349`, `1352`, `1355`, `1356`, `1360`, `1363`, `1365`, `1367`, `1368`, `1369`, `1372`, `1374`, `1376`, `1377`, `1379`, `1380`, `1382`, `1384`, `1386`, `1389`, `1391`, `1392`, `1393`, `1396`, `1399`, `1400`, `1401`, `1403`, `1405`, `1406`, `1408`, `1411`, `1414`, `1416`, `1417`, `1419`, `1420`, `1422`, `1423`, `1425`, `1428`, `1430`, `1433`, `1436`, `1437`, `1439`, `1442`, `1444`, `1446`, `1449`, `1451`, `1454`, `1456`, `1457`, `1459`, `1461`, `1462`, `1464`, `1465`, `1467`, `1469`, `1470`, `1472`, `1475`, `1477`, `1478`, `1480`, `1482`, `1483`, `1484`, `1486`, `1487`, `1489`, `1491`, `1492`, `1494`, `1497`, `1498`, `1499`, `1501`, `1503`, `1506`, `1507`, `1511`, `1513`, `1514`, `1517`, `1519`, `1521`, `1523`, `1526`, `1528`, `1531`, `1533`, `1535`, `1536`, `1538`, `1540`, `1542`, `1545`, `1547`, `1549`, `1550`, `1551`, `1552`, `1554`, `1555`, `1556`, `1558`, `1559`, `1560`, `1562`, `1563`, `1564`, `1566`, `1568`, `1570`, `1575`, `1577`, `1578`, `1579`, `1580`, `1582`, `1585`, `1586`, `1589`, `1590`, `1592`, `1594`, `1598`, `1600`, `1601`, `1603`, `1604`, `1605`, `1607`, `1609`, `1610`, `1613`, `1616`, `1618`, `1619`, `1621`, `1622`, `1624`, `1627`, `1629`, `1631`, `1633`, `1635`, `1638`, `1640`, `1643`, `1646`, `1647`, `1649`, `1651`, `1654`, `1655`, `1658`, `1662`, `1663`, `1666`, `1669`, `1671`, `1673`, `1676`, `1679`, `1682`, `1685`, `1686`, `1688`, `1690`, `1693`, `1695`, `1698`, `1700`, `1702`, `1704`, `1705`, `1707`, `1710`, `1713`, `1715`, `1717`, `1719`, `1721`, `1724`, `1725`, `1727`, `1729`, `1730`, `1731`, `1732`, `1734`, `1736`, `1737`, `1738`, `1741`, `1744`, `1746`, `1747`, `1749`, `1751`, `1752`, `1753`, `1754`, `1756`, `1758`, `1761`, `1762`, `1764`, `1765`, `1766`, `1767`, `1769`, `1772`, `1774`, `1777`, `1778`, `1779`, `1781`, `1782`, `1784`, `1787`, `1790`, `1792`, `1794`, `1798`, `1800`, `1803`, `1805`, `1807`, `1809`, `1810`, `1811`, `1813`, `1815`, `1816`, `1820`, `1823`, `1824`, `1827`, `1830`, `1832`, `1833`, `1834`, `1835`, `1836`, `1838`, `1841`, `1842`, `1843`, `1845`, `1847`, `1849`, `1853`, `1856`, `1858`, `1860`, `1861`, `1863`, `1864`, `1865`, `1866`, `1867`, `1869`, `1870`, `1874`, `1875`, `1876`, `1879`, `1881`, `1882`, `1883`, `1886`, `1887`, `1890`, `1891`, `1893`, `1896`, `1898`, `1901`, `1903`, `1906`, `1908`, `1910`, `1912`, `1914`, `1917`, `1919`, `1921`, `1923`, `1926`, `1927`, `1928`, `1930`, `1931`, `1933`, `1935`, `1937`, `1939`, `1941`, `1943`, `1944`, `1946`, `1948`, `1950`, `1952`, `1955`, `1956`, `1957`, `1958`, `1960`, `1962`, `1963`, `1965`, `1967`, `1969`, `1970`, `1972`, 
`1973`, `1974`, `1975`, `1976`, `1978`, `1981`, `1984`, `1986`, `1989`, `1992`, `1994`, `1995`, `1996`, `1998`, `2000`, `2003`, `2004`, `2006`, `2007`, `2008`, `2009`, `2011`, `2013`, `2016`, `2017`, `2019`, `2020`, `2022`, `2025`, `2028`, `2029`, `2031`, `2034`, `2035`, `2038`, `2041`, `2043`, `2045`, `2047`, `2049`, `2051`, `2052`, `2055`, `2057`, `2058`, `2060`, `2062`, `2063`, `2065`, `2067`, `2069`, `2071`, `2073`, `2074`, `2076`, `2078`, `2082`, `2084`, `2086`, `2088`, `2089`, `2090`, `2092`, `2093`, `2094`, `2096`, `2098`, `2100`, `2102`, `2104`, `2107`, `2109`, `2110`, `2111`, `2112`, `2114`, `2115`, `2116`, `2117`, `2119`, `2121`, `2124`, `2125`, `2126`, `2129`, `2130`, `2132`, `2135`, `2137`, `2140`, `2142`, `2144`, `2146`, `2147`, `2148`, `2150`, `2151`, `2152`, `2153`, `2156`, `2159`, `2161`, `2163`, `2164`, `2165`, `2167`, `2169`, `2170`, `2171`, `2172`, `2173`, `2176`, `2178`, `2180`, `2182`, `2183`, `2186`, `2188`, `2191`, `2193`, `2195`, `2197`, `2198`, `2199`, `2202`, `2204`, `2206`, `2208`, `2210`, `2211`, `2214`, `2218`, `2219`, `2222`, `2224`, `2226`, `2227`, `2228`, `2229`, `2232`, `2234`, `2237`, `2239`, `2240`, `2242`, `2243`, `2245`, `2246`, `2247`, `2248`, `2249`, `2252`, `2253`, `2256`, `2258`, `2261`, `2263`, `2265`, `2269`, `2271`, `2273`, `2274`, `2276`, `2277`, `2279`, `2282`, `2284`, `2287`, `2290`, `2292`, `2293`, `2294`, `2296`, `2297`, `2300`, `2301`, `2303`, `2305`, `2308`, `2310`, `2312`, `2313`, `2315`, `2316`, `2317`, `2319`, `2321`, `2322`, `2324`, `2325`, `2326`, `2330`, `2332`, `2334`, `2335`, `2338`, `2340`, `2341`, `2343`, `2345`, `2346`, `2348`, `2349`, `2350`, `2352`, `2354`, `2356`, `2358`, `2360`, `2362`, `2364`, `2368`, `2371`, `2376`, `2377`, `2379`, `2381`, `2382`, `2383`, `2384`, `2385`, `2387`, `2388`, `2389`, `2390`, `2392`, `2393`, `2394`, `2395`, `2396`, `2399`, `2401`, `2403`, `2405`, `2408`, `2409`, `2411`, `2414`, `2416`, `2418`, `2421`, `2423`, `2425`, `2428`, `2429`, `2430`, `2432`, `2435`, `2437`, `2439`, `2441`, `2445`, `2448`, `2449`, `2451`, `2452`, `2453`, `2454`, `2456`, `2459`, `2462`, `2463`, `2464`, `2466`, `2467`, `2469`, `2472`, `2475`, `2476`, `2478`, `2481`, `2483`, `2485`, `2488`, `2490`, `2493`, `2497`, `2499`, `2502`, `2504`, `2506`, `2509`, `2511`, `2513`, `2514`, `2516`, `2520`, `2523`, `2526`, `2527`, `2530`, `2531`, `2533`, `2534`, `2536`, `2537`, `2539`, `2540`, `2543`, `2546`, `2548`, `2551`, `2554`, `2555`, `2557`, `2559`, `2560`, `2562`, `2563`, `2566`, `2568`, `2570`, `2572`, `2575`, `2578`, `2580`, `2583`, `2586`, `2588`, `2590`, `2593`, `2596`, `2598`, `2601`, `2603`, `2605`, `2608`, `2611`, `2614`, `2615`, `2617`, `2618`, `2620`, `2623`, `2626`, `2629`, `2631`, `2633`, `2635`, `2637`, `2639`, `2640`, `2642`, `2644`, `2646`, `2648`, `2652`, `2655`, `2657`, `2660`, `2662`, `2663`, `2666`, `2668`, `2669`, `2672`, `2676`, `2679`, `2682`, `2685`, `2687`, `2689`, `2691`, `2693`, `2695`, `2697`, `2699`, `2702`, `2703`, `2705`, `2707`, `2709`, `2711`, `2713`, `2714`, `2720`, `2722`, `2724`, `2726`, `2728`, `2730`, `2732`, `2734`, `2736`, `2738`, `2740`, `2743`, `2746`, `2749`, `2753`, `2755`, `2757`, `2759`, `2760`, `2762`, `2765`, `2766`, `2767`, `2769`, `2771`, `2772`, `2775`, `2778`, `2781`, `2783`, `2786`, `2788`, `2792`, `2793`, `2796`, `2798`, `2799`, `2802`, `2805`, `2806`, `2809`, `2810`, `2813`, `2814`, `2817`, `2820`, `2822`, `2823`, `2824`, `2825`, `2827`, `2829`, `2831`, `2833`, `2835`, `2837`, `2839`, `2842`, `2844`, `2845`, `2846`, `2849`, `2851`, `2853`, `2855`, `2857`, `2860`, `2862`, `2864`, 
`2865`, `2867`, `2869`, `2871`, `2874`, `2875`, `2877`, `2879`, `2880`, `2881`, `2882`, `2883`, `2886`, `2888`, `2889`, `2890`, `2892`, `2893`, `2894`, `2896`, `2899`, `2900`, `2902`, `2903`, `2904`, `2905`, `2906`, `2907`, `2909`, `2912`, `2915`, `2917`, `2918`, `2920`, `2922`, `2924`, `2925`, `2926`, `2928`, `2930`, `2932`, `2936`, `2938`, `2940`, `2943`, `2944`, `2947`, `2948`, `2950`, `2951`, `2954`, `2956`, `2957`, `2959`, `2961`, `2962`, `2963`, `2964`, `2965`, `2966`, `2968`, `2969`, `2972`, `2974`, `2975`, `2978`, `2979`, `2982`, `2985`, `2986`, `2988`, `2989`, `2991`, `2993`, `2994`, `2996`, `2999`, `3000`, `3002`, `3005`, `3007`, `3008`, `3010`, `3011`, `3012`, `3014`, `3016`, `3018`, `3020`, `3021`, `3023`, `3024`, `3027`, `3029`, `3032`, `3034`, `3035`, `3037`, `3040`, `3041`, `3044`, `3047`, `3048`, `3049`, `3052`, `3054`, `3056`, `3058`, `3060`, `3062`, `3064`, `3066`, `3067`, `3070`, `3071`, `3072`, `3074`, `3076`, `3077`, `3080`, `3082`, `3085`, `3088`, `3089`, `3092`, `3095`, `3097`, `3098`, `3100`, `3102`, `3104`, `3107`, `3108`, `3110`, `3113`, `3115`, `3117`, `3121`, `3122`, `3123`, `3125`, `3127`, `3128`, `3131`, `3134`, `3135`, `3136`, `3138`, `3139`, `3141`, `3142`, `3143`, `3145`, `3146`, `3147`, `3148`, `3151`, `3152`, `3154`, `3155`, `3158`, `3160`, `3163`, `3164`, `3165`, `3167`, `3168`, `3170`, `3171`, `3173`, `3174`, `3175`, `3176`, `3178`, `3179`, `3180`, `3182`, `3184`, `3186`, `3188`, `3190`, `3193`, `3195`, `3196`, `3198`, `3200`, `3202`, `3203`, `3205`, `3207`, `3209`, `3211`, `3213`, `3215`, `3217`, `3220`, `3221`, `3224`, `3225`, `3226`, `3229`, `3231`, `3233`, `3234`, `3237`, `3239`, `3240`, `3241`, `3242`, `3243`, `3244`, `3246`, `3249`, `3251`, `3252`, `3254`, `3256`, `3258`, `3259`, `3260`, `3262`, `3264`, `3267`, `3268`, `3269`, `3270`, `3271`, `3273`, `3275`, `3278`, `3280`, `3281`, `3282`, `3285`, `3286`, `3288`, `3290`, `3292`, `3295`, `3296`, `3298`, `3300`, `3301`, `3302`, `3304`, `3306`, `3309`, `3311`, `3312`, `3314`, `3315`, `3317`, `3319`, `3321`, `3324`, `3325`, `3327`, `3329`, `3331`, `3333`, `3335`, `3336`, `3338`, `3341`, `3343`, `3345`, `3347`, `3351`, `3352`, `3354`, `3356`, `3358`, `3359`, `3361`, `3363`, `3366`, `3367`, `3370`, `3371`, `3372`, `3373`, `3374`, `3375`, `3377`, `3381`, `3383`, `3384`, `3386`, `3388`, `3391`, `3394`, `3395`, `3398`, `3400`, `3402`, `3403`, `3405`, `3406`, `3407`, `3409`, `3411`, `3414`, `3417`, `3418`, `3419`, `3420`, `3422`, `3424`, `3426`, `3427`, `3429`, `3430`, `3433`, `3434`, `3435`, `3437`, `3442`, `3446`, `3447`, `3449`, `3450`, `3452`, `3453`, `3454`, `3457`, `3459`, `3461`, `3464`, `3466`, `3467`, `3469`, `3470`, `3472`, `3475`, `3477`, `3479`, `3482`, `3484`, `3486`, `3487`, `3488`, `3490`, `3491`, `3493`, `3496`, `3498`, `3501`, `3503`, `3505`, `3506`, `3508`, `3510`, `3511`, `3513`, `3515`, `3516`, `3519`, `3522`, `3524`, `3526`, `3528`, `3530`, `3532`, `3533`, `3535`, `3538`, `3540`, `3541`, `3543`, `3544`, `3546`, `3548`, `3550`, `3552`, `3553`, `3556`, `3557`, `3558`, `3559`, `3561`, `3562`, `3563`, `3564`, `3567`, `3569`, `3571`, `3572`, `3573`, `3574`, `3576`, `3578`, `3580`, `3583`, `3584`, `3586`, `3589`, `3590`, `3592`, `3593`, `3596`, `3598`, `3599`, `3601`, `3603`, `3605`, `3607`, `3608`, `3609`, `3611`, `3613`, `3614`, `3615`, `3617`, `3619`, `3620`, `3622`, `3624`, `3625`, `3626`, `3628`, `3630`, `3633`, `3635`, `3636`, `3639`, `3640`, `3642`, `3645`, `3647`, `3648`, `3650`, `3652`, `3654`, `3655`, `3657`, `3660`, `3662`, `3663`, `3664`, `3665`, `3666`, `3670`, `3671`, `3673`, 
`3675`, `3677`, `3679`, `3681`, `3684`, `3686`, `3687`, `3690`, `3692`, `3694`, `3696`, `3698`, `3700`, `3701`, `3703`, `3705`, `3708`, `3709`, `3712`, `3715`, `3716`, `3719`, `3721`, `3722`, `3723`, `3724`, `3725`, `3727`, `3728`, `3730`, `3731`, `3735`, `3737`, `3740`, `3742`, `3743`, `3744`, `3745`, `3746`, `3747`, `3749`, `3750`, `3752`, `3754`, `3757`, `3758`, `3759`, `3760`, `3763`, `3765`, `3766`, `3768`, `3771`, `3772`, `3774`, `3775`, `3777`, `3779`, `3781`, `3782`, `3784`, `3786`, `3787`, `3788`, `3789`, `3790`, `3791`, `3792`, `3794`, `3795`, `3797`, `3798`, `3799`, `3801`, `3803`, `3805`, `3808`, `3811`, `3812`, `3813`, `3815`, `3816`, `3818`, `2885`, `3820`, `3822`, `3823`, `3826`, `3828`, `3830`, `3833`, `3834`, `3837`, `3840`, `3842`, `3843`, `3845`, `3848`, `3851`, `3852`, `3853`, `3856`, `3859`, `3860`, `3861`, `3862`, `3863`, `3864`, `3866`, `3867`, `3868`, `3870`, `3871`, `3872`, `3874`, `3876`, `3879`, `3881`, `3883`, `3886`, `3888`, `3890`, `3891`, `3893`, `3895`, `3897`, `3898`, `3900`, `3902`, `3904`, `3905`, `3908`, `3909`, `3910`, `3912`, `3913`, `3914`, `3917`, `3918`, `3920`, `3922`, `3923`, `3924`, `3926`, `3927`, `3928`, `3930`, `3933`, `3936`, `3937`, `3939`, `3942`, `3944`, `3945`, `3947`, `3950`, `3952`, `3955`, `3956`, `3957`, `3958`, `3959`, `3960`, `3961`, `3962`, `3963`, `3964`, `3965`, `3966`, `3967`, `3969`, `3971`, `3973`, `3976`, `3978`, `3979`, `3980`, `3981`, `3982`, `3984`, `3985`, `3989`, `3992`, `3993`, `3995`, `3996`, `3998`, `4000`, `4003`, `4004`, `4005`, `4007`, `4008`, `4011`, `4013`, `4014`, `4015`, `4016`, `4018`, `4020`, `4021`, `4022`, `4024`, `4025`, `4026`, `4028`, `4029`, `4032`, `4033`, `4034`, `4036`, `4039`, `4040`, `4042`, `4044`, `4046`, `4047`, `4049`, `4051`, `4052`, `4054`, `4055`, `4056`, `4057`, `4060`, `4064`, `4066`, `4068`, `4069`, `4070`, `4071`, `4072`, `4073`, `4074`, `4076`, `4078`, `4081`, `4082`, `4083`, `4084`, `4085`, `4087`, `4089`, `4091`, `4093`, `4095`, `4096`, `4098`, `4102`, `4103`, `4105`, `4106`, `4107`, `4110`, `4111`, `4112`, `4113`, `4115`, `4119`, `4121`, `4122`, `4123`, `4124`, `4126`, `4127`, `4129`, `4130`, `4134`, `4136`, `4138`, `4142`, `4143`, `4144`, `4145`, `4148`, `4150`, `4151`, `4153`, `4155`, `4156`, `4158`, `4159`, `4161`, `4162`, `4163`, `4165`, `4166`, `4169`, `4170`, `4172`, `4173`, `4174`, `4175`, `4177`, `4178`, `4180`, `4182`, `4183`, `4185`, `4187`, `4189`, `4191`, `4192`, `4194`, `4195`, `4196`, `4198`, `4200`, `4202`, `4203`, `4204`, `4205`, `4207`, `4209`, `4211`, `4213`, `4215`, `4216`, `4219`, `4221`, `4222`, `4223`, `4224`, `4227`, `4229`, `4232`, `4233`, `4234`, `4236`, `4238`, `4239`, `4240`, `4241`, `4243`, `4245`, `4247`, `4250`, `4251`, `4254`, `4255`, `4256`, `4258`, `4262`, `4265`, `4268`, `4271`, `4273`, `4275`, `4277`, `4278`, `4279`, `4281`, `4283`, `4285`, `4286`, `4288`, `4291`, `4293`, `4295`, `4297`, `4299`, `4300`, `4303`, `4306`, `4309`, `4312`, `4314`, `4315`, `4318`, `4320`, `4324`, `4326`, `4328`, `4330`, `4331`, `4334`, `4336`, `4338`, `4342`, `4343`, `4345`, `4346`, `4347`, `4348`, `4349`, `4350`, `4352`, `4356`, `4359`, `4361`, `4362`, `4363`, `4364`, `4369`, `4370`, `4372`, `4374`, `4376`, `4380`, `4382`, `4383`, `4384`, `4388`, `4391`, `4392`, `4394`, `4395`, `4396`, `4398`, `4400`, `4402`, `4404`, `4405`, `4407`, `4409`, `4410`, `4411`, `4414`, `4416`, `4418`, `4419`, `4421`, `4423`, `4425`, `4426`, `4428`, `4429`, `4431`, `4432`, `4434`, `4437`, `4438`, `4439`, `4441`, `4442`, `4443`, `4445`, `4446`, `4449`, `4450`, `4452`, `4454`, `4456`, `4457`, 
`4458`, `4459`, `4462`, `4463`, `4464`, `4465`, `4466`, `4469`, `4472`, `4475`, `4478`, `4481`, `4482`, `4484`, `4485`, `4487`, `4489`, `4492`, `4493`, `4496`, `4498`, `4499`, `4500`, `4502`, `4503`, `4505`, `4506`, `4508`, `4510`, `4511`, `4512`, `4514`, `4515`, `4518`, `4519`, `4521`, `4524`, `4526`, `4527`, `4528`, `4531`, `4534`, `4537`, `4539`, `4541`, `4542`, `4544`, `4546`, `4549`, `4551`, `4552`, `4556`, `4558`, `4561`, `4564`, `4566`, `4568`, `4569`, `4572`, `4574`, `4575`, `4576`, `4577`, `4578`, `4580`, `4582`, `4583`, `4585`, `4587`, `4589`, `4591`, `4593`, `4594`, `4596`, `4597`, `4600`, `4601`, `4602`, `4603`, `4605`, `4606`, `4608`, `4611`, `4614`, `4615`, `4617`, `4620`, `4623`, `4625`, `4626`, `4627`, `4631`, `4632`, `4633`, `4634`, `4637`, `4638`, `4639`, `4642`, `4643`, `4646`, `4648`, `4650`, `4651`, `4653`, `4655`, `4656`, `4657`, `4658`, `4659`, `4661`, `4664`, `4666`, `4668`, `4669`, `4670`, `4671`, `4673`, `4674`, `4677`, `4679`, `4680`, `4683`, `4684`, `4685`, `4687`, `4688`, `4690`, `4691`, `4693`, `4694`, `4697`, `4698`, `4699`, `4701`, `4702`, `4705`, `4706`, `4707`, `4709`, `4710`, `4711`, `4714`, `4715`, `4717`, `4718`, `4720`, `4722`, `4726`, `4728`, `4730`, `4734`, `4736`, `4737`, `4738`, `4740`, `4743`, `4745`, `4747`, `4748`, `4749`, `4751`, `4752`, `4754`, `4756`, `4759`, `4760`, `4761`, `4763`, `4764`, `4765`, `4769`, `4770`, `4773`, `4775`, `4776`, `4777`, `4779`, `4782`, `4783`, `4784`, `4787`, `4789`, `4790`, `4791`, `4792`, `4794`, `4795`, `4796`, `4797`, `4798`, `4801`, `4802`, `4803`, `4805`, `4807`, `4811`, `4812`, `4814`, `4817`, `4818`, `4819`, `4821`, `4824`, `4825`, `4826`, `4829`, `4831`, `4833`, `4834`, `4835`, `4837`, `4839`, `4840`, `4842`, `4844`, `4846`, `4848`, `4849`, `4852`, `4854`, `4856`, `4857`, `4860`, `4861`, `4862`, `4863`, `4866`, `4867`, `4869`, `4871`, `4872`, `4875`, `4877`, `4879`, `4881`, `4886`, `4887`, `4889`, `4890`, `4892`, `4893`, `4896`, `4897`, `4898`, `4900`, `4901`, `4904`, `4905`, `4907`, `4909`, `4910`, `4912`, `4914`, `4916`, `4919`, `4922`, `4924`, `4926`, `4927`, `4929`, `4930`, `4933`, `4934`, `4936`, `4938`, `4940`, `4943`, `4946`, `4948`, `4949`, `4950`, `4951`, `4952`, `4954`, `4955`, `4958`, `4960`, `4962`, `4964`, `4965`, `4967`, `4970`, `4972`, `4973`, `4975`, `4978`, `4981`, `4983`, `4984`, `4986`, `4987`, `4989`, `4990`, `4991`, `4992`, `4993`, `4994`, `4995`, `4998`, `4999`, `5000`, `5002`, `5003`, `5006`, `5007`, `5008`, `5010`, `5011`, `5014`, `5017`, `5019`, `5020`, `5023`, `5024`, `5026`, `5028`, `5029`, `5031`, `5033`, `5034`, `5035`, `5036`, `5037`, `5039`, `5041`, `5043`, `5046`, `5049`, `5051`, `5053`, `5054`, `5056`, `5059`, `5062`, `5063`, `5065`, `5067`, `5068`, `5069`, `5072`, `5075`, `5076`, `5077`, `5078`, `5079`, `5081`, `5082`, `5085`, `5088`, `5089`, `5090`, `5091`, `5093`, `5096`, `5098`, `5100`, `5102`, `5103`, `5104`, `5105`, `5106`, `5107`, `5109`, `5111`, `5112`, `5113`, `5115`, `5116`, `5117`, `5119`, `5121`, `5122`, `5123`, `5125`, `5126`, `5127`, `5128`, `5130`, `5132`, `5133`, `5135`, `5136`, `5137`, `5139`, `5141`, `5143`, `5144`, `5147`, `5148`, `5150`, `5153`, `5154`, `5157`, `5159`, `5161`, `5163`, `5165`, `5169`, `5171`, `5172`, `5173`, `5175`, `5178`, `5179`, `5181`, `5185`, `5186`, `5188`, `5190`, `5193`, `5195`, `5196`, `5199`, `5202`, `5203`, `5205`, `5207`, `5209`, `5211`, `5212`, `5214`, `5216`, `5217`, `5220`, `5222`, `5223`, `5224`, `5227`, `5228`, `5230`, `5231`, `5232`, `5233`, `5235`, `5237`, `5238`, `5241`, `5243`, `5244`, `5245`, `5248`, `5250`, `5252`, 
`5254`, `5255`, `5256`, `5259`, `5260`, `5262`, `5266`, `5269`, `5270`, `5272`, `5273`, `5276`, `5277`, `5278`, `5280`, `5283`, `5284`, `5285`, `5286`, `5287`, `5289`, `5290`, `5293`, `5296`, `5299`, `5300`, `5301`, `5302`, `5304`, `5306`, `5308`, `5309`, `5310`, `5313`, `5314`, `5315`, `5316`, `5319`, `5320`, `5321`, `5324`, `5326`, `5327`, `5329`, `5332`, `5333`, `5335`, `5337`, `5339`, `5341`, `5343`, `5345`, `5347`, `5349`, `5350`, `5351`, `5354`, `5355`, `5356`, `5357`, `5358`, `5360`, `5361`, `5362`, `5364`, `5365`, `5369`, `5371`, `5374`, `5375`, `5377`, `5380`, `5383`, `5385`, `5386`, `5387`, `5388`, `5390`, `5392`, `5394`, `5396`, `5398`, `5399`, `5400`, `5402`, `5405`, `5409`, `5410`, `5412`, `5413`, `5414`, `5415`, `5418`, `5420`, `5423`, `5425`, `5426`, `5427`, `5428`, `5431`, `5432`, `5434`, `5436`, `5438`, `5439`, `5441`, `5442`, `5445`, `5447`, `5450`, `5451`, `5452`, `5455`, `5457`, `5459`, `5462`, `5463`, `5465`, `5466`, `5467`, `5469`, `5470`, `5471`, `5474`, `5476`, `5478`, `5479`, `5480`, `5481`, `5483`, `5486`, `5488`, `5489`, `5490`, `5492`, `5495`, `5497`, `5498`, `5500`, `5501`, `5506`, `5508`, `5510`, `5511`, `5512`, `5515`, `5516`, `5517`, `5518`, `5519`, `5522`, `5524`, `5525`, `5527`, `5528`, `5530`, `5531`, `5532`, `5535`, `5536`, `5538`, `5539`, `5541`, `5543`, `5544`, `5546`, `5547`, `5549`, `5552`, `5555`, `5558`, `5559`, `5561`, `5562`, `5563`, `5564`, `5567`, `5568`, `5569`, `5571`, `5572`, `5574`, `5575`, `5577`, `5580`, `5583`, `5584`, `5585`, `5588`, `5590`, `5591`, `5593`, `5594`, `5595`, `5596`, `5597`, `5599`, `5602`, `5603`, `5604`, `5607`, `5609`, `5610`, `5611`, `5612`, `5613`, `5614`, `5616`, `5617`, `5621`, `5623`, `5625`, `5628`, `5630`, `5633`, `5636`, `5639`, `5642`, `5644`, `5645`, `5647`, `5649`, `5651`, `5653`, `5654`, `5655`, `5658`, `5659`, `5662`, `5664`, `5665`, `5667`, `5669`, `5670`, `5671`, `5673`, `5674`, `5676`, `5678`, `5680`, `5682`, `5684`, `5685`, `5687`, `5689`, `5691`, `5695`, `5696`, `5698`, `5699`, `5703`, `5704`, `5706`, `5709`, `5710`, `5711`, `5712`, `5714`, `5717`, `5719`, `5722`, `5723`, `5724`, `5726`, `5727`, `5728`, `5730`, `5733`, `5735`, `5736`, `5738`, `5739`, `5741`, `5743`, `5746`, `5747`, `5748`, `5750`, `5753`, `5756`, `5757`, `5758`, `5759`, `5761`, `5762`, `5765`, `5766`, `5768`, `5771`, `5772`, `5773`, `5776`, `5777`, `5778`, `5779`, `5781`, `5783`, `5784`, `5785`, `5789`, `5791`, `5793`, `5794`, `5797`, `5798`, `5800`, `5802`, `5803`, `5807`, `5808`, `5810`, `5813`, `5815`, `5818`, `5821`, `5823`, `5825`, `5826`, `5827`, `5828`, `5830`, `5831`, `5832`, `5835`, `5838`, `5839`, `5841`, `5842`, `5844`, `5846`, `5849`, `5851`, `5852`, `5854`, `5857`, `5858`, `5860`, `5861`, `5863`, `5864`, `5865`, `5867`, `5869`, `5871`, `5872`, `5873`, `5874`, `5875`, `5876`, `5877`, `5878`, `5879`, `5881`, `5884`, `5885`, `5887`, `5888`, `5889`, `5891`, `5892`, `5894`, `5895`, `5896`, `5898`, `5900`, `5902`, `5904`, `5906`, `5909`, `5911`, `5914`, `5915`, `5916`, `5917`, `5918`, `5919`, `5921`, `5923`, `5925`, `5927`, `5929`, `5931`, `5934`, `5937`, `5939`, `5940`, `5941`, `5943`, `5945`, `5946`, `5948`, `5950`, `5952`, `5954`, `5955`, `5956`, `5960`, `5961`, `5962`, `5965`, `5967`, `5969`, `5972`, `5973`, `5975`, `5976`, `5977`, `5978`, `5981`, `5982`, `5984`, `5986`, `5988`, `5990`, `5991`, `5993`, `5995`, `5998`, `6000`, `6002`, `6005`, `6007`, `6008`, `6009`, `6011`, `6013`, `6014`, `6015`, `6018`, `6019`, `6021`, `6024`, `6026`, `6028`, `6030`, `6031`, `6032`, `6034`, `6036`, `6037`, `6040`, `6042`, `6045`, `6046`, 
`6047`, `6048`, `6049`, `6050`, `6051`, `6053`, `6054`, `6055`, `6056`, `6058`, `6059`, `6061`, `6062`, `6064`, `6065`, `6067`, `6068`, `6069`, `6070`, `6072`, `6073`, `6074`, `6075`, `6077`, `6079`, `6082`, `6084`, `6085`, `6087`, `6089`, `6091`, `6093`, `6096`, `6098`, `6099`, `6101`, `6102`, `6104`, `6106`, `6108`, `6109`, `6110`, `6112`, `6114`, `6115`, `6118`, `6120`, `6122`, `6123`, `6125`, `6127`, `6129`, `6130`, `6132`, `6134`, `6135`, `6136`, `6137`, `6139`, `6140`, `6141`, `6145`, `6147`, `6149`, `6150`, `6151`, `6153`, `6154`, `6155`, `6157`, `6159`, `6161`, `6163`, `6164`, `6165`, `6166`, `6167`, `6170`, `6171`, `6173`, `6176`, `6178`, `6179`, `6182`, `6183`, `6185`, `6187`, `6188`, `6190`, `6191`, `6193`, `6194`, `6196`, `6197`, `6198`, `6199`, `6200`, `6201`, `6203`, `6205`, `6206`, `6207`, `6208`, `6209`, `6211`, `6213`, `6215`, `6216`, `6217`, `6219`, `6220`, `6222`, `6224`, `6226`, `6229`, `6232`, `6235`, `6238`, `6239`, `6242`, `6243`, `6245`, `6247`, `6248`, `6249`, `6252`, `6253`, `6254`, `6256`, `6257`, `6258`, `6260`, `6261`, `6262`, `6263`, `6265`, `6267`, `6269`, `6272`, `6273`, `6274`, `6275`, `6277`, `6278`, `6281`, `6283`, `6285`, `6287`, `6288`, `6289`, `6290`, `6291`, `6293`, `6295`, `6297`, `6298`, `6300`, `6302`, `6304`, `6307`, `6308`, `6310`, `6312`, `6314`, `6316`, `6318`, `6321`, `6323`, `6326`, `6328`, `6329`, `6330`, `6332`, `6333`, `6336`, `6338`, `6340`, `6342`, `6345`, `6346`, `6347`, `6349`, `6350`, `6352`, `6354`, `6357`, `6359`, `6363`, `6364`, `6366`, `6368`, `6369`, `6370`, `6374`, `6376`, `6377`, `6381`, `6384`, `6385`, `6387`, `6389`, `6391`, `6394`, `6395`, `6397`, `6398`, `6400`, `6401`, `6402`, `6403`, `6406`, `6407`, `6409`, `6412`, `6413`, `6414`, `6416`, `6418`, `6419`, `6421`, `6424`, `6426`, `6427`, `6428`, `6430`, `6431`, `6432`, `6433`, `6434`, `6436`, `6438`, `6440`, `6442`, `6443`, `6445`, `6447`, `6450`, `6452`, `6453`, `6454`, `6455`, `6456`, `6458`, `6460`, `6461`, `6462`, `6463`, `6464`, `6465`, `6466`, `6468`, `6469`, `6470`, `6471`, `6472`, `6473`, `6475`, `6476`, `6480`, `6482`, `6484`, `6486`, `6488`, `6491`, `6493`, `6495`, `6497`, `6498`, `6499`, `6500`, `6502`, `6504`, `6506`, `6508`, `6510`, `6511`, `6513`, `6516`, `6517`, `6519`, `6521`, `6523`, `6525`, `6527`, `6528`, `6530`, `6532`, `6533`, `6536`, `6539`, `6541`, `6542`, `6543`, `6544`, `6546`, `6547`, `6550`, `6552`, `6555`, `6556`, `6557`, `6558`, `6559`, `6561`, `6562`, `6563`, `6565`, `6566`, `6567`, `6570`, `6571`, `6572`, `6573`, `6574`, `6575`, `6576`, `6577`, `6578`, `6579`, `6582`, `6583`, `6585`, `6588`, `6589`, `6590`, `6591`, `6592`, `6593`, `6594`, `6596`, `6597`, `6598`, `6600`, `6601`, `6602`, `6603`, `6604`, `6606`, `6607`, `6608`, `6610`, `6611`, `6612`, `6614`, `6617`, `6619`, `6621`, `6622`, `6625`, `6628`, `6631`, `6632`, `6633`, `6634`, `6637`, `6639`, `6640`, `6642`, `6644`, `6645`, `6646`, `6647`, `6649`, `6650`, `6651`, `6653`, `6655`, `6656`, `6658`, `6659`, `6661`, `6662`, `6665`, `6666`, `6668`, `6669`, `6670`, `6671`, `6672`, `6673`, `6674`, `6675`, `6676`, `6677`, `6678`, `6679`, `6680`, `6682`, `6683`, `6684`, `6685`, `6687`, `6688`, `6690`, `6692`, `6693`, `6694`, `6696`, `6697`, `6698`, `6701`, `6702`, `6705`, `6707`, `6708`, `6709`, `6711`, `6712`, `6714`, `6716`, `6717`, `6718`, `6721`, `6723`, `6724`, `6726`, `6730`, `6732`, `6733`, `6734`, `6735`, `6736`, `6738`, `6740`, `6742`, `6744`, `6745`, `6746`, `6747`, `6748`, `6750`, `6753`, `6755`, `6756`, `6758`, `6759`, `6760`, `6761`, `6763`, `6764`, `6767`, `6770`, `6772`, `6773`, 
`6774`, `6776`, `6778`, `6781`, `6783`, `6784`, `6785`, `6788`, `6790`, `6793`, `6794`, `6796`, `6797`, `6801`, `6804`, `6807`, `6809`, `6812`, `6814`, `6816`, `6817`, `6819`, `6821`, `6822`, `6824`, `6825`, `6828`, `6831`, `6834`, `6835`, `6836`, `6838`, `6839`, `6841`, `6844`, `6846`, `6847`, `6848`, `6849`, `6851`, `6852`, `6854`, `6855`, `6856`, `6857`, `6859`, `6860`, `6861`, `6862`, `6864`, `6865`, `6866`, `6867`, `6868`, `6870`, `6873`, `6875`, `6876`, `6878`, `6882`, `6885`, `6888`, `6889`, `6890`, `6893`, `6895`, `6898`, `6901`, `6903`, `6905`, `6906`, `6909`, `6911`, `6912`, `6913`, `6914`, `6915`, `6916`, `6919`, `6920`, `6923`, `6924`, `6925`, `6926`, `6930`, `6932`, `6934`, `6937`, `6939`, `6940`, `6942`, `6944`, `6945`, `6947`, `6949`, `6952`, `6954`, `6957`, `6959`, `6960`, `6963`, `6966`, `6969`, `6970`, `6973`, `6974`, `6976`, `6978`, `6980`, `6983`, `6985`, `6986`, `6988`, `6989`, `6991`, `6993`, `6995`, `6997`, `7000`, `7002`, `7005`, `7007`, `7008`, `7010`, `7013`, `7016`, `7018`, `7019`, `7020`, `7022`, `7023`, `7024`, `7025`, `7028`, `7030`, `7031`, `7032`, `7033`, `7035`, `7036`, `7037`, `7038`, `7040`, `7041`, `7043`, `7044`, `7045`, `7048`, `7050`, `7051`, `7054`, `7056`, `7058`, `7059`, `7060`, `7063`, `7065`, `7066`, `7067`, `7068`, `7071`, `7074`, `7077`, `7078`, `7079`, `7080`, `7081`, `7083`, `7084`, `7086`, `7088`, `7091`, `7092`, `7094`, `7095`, `7097`, `7099`, `7100`, `7102`, `7104`, `7105`, `7107`, `7109`, `7112`, `7113`, `7115`, `7117`, `7119`, `7120`, `7123`, `7126`, `7128`, `7129`, `7131`, `7133`, `7136`, `7139`, `7142`, `7143`, `7144`, `7145`, `7148`, `7150`, `7151`, `7152`, `7154`, `7156`, `7158`, `7160`, `7163`, `7164`, `7167`, `7170`, `7172`, `7175`, `7177`, `7180`, `7181`, `7182`, `7185`, `7186`, `7189`, `7190`, `7191`, `7192`, `7193`, `7194`, `7196`, `7199`, `7201`, `7202`, `7203`, `7204`, `7207`, `7209`, `7211`, `7214`, `7215`, `7217`, `7218`, `7220`, `7222`, `7223`, `7225`, `7227`, `7228`, `7230`, `7232`, `7233`, `7234`, `7236`, `7237`, `7238`, `7239`, `7240`, `7242`, `7244`, `7246`, `7247`, `7249`, `7250`, `7252`, `7253`, `7255`, `7257`, `7259`, `7261`, `7262`, `7264`, `7267`, `7268`, `7270`, `7272`, `7274`, `7276`, `7278`, `7279`, `7281`, `7284`, `7285`, `7289`, `7291`, `7295`, `7296`, `7298`, `7299`, `7301`, `7305`, `7308`, `7310`, `7311`, `7313`, `7315`, `7316`, `7318`, `7319`, `7321`, `7323`, `7324`, `7325`, `7328`, `7330`, `7331`, `7333`, `7336`, `7337`, `7338`, `7339`, `7341`, `7343`, `7345`, `7346`, `7347`, `7349`, `7352`, `7353`, `7356`, `7359`, `7361`, `7362`, `7363`, `7364`, `7366`, `7368`, `7370`, `7373`, `7375`, `7377`, `7379`, `7381`, `7384`, `7386`, `7388`, `7390`, `7393`, `7395`, `7397`, `7398`, `7399`, `7401`, `7402`, `7405`, `7408`, `7409`, `7411`, `7412`, `7414`, `7415`, `7417`, `7420`, `7422`, `7423`, `7425`, `7426`, `7429`, `7431`, `7434`, `7437`, `7440`, `7442`, `7446`, `7448`, `7450`, `7452`, `7453`, `7455`, `7457`, `7459`, `7460`, `7464`, `7468`, `7470`, `7471`, `7473`, `7475`, `7476`, `7478`, `7479`, `7481`, `7482`, `7486`, `7487`, `7489`, `7490`, `7491`, `7493`, `7494`, `7495`, `7497`, `7499`, `7501`, `7502`, `7504`, `7505`, `7506`, `7508`, `7509`, `7512`, `7513`, `7514`, `7515`, `7517`, `7518`, `7521`, `7523`, `7525`, `7526`, `7529`, `7532`, `7533`, `7534`, `7537`, `7538`, `7540`, `7541`, `7542`, `7544`, `7547`, `7549`, `7550`, `7552`, `7553`, `7554`, `7556`, `7558`, `7560`, `7562`, `7563`, `7565`, `7566`, `7567`, `7568`, `7569`, `7570`, `7571`, `7572`, `7575`, `7577`, `7579`, `7580`, `7581`, `7583`, `7585`, `7586`, 
`7588`, `7592`, `7595`, `7596`, `4604`, `7598`, `7599`, `7601`, `7602`, `7604`, `7605`, `7608`, `7609`, `7613`, `7615`, `7617`, `7619`, `7620`, `7621`, `7623`, `7624`, `7626`, `7627`, `7630`, `7631`, `7632`, `7635`, `7637`, `7638`, `7639`, `7640`, `7643`, `7646`, `7648`, `7650`, `7651`, `7652`, `7653`, `7654`, `7655`, `7658`, `7660`, `7661`, `7662`, `7664`, `7665`, `7667`, `7668`, `7669`, `7672`, `7673`, `7676`, `7680`, `7682`, `7684`, `7685`, `7686`, `7689`, `7690`, `7692`, `7694`, `7695`, `7696`, `7698`, `7700`, `7701`, `7702`, `7704`, `7706`, `7708`, `7711`, `7713`, `7714`, `7715`, `7717`, `7718`, `7720`, `7722`, `7723`, `7725`, `7727`, `7730`, `7732`, `7735`, `7738`, `7741`, `7743`, `7746`, `7748`, `7751`, `7754`, `7757`, `7758`, `7760`, `7761`, `7763`, `7765`, `7769`, `7770`, `7772`, `7773`, `7774`, `7775`, `7777`, `7780`, `7781`, `7782`, `7785`, `7786`, `7787`, `7788`, `7789`, `7790`, `7791`, `7793`, `7794`, `7796`, `7797`, `7799`, `7801`, `7802`, `7804`, `7807`, `7809`, `7810`, `7813`, `7815`, `7817`, `7819`, `7821`, `7823`, `7826`, `7827`, `7830`, `7832`, `7833`, `7835`, `7837`, `7839`, `7841`, `7843`, `7844`, `7845`, `7849`, `7850`, `7852`, `7854`, `7856`, `7857`, `7858`, `7859`, `7862`, `7863`, `7866`, `7867`, `7869`, `7871`, `7873`, `7875`, `7876`, `7877`, `7878`, `7879`, `7882`, `7884`, `7886`, `7887`, `7888`, `7890`, `7891`, `7894`, `7896`, `7897`, `7899`, `7901`, `7902`, `7903`, `7904`, `7906`, `7907`, `7908`, `7910`, `7913`, `7916`, `7917`, `7921`, `7924`, `7925`, `7926`, `7928`, `7929`, `7930`, `7932`, `7934`, `7935`, `7937`, `7938`, `7939`, `7940`, `7942`, `7944`, `7947`, `7948`, `7949`, `7950`, `7952`, `7954`, `7955`, `7957`, `7959`, `7961`, `7962`, `7963`, `7965`, `7967`, `7969`, `7971`, `7972`, `7973`, `7975`, `7978`, `7980`, `7982`, `7984`, `7986`, `7988`, `7990`, `7991`, `7993`, `7996`, `7997`, `7998`, `7999`, `8002`, `8005`, `8008`, `8009`, `8011`, `8013`, `8015`, `8017`, `8018`, `8021`, `8025`, `8026`, `8027`, `8030`, `8031`, `8033`, `8035`, `8037`, `8038`, `8040`, `8043`, `8044`, `8045`, `8047`, `8049`, `8051`, `8053`, `8055`, `8057`, `8059`, `8062`, `8064`, `8065`, `8069`, `8072`, `8074`, `8075`, `8077`, `8079`, `8080`, `8081`, `8082`, `8084`, `8087`, `8089`, `8092`, `8095`, `8097`, `8098`, `8101`, `8104`, `8105`, `8107`, `8108`, `8109`, `8111`, `8113`, `8116`, `8118`, `8119`, `8123`, `8125`, `8127`, `8129`, `8130`, `8132`, `8133`, `8134`, `8135`, `8137`, `8138`, `8140`, `8142`, `8144`, `8145`, `8147`, `8149`, `8150`, `8153`, `8155`, `8157`, `8160`, `8162`, `8165`, `8167`, `8170`, `8171`, `8173`, `8174`, `8176`, `8178`, `8180`, `8182`, `8184`, `8185`, `8187`, `8188`, `8190`, `8192`, `8196`, `8199`, `8201`, `8202`, `8204`, `8206`, `8208`, `8209`, `8211`, `8213`, `8216`, `8218`, `8219`, `8222`, `8224`, `8225`, `8227`, `8228`, `8230`, `8231`, `8235`, `8237`, `8240`, `8241`, `8243`, `8244`, `8245`, `8248`, `8249`, `8251`, `8253`, `8254`, `8256`, `8257`, `8258`, `8259`, `8261`, `8262`, `8263`, `8264`, `8266`, `8267`, `8270`, `8273`, `8274`, `8277`, `8279`, `8280`, `8281`, `8283`, `8286`, `8287`, `8289`, `8290`, `8291`, `8292`, `8293`, `8294`, `8295`, `8298`, `8299`, `8300`, `8303`, `8304`, `8305`, `8307`, `8309`, `8310`, `8311`, `8314`, `8316`, `8318`, `8320`, `8322`, `8325`, `8327`, `8329`, `8330`, `8332`, `8334`, `8337`, `8339`, `8340`, `8341`, `8342`, `8345`, `8346`, `8347`, `8348`, `8350`, `8352`, `8353`, `8355`, `8357`, `8360`, `8361`, `8363`, `8366`, `8368`, `8370`, `8371`, `8372`, `8374`, `8376`, `8377`, `8378`, `8379`, `8382`, `8385`, `8386`, `8387`, `8388`, 
`8390`, `8391`, `8393`, `8394`, `8395`, `8396`, `8397`, `8398`, `8399`, `8401`, `8403`, `8404`, `8406`, `8409`, `8411`, `8412`, `8413`, `8414`, `8417`, `8418`, `8419`, `8420`, `8422`, `8423`, `8424`, `8426`, `8427`, `8429`, `8430`, `8432`, `8434`, `8437`, `8438`, `8439`, `8441`, `8443`, `8445`, `8446`, `8449`, `8451`, `8452`, `8453`, `8454`, `8455`, `8456`, `8457`, `8458`, `8459`, `8461`, `8462`, `8463`, `8466`, `8467`, `8469`, `8471`, `8472`, `8475`, `8477`, `8478`, `8479`, `8480`, `8481`, `8483`, `8487`, `8489`, `8490`, `8491`, `8494`, `8495`, `8498`, `8499`, `8501`, `8502`, `8505`, `8506`, `8508`, `8509`, `8510`, `8515`, `8517`, `8518`, `8520`, `8521`, `8522`, `8524`, `8525`, `8526`, `8527`, `8530`, `8533`, `8534`, `8536`, `8537`, `8538`, `8539`, `8540`, `8542`, `8544`, `8546`, `8547`, `8548`, `8551`, `8552`, `8554`, `8555`, `8557`, `8559`, `8561`, `8562`, `8563`, `8566`, `8568`, `8570`, `8573`, `8574`, `8575`, `8576`, `8577`, `8579`, `8582`, `8584`, `8585`, `8586`, `8587`, `8588`, `8589`, `8590`, `8591`, `8592`, `8593`, `8594`, `8596`, `8598`, `8601`, `8602`, `8603`, `8605`, `8607`, `8608`, `8610`, `8612`, `8613`, `8614`, `8617`, `8619`, `8621`, `8622`, `8623`, `8624`, `8626`, `8627`, `8629`, `8630`, `8632`, `8633`, `8635`, `8636`, `8639`, `8642`, `8645`, `8646`, `8647`, `8648`, `8651`, `8653`, `8654`, `8655`, `8656`, `8657`, `8658`, `8661`, `8662`, `8665`, `8667`, `8669`, `8671`, `8672`, `8673`, `8674`, `8676`, `8677`, `8678`, `8679`, `8681`, `8683`, `8684`, `8685`, `8687`, `8690`, `8691`, `8693`, `8696`, `8698`, `8699`, `8700`, `8701`, `8703`, `8704`, `8707`, `8709`, `8712`, `8714`, `8715`, `8717`, `8718`, `8720`, `8721`, `8724`, `8725`, `8726`, `8728`, `8729`, `8730`, `8732`, `8735`, `8737`, `8739`, `8741`, `8742`, `8743`, `8745`, `8746`, `8749`, `8751`, `8754`, `8755`, `8756`, `8757`, `8758`, `8761`, `8763`, `8764`, `8766`, `8767`, `8768`, `8769`, `8770`, `8772`, `8773`, `8774`, `8776`, `8777`, `8778`, `8779`, `8782`, `8784`, `8787`, `8790`, `8791`, `8794`, `8796`, `8797`, `8798`, `8799`, `8802`, `8804`, `8806`, `8808`, `8809`, `8814`, `8816`, `8818`, `8820`, `8821`, `8823`, `8824`, `8825`, `8827`, `8828`, `8829`, `8831`, `8832`, `8834`, `8835`, `8838`, `8840`, `8842`, `8843`, `8844`, `8845`, `8847`, `8849`, `8851`, `8854`, `8856`, `8857`, `8858`, `8862`, `8864`, `8865`, `8866`, `8868`, `8871`, `8874`, `8877`, `8878`, `8881`, `8882`, `8883`, `8884`, `8885`, `8888`, `8890`, `8892`, `8894`, `8896`, `8898`, `8901`, `8903`, `8904`, `8906`, `8907`, `8909`, `8910`, `8911`, `8912`, `8913`, `8914`, `8916`, `8918`, `8922`, `8923`, `8924`, `8926`, `8927`, `8928`, `8929`, `8931`, `8933`, `8935`, `8936`, `8939`, `8941`, `8945`, `8947`, `8949`, `8951`, `8953`, `8955`, `8959`, `8962`, `8963`, `8965`, `8967`, `8969`, `8972`, `8974`, `8976`, `8978`, `8980`, `8981`, `8982`, `8983`, `8984`, `8986`, `8988`, `8989`, `8991`, `8993`, `8995`, `8997`, `8999`, `9001`, `9002`, `9003`, `9004`, `9005`, `9007`, `9010`, `9013`, `9014`, `9015`, `9016`, `9017`, `9018`, `9021`, `9022`, `9023`, `9025`, `9026`, `9027`, `9029`, `9030`, `9032`, `9036`, `9039`, `9040`, `9041`, `9042`, `9045`, `9046`, `9048`, `9049`, `9051`, `9053`, `9055`, `9057`, `9059`, `9060`, `9061`, `9062`, `9063`, `9065`, `9066`, `9068`, `9070`, `9072`, `9073`, `9074`, `9075`, `9077`, `9078`, `9079`, `9081`, `9083`, `9084`, `9085`, `9086`, `9089`, `9092`, `9094`, `9095`, `9097`, `9098`, `9100`, `9101`, `9103`, `9107`, `9108`, `9110`, `9112`, `9113`, `9115`, `9119`, `9120`, `9121`, `9122`, `9124`, `9125`, `9127`, `9130`, `9131`, `9132`, `9133`, 
`9134`, `9135`, `9137`, `9138`, `9139`, `9143`, `9145`, `9146`, `9149`, `9151`, `9153`, `9155`, `9156`, `9157`, `9159`, `948`, `9161`, `9163`, `9166`, `9167`, `9169`, `9171`, `9172`, `9175`, `9176`, `9179`, `9181`, `9184`, `9185`, `9187`, `9189`, `9190`, `9192`, `9193`, `9196`, `9198`, `9202`, `9204`, `9205`, `9207`, `9209`, `9211`, `9212`, `9215`, `9217`, `9219`, `9221`, `9224`, `9226`, `9227`, `9229`, `9232`, `9234`, `9236`, `9237`, `9239`, `9240`, `9242`, `9244`, `9246`, `9247`, `9249`, `9252`, `9253`, `9254`, `9258`, `9260`, `9262`, `9264`, `9265`, `9267`, `545`, `9269`, `9270`, `9271`, `9273`, `9274`, `9275`, `9276`, `9278`, `9281`, `9283`, `9285`, `9286`, `9290`, `9292`, `9294`, `9296`, `9297`, `9300`, `9301`, `9303`, `9306`, `9308`, `9310`, `9311`, `9313`, `9316`, `9317`, `9320`, `9321`, `9323`, `9325`, `9327`, `9329`, `9332`, `9335`, `9337`, `9338`, `9340`, `9341`, `9342`, `9344`, `9346`, `9348`, `9349`, `9351`, `9352`, `9354`, `9357`, `9360`, `9362`, `9363`, `9365`, `9368`, `9370`, `9373`, `9375`, `9376`, `9378`, `9381`, `9383`, `9386`, `9388`, `9389`, `9391`, `9393`, `9394`, `9396`, `9398`, `9400`, `9403`, `9406`, `9408`, `9410`, `9412`, `9413`, `9414`, `9416`, `9418`, `9419`, `9420`, `9424`, `9425`, `9428`, `9429`, `9431`, `9432`, `9435`, `9436`, `9438`, `9439`, `9441`, `9443`, `9444`, `9446`, `9448`, `9450`, `9452`, `9455`, `9457`, `9459`, `9460`, `9462`, `9464`, `9467`, `9470`, `9471`, `9472`, `9474`, `9476`, `9478`, `9479`, `9480`, `9482`, `9483`, `9485`, `9486`, `9487`, `9489`, `9491`, `9493`, `9495`, `9497`, `9499`, `9504`, `9506`, `9508`, `9510`, `9511`, `9512`, `9515`, `9516`, `9519`, `9521`, `9522`, `9523`, `9524`, `9526`, `9527`, `9528`, `9531`, `9533`, `9534`, `9536`, `9537`, `9539`, `9541`, `9543`, `9546`, `9548`, `9551`, `9553`, `9554`, `9559`, `9562`, `9564`, `9565`, `9567`, `9569`, `9572`, `9573`, `9574`, `9576`, `9577`, `9578`, `9581`, `9583`, `9584`, `9585`, `9586`, `9587`, `9588`, `9590`, `9591`, `9592`, `9594`, `9595`, `9596`, `9597`, `9601`, `9602`, `9604`, `9607`, `9609`, `9611`, `9614`, `9616`, `9618`, `9621`, `9624`, `9625`, `9628`, `9630`, `9632`, `9633`, `9636`, `9637`, `9638`, `9639`, `9640`, `9641`, `9643`, `9645`, `9648`, `9650`, `9652`, `9653`, `9655`, `9656`, `9658`, `9660`, `9662`, `9663`, `9665`, `9666`, `9669`, `9670`, `9672`, `9674`, `9676`, `9678`, `9679`, `9683`, `9685`, `9687`, `9688`, `9689`, `9691`, `9694`, `9695`, `9697`, `9699`, `9700`, `9702`, `9703`, `9704`, `9706`, `9707`, `9709`, `9711`, `9713`, `9714`, `9715`, `9719`, `9720`, `9722`, `9725`, `9728`, `9732`, `9733`, `9735`, `9736`, `9738`, `9739`, `9742`, `9743`, `9745`, `9747`, `9748`, `9750`, `9751`, `9752`, `9754`, `9755`, `9757`, `9759`, `9761`, `9762`, `9763`, `9765`, `9766`, `9767`, `9769`, `9770`, `9772`, `9773`, `9774`, `9776`, `9778`, `9779`, `9781`, `9783`, `9786`, `9787`, `9789`, `9791`, `9794`, `9796`, `9797`, `9799`, `9800`, `9803`, `9805`, `9807`, `9809`, `9812`, `9814`, `9817`, `9819`, `9820`, `9821`, `9823`, `9825`, `9826`, `9829`, `9830`, `9832`, `9834`, `9835`, `9837`, `9838`, `9841`, `9843`, `9845`, `9846`, `9847`, `9848`, `9850`, `9851`, `9852`, `9853`, `9854`, `9856`, `9857`, `9859`, `9860`, `9861`, `9863`, `9865`, `9867`, `9869`, `9871`, `9872`, `9874`, `9875`, `9876`, `9878`, `9879`, `9880`, `9883`, `9884`, `9885`, `9888`, `9890`, `9892`, `9894`, `9896`, `9897`, `9899`, `9901`, `9903`, `9904`, `9905`, `9907`, `9908`, `9910`, `9912`, `9914`, `9917`, `9920`, `9921`, `9923`, `9926`, `9927`, `9929`, `9931`, `9932`, `9933`, `9935`, `9937`, `9940`, `9941`, `9943`, 
`9945`, `9946`, `9948`, `9949`, `9950`, `9952`, `9953`, `9955`, `9958`, `9959`, `9961`, `9963`, `9964`, `9965`, `9967`, `9970`, `9972`, `9973`, `9974`, `3616`, `9975`, `9978`, `9980`, `9981`, `9983`, `9984`, `9985`, `6121`, `9988`, `9989`, `9992`, `9993`, `9995`, `9998`, `10000`, `10001`, `10002`, `10003`, `10006`, `10009`, `10012`, `10013`, `10016`, `10018`, `10020`, `10022`, `10023`, `10025`, `10026`, `10028`, `10029`, `10030`, `10033`, `10034`, `10035`, `10036`, `10038`, `10039`, `10042`, `10044`, `10048`, `10049`, `10051`, `10052`, `10054`, `10056`, `10058`, `10061`, `10062`, `10065`, `10066`, `10068`, `10069`, `10071`, `10073`, `10075`, `10076`, `10081`, `10083`, `10085`, `10087`, `10089`, `10091`, `10093`, `10095`, `10096`, `10098`, `10100`, `10101`, `10103`, `10105`, `10106`, `10108`, `10110`, `10112`, `10114`, `10116`, `10118`, `10120`, `10121`, `10122`, `10123`, `10124`, `10125`, `10127`, `10129`, `10130`, `10132`, `10133`, `10134`, `10136`, `10138`, `10140`, `10142`, `10144`, `10145`, `10146`, `10148`, `10149`, `10152`, `10156`, `10157`, `10158`, `10161`, `10163`, `10164`, `10166`, `10168`, `10170`, `10171`, `10172`, `10175`, `10179`, `10180`, `10181`, `10182`, `10184`, `10187`, `10188`, `10189`, `10191`, `10192`, `10193`, `10194`, `10197`, `10198`, `10200`, `10202`, `10205`, `10206`, `10207`, `10209`, `10211`, `10213`, `10216`, `10218`, `10220`, `10222`, `10224`, `10226`, `10228`, `10232`, `10233`, `10235`, `10237`, `10240`, `10242`, `10243`, `10245`, `10246`, `10249`, `10250`, `10252`, `10254`, `10255`, `10257`, `10259`, `10261`, `10263`, `10266`, `10269`, `10270`, `10273`, `10274`, `10276`, `10278`, `10279`, `10281`, `10282`, `10284`, `10287`, `10288`, `10289`, `10292`, `10293`, `10295`, `10296`, `10299`, `10301`, `10304`, `10306`, `10308`, `10309`, `10312`, `10313`, `10315`, `10316`, `10318`, `10320`, `10321`, `10322`, `10325`, `10328`, `10329`, `10330`, `10332`, `10334`, `10335`, `10337`, `10340`, `10343`, `10345`, `10346`, `10349`, `10350`, `10352`, `10354`, `10356`, `10357`, `10358`, `10359`, `10361`, `10362`, `10363`, `10365`, `10366`, `10367`, `10369`, `10372`, `10374`, `10377`, `10378`, `10379`, `10381`, `10383`, `10384`, `10387`, `10388`, `10390`, `10391`, `10392`, `10393`, `10395`, `10398`, `10400`, `10401`, `10404`, `10405`, `10407`, `10409`, `10411`, `10413`, `10414`, `10415`, `10416`, `10417`, `10418`, `10419`, `10420`, `10422`, `10423`, `10424`, `10425`, `10427`, `10428`, `10429`, `10430`, `10432`, `10434`, `10436`, `10437`, `10438`, `10440`, `10441`, `10444`, `10446`, `10448`, `10449`, `10450`, `10451`, `10454`, `10455`, `10457`, `10458`, `10461`, `10462`, `10463`, `10464`, `10466`, `10468`, `10469`, `10471`, `10473`, `10474`, `10476`, `10478`, `10479`, `10482`, `10483`, `10485`, `10488`, `10489`, `10490`, `10493`, `10495`, `10497`, `10498`, `10499`, `10500`, `10502`, `10504`, `10507`, `10509`, `10510`, `10511`, `10513`, `10514`, `10515`, `10516`, `10517`, `10519`, `10521`, `10523`, `10525`, `10527`, `10528`, `10530`, `10531`, `10532`, `10534`, `10535`, `10536`, `10538`, `10539`, `10540`, `10541`, `10543`, `10545`, `10547`, `10550`, `10551`, `10554`, `10556`, `10559`, `10563`, `10565`, `10567`, `10569`, `10571`, `10572`, `10574`, `10578`, `10581`, `10583`, `10586`, `10588`, `10592`, `10593`, `10594`, `10595`, `10596`, `10597`, `10598`, `10600`, `10602`, `10603`, `10604`, `10605`, `10606`, `10608`, `10611`, `10613`, `10615`, `10616`, `10618`, `10620`, `10623`, `10626`, `10627`, `10630`, `10632`, `10635`, `10637`, `10639`, `10641`, `10643`, `10644`, `10645`, 
`10647`, `10649`, `10650`, `10653`, `10656`, `10658`, `10659`, `10661`, `10662`, `10663`, `10664`, `10665`, `10667`, `10669`, `10670`, `10672`, `10674`, `10675`, `10676`, `10678`, `10679`, `10681`, `10684`, `10686`, `10687`, `10690`, `10692`, `10693`, `10694`, `10697`, `10700`, `10702`, `10704`, `10706`, `10708`, `10710`, `10712`, `10713`, `10716`, `10717`, `10718`, `10719`, `10720`, `10721`, `10722`, `10725`, `10727`, `10729`, `10731`, `10733`, `10735`, `10736`, `10737`, `10738`, `10739`, `10740`, `10741`, `10744`, `10746`, `10748`, `10750`, `10753`, `10754`, `10756`, `10757`, `10758`, `10760`, `10763`, `10764`, `10765`, `10766`, `10767`, `10769`, `10770`, `10772`, `10774`, `10775`, `10776`, `10779`, `10783`, `10785`, `10788`, `10790`, `10791`, `10793`, `10796`, `10797`, `10798`, `10800`, `10804`, `10805`, `10806`, `10807`, `10808`, `10809`, `10810`, `10812`, `10814`, `10816`, `10817`, `10818`, `10819`, `10821`, `10823`, `10825`, `10827`, `10828`, `10829`, `10830`, `10832`, `10834`, `10835`, `10836`, `10838`, `10839`, `10841`, `10842`, `10844`, `10846`, `10850`, `10852`, `10857`, `10858`, `10859`, `10861`, `10862`, `10863`, `10864`, `10865`, `10868`, `10870`, `10873`, `10875`, `10878`, `10880`, `10884`, `10886`, `10887`, `10888`, `10889`, `10892`, `10894`, `10897`, `10898`, `10900`, `10901`, `10903`, `10905`, `10906`, `10908`, `10909`, `10911`, `10912`, `10915`, `10916`, `10917`, `10918`, `10919`, `10921`, `10922`, `10924`, `10925`, `10927`, `10928`, `10931`, `10932`, `10933`, `10934`, `10935`, `10938`, `10942`, `10943`, `10945`, `10947`, `10949`, `10950`, `10951`, `10953`, `10955`, `10956`, `10957`, `10958`, `10959`, `10961`, `10962`, `10963`, `10964`, `10965`, `10966`, `10968`, `10971`, `10972`, `10974`, `10975`, `10977`, `10978`, `10980`, `10981`, `10983`, `10984`, `10985`, `10986`, `10987`, `10989`, `10991`, `10992`, `10993`, `10995`, `10997`, `10999`, `11001`, `11002`, `11004`, `11007`, `11010`, `11012`, `11013`, `11014`, `11016`, `11019`, `11023`, `11025`, `11027`, `11028`, `11030`, `11032`, `11033`, `11034`, `11036`, `11037`, `11038`, `11040`, `11044`, `11046`, `11049`, `11050`, `11054`, `11055`, `11057`, `11059`, `11060`, `11062`, `11064`, `11065`, `11066`, `11068`, `11071`, `11074`, `11075`, `11078`, `11079`, `11080`, `11083`, `11084`, `11085`, `11089`, `11090`, `11093`, `11094`, `11096`, `11097`, `11098`, `11100`, `11102`, `11103`, `11105`, `11106`, `11109`, `11113`, `11116`, `11117`, `11119`, `11120`, `11122`, `11125`, `11128`, `11129`, `11131`, `11132`, `11133`, `11134`, `11136`, `11137`, `11138`, `11140`, `11142`, `11144`, `11147`, `11148`, `11150`, `11152`, `11155`, `11157`, `11159`, `11160`, `11162`, `11163`, `11164`, `11166`, `11167`, `11169`, `11171`, `11174`, `11176`, `11178`, `11179`, `11181`, `11184`, `11185`, `11186`, `11189`, `11190`, `11192`, `11194`, `11196`, `11198`, `11199`, `11202`, `11204`, `11206`, `11208`, `11211`, `11212`, `11214`, `11216`, `11220`, `11222`, `11223`, `11226`, `11228`, `11230`, `11231`, `11232`, `11233`, `11235`, `11237`, `11239`, `11240`, `11241`, `11242`, `11243`, `11244`, `11245`, `11247`, `11250`, `11252`, `11254`, `11256`, `11258`, `11260`, `11262`, `11263`, `11264`, `11265`, `11269`, `11271`, `11273`, `11277`, `11279`, `11282`, `11285`, `11286`, `11289`, `11290`, `11292`, `11294`, `11296`, `11298`, `11301`, `11303`, `11305`, `11308`, `11309`, `11311`, `11314`, `11315`, `11317`, `11319`, `11320`, `11321`, `11324`, `11325`, `11326`, `11328`, `11330`, `11331`, `11332`, `11333`, `11334`, `11336`, `11338`, `11341`, `11342`, `11344`, `11348`, 
`11349`, `11350`, `11352`, `11353`, `11355`, `11357`, `11358`, `11359`, `11360`, `11361`, `11362`, `11366`, `11367`, `11370`, `11371`, `11373`, `11375`, `11376`, `11378`, `11380`, `11382`, `11383`, `11384`, `11385`, `11386`, `11387`, `11388`, `11390`, `11393`, `11395`, `11396`, `11397`, `11399`, `11401`, `11403`, `11404`, `11405`, `11406`, `11407`, `11408`, `11410`, `11412`, `11413`, `11414`, `11415`, `11416`, `11417`, `11420`, `11421`, `11422`, `11423`, `11425`, `11427`, `11428`, `11430`, `11433`, `11435`, `11438`, `11440`, `11442`, `11443`, `11445`, `11446`, `11448`, `11450`, `11452`, `11454`, `11455`, `11456`, `11457`, `11458`, `11459`, `11461`, `11462`, `11463`, `11464`, `11468`, `11469`, `11470`, `11473`, `11474`, `11476`, `11479`, `11480`, `11482`, `11484`, `11486`, `11487`, `11489`, `11490`, `11492`, `11494`, `11495`, `11496`, `11497`, `11499`, `11500`, `11501`, `11504`, `11505`, `11508`, `11509`, `11511`, `11513`, `11515`, `11518`, `11519`, `11520`, `11522`, `11523`, `11524`, `11525`, `11526`, `11529`, `11530`, `11531`, `11533`, `11534`, `11536`, `11538`, `11540`, `11542`, `11544`, `11546`, `11547`, `11549`, `11551`, `11552`, `11553`, `11555`, `11557`, `11559`, `11560`, `11562`, `11565`, `11567`, `11568`, `11570`, `11572`, `11573`, `11576`, `11577`, `11579`, `11580`, `11581`, `11583`, `11584`, `11586`, `11587`, `11591`, `11593`, `11594`, `11597`, `11598`, `11601`, `11602`, `11603`, `11605`, `11606`, `11608`, `11610`, `11611`, `11612`, `11614`, `11615`, `11618`, `11619`, `11620`, `11621`, `11623`, `11624`, `11627`, `11628`, `11631`, `11632`, `11633`, `11634`, `11635`, `11636`, `11639`, `11640`, `11642`, `11643`, `11644`, `11646`, `11647`, `11649`, `11650`, `11652`, `11656`, `11657`, `11658`, `11660`, `11663`, `11665`, `11666`, `11668`, `11670`, `11671`, `11673`, `11674`, `11675`, `11676`, `11678`, `11679`, `11680`, `11681`, `11684`, `11686`, `11687`, `11690`, `11691`, `11692`, `11693`, `11694`, `11695`, `11696`, `11697`, `11698`, `11699`, `11700`, `11701`, `11702`, `11703`, `11704`, `11705`, `11706`, `11707`, `11708`, `11709`, `11710`, `11711`, `11712`, `11713`, `11716`, `11718`, `11719`, `11722`, `11723`, `11724`, `11725`, `11727`, `11728`, `11729`, `11731`, `11733`, `11735`, `11736`, `11737`, `11739`, `11741`, `11744`, `11745`, `11747`, `11749`, `11751`, `11752`, `11753`, `11755`, `11756`, `11757`, `11758`, `11759`, `11760`, `11762`, `11764`, `11766`, `11768`, `11770`, `11771`, `11773`, `11775`, `11776`, `11778`, `11780`, `11782`, `11785`, `11786`, `11788`, `11790`, `11793`, `11795`, `11796`, `11798`, `11799`, `11800`, `11802`, `11803`, `11806`, `11808`, `11809`, `11810`, `11812`, `11813`, `11814`, `11816`, `11819`, `11821`, `11823`, `11825`, `11826`, `11829`, `11830`, `11832`, `11833`, `11835`, `11837`, `11838`, `11841`, `11842`, `11843`, `11847`, `11850`, `11853`, `11855`, `11857`, `11858`, `11860`, `11864`, `11865`, `11867`, `11869`, `11870`, `11872`, `11873`, `11876`, `11877`, `11879`, `11882`, `11884`, `11885`, `11886`, `11888`, `11890`, `11894`, `11895`, `11896`, `11898`, `11899`, `11902`, `11903`, `11906`, `11907`, `11908`, `11909`, `11910`, `11912`, `11914`, `11916`, `11919`, `11920`, `11923`, `11926`, `11928`, `11929`, `11930`, `11931`, `11933`, `11935`, `11937`, `11939`, `11942`, `11944`, `11946`, `11948`, `11951`, `11953`, `11954`, `11955`, `11957`, `11958`, `11959`, `11961`, `11964`, `11965`, `1175`, `11966`, `11967`, `11970`, `11971`, `11973`, `11974`, `11975`, `11976`, `11977`, `11979`, `11981`, `11984`, `11985`, `11987`, `11990`, `11992`, `11993`, `11994`, `11997`, 
`12000`, `12002`, `12003`, `12006`, `12008`, `12010`, `12011`, `12013`, `12015`, `12016`, `12018`, `12019`, `12021`, `12022`, `12023`, `12024`, `12025`, `12026`, `12027`, `12028`, `12030`, `12031`, `12033`, `12034`, `12035`, `12036`, `12037`, `12038`, `12039`, `12040`, `12041`, `12042`, `12043`, `12044`, `12045`, `12046`, `12047`, `12048`, `12049`, `12050`, `12051`, `12053`, `12054`, `12055`, `12056`, `12057`, `12061`, `12063`, `12066`, `12067`, `12069`, `12071`, `12074`, `12076`, `12079`, `12080`, `12081`, `12082`, `12083`, `12085`, `12087`, `12088`, `12090`, `12091`, `12092`, `12093`, `12094`, `12095`, `12096`, `12100`, `12101`, `12104`, `12105`, `12108`, `12109`, `12110`, `12112`, `12113`, `12115`, `12117`, `12118`, `12119`, `12120`, `12122`, `12123`, `12125`, `12126`, `12127`, `12128`, `12129`, `12130`, `12131`, `12134`, `12135`, `12136`, `12138`, `12140`, `12141`, `12142`, `12143`, `12144`, `12145`, `12146`, `12147`, `12150`, `12151`, `12152`, `12153`, `12154`, `12158`, `12159`, `12160`, `12161`, `12163`, `12164`, `12165`, `12168`, `12172`, `12173`, `12174`, `12175`, `12176`, `12177`, `12178`, `12179`, `12180`, `12183`, `12184`, `12186`, `12187`, `12190`, `12192`, `12195`, `12196`, `12198`, `12199`, `12200`, `12201`, `12203`, `12204`, `12205`, `12206`, `12207`, `12209`, `12210`, `12211`, `12212`, `12213`, `12214`, `12216`, `12217`, `12219`, `12221`, `12222`, `12223`, `12224`, `12225`, `12226`, `12227`, `12228`, `12229`, `12230`, `12232`, `12233`, `12234`, `12236`, `12237`, `12239`, `12241`, `12244`, `12245`, `12246`, `12247`, `12248`, `12250`, `12251`, `12252`, `12253`, `12254`, `12255`, `12257`, `12258`, `12260`, `12261`, `12262`, `12263`, `12265`, `12266`, `12270`, `12271`, `12272`, `12273`, `12275`, `12277`, `12279`, `12280`, `12281`, `12282`, `12283`, `12287`, `12288`, `12290`, `12292`, `12295`, `12297`, `12298`, `12299`, `12300`, `12301`, `12302`, `12303`, `12304`, `12305`, `12307`, `12310`, `12312`, `12313`, `12314`, `12316`, `12318`, `12319`, `12320`, `12321`, `12322`, `12326`, `12327`, `12330`, `12332`, `12334`, `12335`, `12336`, `12337`, `12339`, `12341`, `12342`, `12344`, `12346`, `12347`, `12349`, `12351`, `12353`, `12355`, `12356`, `12357`, `12359`, `12361`, `12363`, `12364`, `12365`, `12366`, `12369`, `12370`, `12371`, `12374`, `12376`, `12377`, `12378`, `12381`, `12383`, `12384`, `12385`, `12387`, `12389`, `12390`, `12392`, `12393`, `12394`, `12395`, `12397`, `12398`, `12399`, `12400`, `12402`, `12403`, `12404`, `12405`, `12406`, `12407`, `12408`, `12410`, `12412`, `12413`, `12414`, `12415`, `12416`, `12418`, `12420`, `12421`, `12422`, `12424`, `12426`, `12427`, `12429`, `12430`, `12431`, `12432`, `12433`, `12434`, `12435`, `12437`, `12438`, `12439`, `12440`, `12441`, `12442`, `12444`, `12445`, `12446`, `12447`, `12448`, `12449`, `12450`, `12451`, `12453`, `12455`, `12457`, `12458`, `12459`, `12461`, `12463`, `12466`, `12467`, `12468`, `12469`, `12470`, `12472`, `12473`, `12475`, `12476`, `12477`, `12479`, `12480`, `12481`, `12485`, `12486`, `12487`, `12490`, `12492`, `12493`, `12494`, `12495`, `12496`, `12497`, `12500`, `12501`, `12503`, `12504`, `12505`, `12506`, `12508`, `12509`, `12511`, `12514`, `12516`, `12518`, `12520`, `12522`, `12523`, `12524`, `12525`, `12526`, `12527`, `12528`, `12531`, `12532`, `12533`, `12534`, `12536`, `12537`, `12538`, `12539`, `12541`, `12542`, `12543`, `12544`, `12545`, `12546`, `12547`, `12548`, `12550`, `12551`, `12552`, `12554`, `12555`, `12557`, `12558`, `12560`, `12561`, `12563`, `12564`, `12565`, `12567`, `12569`, `12570`, `12571`, 
`12572`, `12574`, `12575`, `12577`, `12580`, `12583`, `12585`, `12586`, `12588`, `12589`, `12591`, `12594`, `12596`, `12597`, `12598`, `12600`, `12601`, `12603`, `12604`, `12607`, `12608`, `12609`, `12611`, `12612`, `12614`, `12615`, `12618`, `12620`, `12622`, `12624`, `12625`, `12627`, `12628`, `12630`, `12632`, `12633`, `12634`, `12635`, `12637`, `12638`, `12640`, `12642`, `12643`, `12645`, `12648`, `12649`, `12651`, `12653`, `12654`, `12656`, `12657`, `12658`, `12661`, `12662`, `12663`, `12665`, `12667`, `12668`, `12669`, `12671`, `12672`, `12674`, `12677`, `12679`, `12680`, `12682`, `12683`, `12685`, `12687`, `12690`, `12691`, `12692`, `12694`, `12695`, `12697`, `12698`, `12699`, `12700`, `12701`, `12702`, `12704`, `12705`, `12706`, `12707`, `12708`, `12709`, `12712`, `12713`, `12715`, `12716`, `12718`, `12719`, `12720`, `12722`, `12725`, `12726`, `12728`, `12729`, `12730`, `12732`, `12733`, `12734`, `12735`, `12737`, `12738`, `12739`, `12740`, `12742`, `12743`, `12744`, `12745`, `12747`, `12749`, `12751`, `12752`, `12754`, `12756`, `12757`, `12758`, `12760`, `12761`, `12762`, `12763`, `12764`, `12766`, `12768`, `12769`, `12770`, `12772`, `12773`, `12775`, `12777`, `12780`, `12781`, `12783`, `12784`, `12786`, `12789`, `12790`, `12792`, `12793`, `12794`, `12795`, `12796`, `12797`, `12798`, `12800`, `12802`, `12803`, `12805`, `12806`, `12808`, `12809`, `12810`, `12812`, `12813`, `12814`, `12815`, `12816`, `12817`, `12818`, `12820`, `12822`, `12826`, `12829`, `12831`, `12832`, `12833`, `12835`, `12836`, `12838`, `12839`, `12840`, `12841`, `12842`, `12844`, `12847`, `12848`, `12849`, `12850`, `12851`, `12852`, `12853`, `12854`, `12855`, `12856`, `12857`, `12859`, `12860`, `12861`, `12863`, `12864`, `12865`, `12868`, `12869`, `12870`, `12871`, `12872`, `12874`, `12875`, `12876`, `12878`, `12879`, `12881`, `12882`, `12884`, `12885`, `12886`, `12887`, `12888`, `12889`, `12890`, `12893`, `12894`, `12895`, `12896`, `12897`, `12899`, `12901`, `12902`, `12904`, `12906`, `12908`, `12910`, `12911`, `12912`, `12913`, `12914`, `12915`, `12916`, `12917`, `12918`, `12920`, `12921`, `12922`, `12924`, `12926`, `12928`, `12929`, `12932`, `12934`, `12936`, `12937`, `12939`, `12941`, `12946`, `12948`, `12949`, `12952`, `12955`, `12957`, `12958`, `12959`, `12960`, `12962`, `12963`, `12965`, `12969`, `12971`, `12972`, `12975`, `12976`, `12978`, `12980`, `12981`, `12983`, `12985`, `12986`, `12988`, `12990`, `12992`, `12996`, `12999`, `13001`, `13002`, `13003`, `13006`, `13008`, `13009`, `13010`, `13011`, `13012`, `13014`, `13015`, `13017`, `13018`, `13019`, `13020`, `13021`, `13022`, `13023`, `13024`, `13025`, `13027`, `13028`, `13029`, `13031`, `13032`, `13033`, `13034`, `13035`, `13037`, `13038`, `13039`, `13040`, `13041`, `13042`, `13043`, `13044`, `13046`, `13047`, `13048`, `13050`, `13052`, `13053`, `13054`, `13055`, `13057`, `13059`, `13060`, `13061`, `13062`, `13064`, `13067`, `13068`, `13069`, `13071`, `13072`, `13073`, `13075`, `13076`, `13077`, `13079`, `13080`, `13082`, `13083`, `13084`, `13085`, `13088`, `13091`, `13092`, `13094`, `13095`, `13096`, `13097`, `13098`, `13100`, `13101`, `13102`, `13103`, `13104`, `13105`, `13106`, `13107`, `13108`, `13110`, `13111`, `13113`, `13115`, `13116`, `13118`, `13119`, `13120`, `13121`, `13123`, `13124`, `13126`, `13127`, `13128`, `13129`, `13130`, `13131`, `13132`, `13134`, `13135`, `13136`, `13137`, `13138`, `13139`, `13141`, `13142`, `13143`, `13144`, `13146`, `13147`, `13148`, `13150`, `13151`, `13153`, `13154`, `13155`, `13157`, `13158`, `13159`, `13160`, 
`13161`, `13162`, `13164`, `13167`, `13168`, `13169`, `13172`, `13173`, `13174`, `13176`, `13177`, `13179`, `13181`, `13183`, `13184`, `13185`, `13187`, `13189`, `13190`, `13192`, `13194`, `13195`, `13198`, `13199`, `13200`, `13202`, `13203`, `13205`, `13207`, `13209`, `13210`, `13211`, `13212`, `13214`, `13215`, `13216`, `13218`, `13219`, `13220`, `13222`, `13224`, `13226`, `13230`, `13232`, `13233`, `13234`, `13235`, `13238`, `13241`, `13242`, `13245`, `13247`, `13248`, `13249`, `13250`, `13252`, `13254`, `13257`, `13259`, `13260`, `13261`, `13264`, `13265`, `13267`, `13269`, `13270`, `13273`, `13276`, `13279`, `13280`, `13281`, `13284`, `13286`, `13287`, `13290`, `13291`, `13292`, `13294`, `13295`, `13296`, `13297`, `13299`, `13300`, `13303`, `13304`, `13305`, `13306`, `13308`, `13311`, `13312`, `13313`, `13314`, `13315`, `13318`, `13320`, `13321`, `13322`, `13325`, `13326`, `13329`, `13331`, `13332`, `13334`, `13335`, `13337`, `13338`, `13339`, `13340`, `13342`, `13344`, `13347`, `13348`, `13349`, `13350`, `13351`, `13352`, `13353`, `13355`, `13357`, `13360`, `13363`, `13364`, `13365`, `13366`, `13368`, `13369`, `13372`, `13375`, `13376`, `13377`, `13379`, `13380`, `13381`, `13383`, `13384`, `13385`, `13386`, `13388`, `13390`, `13392`, `13394`, `13395`, `13396`, `13399`, `13400`, `13401`, `13403`, `13404`, `13405`, `13406`, `13407`, `13410`, `13412`, `13414`, `13415`, `13417`, `13420`, `13421`, `13422`, `13424`, `13426`, `13428`, `13430`, `13431`, `13433`, `13434`, `13436`, `13437`, `13440`, `13442`, `13444`, `13446`, `13448`, `13449`, `13451`, `13453`, `13454`, `13456`, `13458`, `13459`, `13461`, `13462`, `13464`, `13466`, `13467`, `13468`, `13469`, `13470`, `13471`, `13473`, `13478`, `13480`, `13481`, `13482`, `13483`, `13484`, `13485`, `13486`, `13488`, `13490`, `13491`, `13492`, `13496`, `13497`, `13499`, `13500`, `13503`, `13506`, `13507`, `13509`, `13510`, `13511`, `13513`, `13516`, `13517`, `13519`, `13520`, `13524`, `13527`, `13530`, `13531`, `13532`, `13535`, `13537`, `13540`, `13543`, `13545`, `13547`, `13550`, `13551`, `13552`, `13554`, `13555`, `13557`, `13559`, `13560`, `13562`, `13565`, `13566`, `13567`, `13568`, `13569`, `13571`, `13574`, `13576`, `13577`, `13578`, `13580`, `13581`, `13583`, `13584`, `13585`, `13586`, `13587`, `13588`, `13590`, `13591`, `13594`, `13595`, `13598`, `13600`, `13604`, `13606`, `13608`, `13609`, `13612`, `13613`, `13615`, `13616`, `13617`, `13619`, `13621`, `13622`, `13623`, `13624`, `13625`, `13627`, `13630`, `13632`, `13633`, `13634`, `13637`, `13638`, `13639`, `13641`, `13642`, `13643`, `13644`, `13645`, `13646`, `13647`, `13649`, `13651`, `13652`, `13653`, `13654`, `13656`, `13658`, `13660`, `13661`, `13663`, `13665`, `13667`, `13668`, `13669`, `13670`, `13673`, `13674`, `13678`, `13680`, `13682`, `13683`, `13684`, `13686`, `13687`, `13688`, `13689`, `13691`, `13693`, `13694`, `13697`, `13699`, `13700`, `13702`, `13705`, `13706`, `13708`, `13709`, `13710`, `13711`, `13712`, `13713`, `13714`, `13716`, `13719`, `13720`, `13721`, `13722`, `13724`, `13725`, `13726`, `13729`, `13731`, `13733`, `13734`, `13735`, `13737`, `13738`, `13739`, `13740`, `13741`, `13742`, `13743`, `13744`, `13745`, `13746`, `13747`, `13748`, `13749`, `13750`, `13751`, `13753`, `13754`, `13757`, `13759`, `13761`, `13763`, `13765`, `13767`, `13770`, `13771`, `13773`, `13775`, `13776`, `13777`, `13778`, `13779`, `13782`, `13783`, `13785`, `13787`, `13788`, `13791`, `13792`, `13794`, `13796`, `13797`, `13799`, `13800`, `13803`, `13805`, `13807`, `13808`, `13810`, `13813`, 
`13814`, `13816`, `13819`, `13822`, `13823`, `13824`, `13826`, `13827`, `13829`, `13831`, `13833`, `13834`, `13836`, `13838`, `13840`, `13841`, `13844`, `13845`, `13846`, `13847`, `13848`, `13849`, `13850`, `13852`, `13854`, `13855`, `13857`, `13858`, `13859`, `13860`, `13862`, `13865`, `13867`, `13869`, `13871`, `13873`, `13874`, `13876`, `13878`, `13879`, `13882`, `13885`, `13886`, `13888`, `13890`, `13893`, `13894`, `13895`, `13896`, `13897`, `13899`, `13901`, `13903`, `13904`, `13905`, `13906`, `13907`, `13909`, `13911`, `13912`, `13913`, `13914`, `13915`, `13916`, `13918`, `13919`, `13921`, `13922`, `13923`, `13924`, `13925`, `13928`, `13930`, `13932`, `13934`, `13937`, `13939`, `13940`, `13943`, `13944`, `13947`, `13949`, `13950`, `13952`, `13954`, `13956`, `13958`, `13959`, `13961`, `13963`, `13966`, `13968`, `13971`, `13972`, `13973`, `13974`, `13976`, `13978`, `13979`, `13980`, `13982`, `13983`, `13984`, `13986`, `13988`, `13989`, `13991`, `13992`, `13994`, `13995`, `13997`, `13998`, `13999`, `14001`, `14004`, `14006`, `14007`, `14008`, `14009`, `14010`, `14011`, `14014`, `14016`, `14018`, `14019`, `14023`, `14024`, `14025`, `14026`, `14027`, `14028`, `14029`, `14031`, `14032`, `14033`, `14034`, `14036`, `14037`, `14038`, `14041`, `14043`, `14044`, `14048`, `14051`, `14052`, `14054`, `14056`, `14059`, `14062`, `14063`, `14064`, `14066`, `14067`, `14068`, `14070`, `14071`, `14072`, `14073`, `14074`, `14076`, `14078`, `14080`, `14081`, `14082`, `14083`, `14085`, `14086`, `14087`, `14088`, `14089`, `14091`, `14093`, `14094`, `14095`, `14096`, `14098`, `14100`, `14102`, `14103`, `14104`, `14106`, `14107`, `14108`, `14110`, `14112`, `14113`, `14115`, `14118`, `14119`, `14120`, `14121`, `14124`, `14126`, `14127`, `14129`, `14132`, `14134`, `14136`, `14137`, `14139`, `14140`, `14142`, `14144`, `14147`, `14148`, `14149`, `14150`, `14152`, `14153`, `14155`, `14158`, `14160`, `14161`, `14162`, `14163`, `14164`, `14165`, `14166`, `14167`, `14168`, `14169`, `14170`, `14172`, `14174`, `14175`, `14176`, `14177`, `14178`, `14180`, `14181`, `14182`, `14183`, `14184`, `14185`, `14186`, `14189`, `14191`, `14193`, `14194`, `14197`, `14200`, `14201`, `14204`, `14206`, `14207`, `14208`, `14210`, `14211`, `14212`, `14214`, `14215`, `14216`, `14217`, `14218`, `14219`, `14220`, `14221`, `14222`, `14225`, `14228`, `14229`, `14232`, `14233`, `14235`, `14237`, `14238`, `14240`, `14241`, `14242`, `14244`, `14246`, `14248`, `14249`, `14251`, `14252`, `14253`, `14254`, `14258`, `14259`, `14260`, `14261`, `14263`, `14265`, `14266`, `14267`, `14270`, `14271`, `14272`, `14273`, `14274`, `14277`, `14278`, `14279`, `14280`, `14283`, `14284`, `14286`, `14288`, `14289`, `14290`, `14293`, `14294`, `14295`, `14297`, `14298`, `14299`, `14300`, `14301`, `14302`, `14305`, `14307`, `14308`, `14310`, `14311`, `14312`, `14313`, `14314`, `14316`, `14317`, `14319`, `14321`, `14322`, `14324`, `14325`, `14326`, `14328`, `14330`, `14332`, `14333`, `14334`, `14335`, `14336`, `14337`, `14338`, `14339`, `14341`, `14343`, `14344`, `14345`, `14346`, `14348`, `14349`, `14350`, `14351`, `14353`, `14354`, `14356`, `14358`, `14359`, `14361`, `14362`, `14363`, `14364`, `14365`, `14367`, `14368`, `14369`, `14371`, `14375`, `14376`, `14377`, `14380`, `14381`, `14382`, `14383`, `14385`, `14386`, `14388`, `14389`, `14390`, `14391`, `14392`, `14393`, `14396`, `14397`, `14399`, `14401`, `14403`, `14405`, `14406`, `14408`, `14409`, `14410`, `14411`, `14412`, `14414`, `14415`, `14416`, `14417`, `14418`, `14419`, `14420`, `14421`, `14422`, `14424`, 
`14428`, `14429`, `14431`, `14432`, `14434`, `14436`, `14438`, `14440`, `14441`, `14442`, `14443`, `14444`, `14445`, `14446`, `14447`, `14450`, `14451`, `14452`, `14453`, `14455`, `14456`, `14457`, `14460`, `14461`, `14462`, `14463`, `14465`, `14466`, `14468`, `14469`, `14472`, `14474`, `14475`, `14476`, `14478`, `14479`, `14480`, `14481`, `14482`, `14483`, `14486`, `14490`, `14491`, `14492`, `14494`, `14496`, `14497`, `14498`, `14499`, `14501`, `14502`, `14504`, `14505`, `14506`, `14508`, `14511`, `14512`, `14513`, `14516`, `14517`, `14518`, `14519`, `14520`, `14522`, `14523`, `14524`, `14526`, `14527`, `14528`, `14529`, `14530`, `14531`, `14532`, `14533`, `14534`, `14535`, `14537`, `14539`, `14540`, `14541`, `14542`, `14543`, `14545`, `14546`, `14548`, `14550`, `14551`, `14552`, `14554`, `14555`, `14556`, `14558`, `14559`, `14560`, `14561`, `14563`, `14564`, `14565`, `14567`, `14568`, `14570`, `14571`, `14572`, `14573`, `14575`, `14576`, `14577`, `14578`, `14579`, `14580`, `14581`, `14582`, `14583`, `14585`, `14586`, `14587`, `14590`, `14591`, `14592`, `14594`, `14595`, `14596`, `14597`, `14598`, `14599`, `14601`, `14602`, `14603`, `14605`, `14606`, `14608`, `14609`, `14612`, `14613`, `14614`, `14617`, `14618`, `14620`, `14621`, `14622`, `14624`, `14625`, `14626`, `14628`, `14630`, `14631`, `14633`, `14634`, `14635`, `14637`, `14638`, `14639`, `14640`, `14641`, `14642`, `14643`, `14644`, `14645`, `14647`, `14648`, `14649`, `14652`, `14655`, `14656`, `14658`, `14659`, `14660`, `14661`, `14662`, `14663`, `14664`, `14665`, `14667`, `14668`, `14672`, `14675`, `14678`, `14679`, `14680`, `14681`, `14682`, `14685`, `14686`, `14687`, `14688`, `14689`, `14691`, `14692`, `14694`, `14695`, `14696`, `14697`, `14698`, `14701`, `14702`, `14703`, `14704`, `14705`, `14707`, `14708`, `14711`, `14713`, `14715`, `14716`, `14717`, `14718`, `14720`, `14722`, `14723`, `14725`, `14726`, `14728`, `14729`, `14730`, `14731`, `14732`, `14734`, `14735`, `14736`, `14737`, `14738`, `14740`, `14743`, `14746`, `14747`, `14749`, `14751`, `14752`, `14753`, `14754`, `14756`, `14757`, `14759`, `14760`, `14763`, `14764`, `14766`, `14769`, `14770`, `14771`, `14772`, `14773`, `14774`, `14775`, `14778`, `14779`, `14780`, `14781`, `14784`, `14785`, `14786`, `14787`, `14788`, `14789`, `14791`, `14792`, `14794`, `14796`, `14797`, `14798`, `14800`, `14801`, `14804`, `14805`, `14806`, `14808`, `14812`, `14814`, `14815`, `14816`, `14818`, `14819`, `14821`, `14823`, `14824`, `14826`, `14827`, `14828`, `14830`, `14831`, `14832`, `14834`, `14835`, `14836`, `14837`, `14839`, `14840`, `14841`, `14843`, `14844`, `14845`, `14846`, `14847`, `14848`, `14849`, `14850`, `14851`, `14852`, `14854`, `14857`, `14858`, `14859`, `14861`, `14862`, `14863`, `14865`, `14866`, `14867`, `14868`, `14869`, `14870`, `14871`, `14873`, `14874`, `14875`, `14876`, `14877`, `14879`, `14881`, `14883`, `14884`, `14887`, `14889`, `14891`, `14893`, `14895`, `14896`, `14898`, `14900`, `14902`, `14904`, `14905`, `14906`, `14908`, `14909`, `14910`, `14912`, `14913`, `14914`, `14915`, `14916`, `14918`, `14919`, `14920`, `14921`, `14922`, `14924`, `14926`, `14928`, `14930`, `14931`, `14932`, `14933`, `14934`, `14935`, `14937`, `14938`, `14939`, `14941`, `14942`, `14943`, `14945`, `14946`, `14948`, `14949`, `14951`, `14952`, `14954`, `14956`, `14957`, `14958`, `14960`, `14961`, `14962`, `14963`, `14964`, `14965`, `14966`, `14967`, `14968`, `14969`, `14970`, `14971`, `14974`, `14976`, `14979`, `14980`, `14981`, `14982`, `14984`, `14985`, `14986`, `14987`, `14988`, `14990`, 
`14992`, `14993`, `14995`, `14997`, `15000`, `15001`, `15002`, `15003`, `15004`, `15005`, `15006`, `15007`, `15009`, `15010`, `15012`, `15013`, `15014`, `15015`, `15017`, `15018`, `15020`, `15021`, `15022`, `15023`, `15024`, `15025`, `15026`, `15027`, `15028`, `15029`, `15030`, `15031`, `15032`, `15033`, `15034`, `15035`, `15036`, `15037`, `15040`, `15041`, `15043`, `15044`, `15046`, `15048`, `15049`, `15050`, `15051`, `15052`, `15053`, `15054`, `15055`, `15058`, `15061`, `15065`, `15066`, `15067`, `15068`, `15069`, `15070`, `15072`, `15075`, `15076`, `15077`, `15078`, `15079`, `15081`, `15084`, `15086`, `15089`, `15091`, `15092`, `15094`, `15095`, `15096`, `15098`, `15099`, `15102`, `15103`, `15105`, `15106`, `15107`, `15108`, `15109`, `15110`, `15111`, `15113`, `15114`, `15116`, `15117`, `15118`, `15120`, `15121`, `15122`, `15123`, `15124`, `15126`, `15128`, `15129`, `15130`, `15133`, `15134`, `15135`, `15136`, `15137`, `15138`, `15139`, `15140`, `15142`, `15144`, `15145`, `15147`, `15148`, `15149`, `15152`, `15153`, `15155`, `15156`, `15158`, `15159`, `15160`, `15161`, `15164`, `15165`, `15166`, `15168`, `15169`, `15171`, `15172`, `15173`, `15174`, `15175`, `15176`, `15177`, `15178`, `15179`, `15180`, `15181`, `15183`, `15184`, `15185`, `15186`, `15188`, `15191`, `15192`, `15194`, `15196`, `15198`, `15199`, `15201`, `15202`, `15203`, `15204`, `15205`, `15207`, `15208`, `15209`, `15210`, `15213`, `15214`, `15216`, `15217`, `15218`, `15219`, `15221`, `15222`, `15224`, `15225`, `15227`, `15228`, `15230`, `15231`, `15232`, `15233`, `15236`, `15237`, `15238`, `15240`, `15242`, `15244`, `15246`, `15247`, `15248`, `15249`, `15252`, `15253`, `15254`, `15255`, `15257`, `15258`, `15259`, `15260`, `15261`, `15262`, `15263`, `15265`, `15266`, `15267`, `15269`, `15271`, `15273`, `15274`, `15275`, `15277`, `15280`, `15281`, `15282`, `15283`, `15285`, `15288`, `15291`, `15293`, `15294`, `15295`, `15296`, `15297`, `15298`, `15299`, `15301`, `15302`, `15304`, `15305`, `15307`, `15308`, `15309`, `15310`, `15313`, `15315`, `15317`, `15318`, `15319`, `15320`, `15322`, `15324`, `15325`, `15326`, `15327`, `15328`, `15331`, `15332`, `15333`, `15335`, `15336`, `15337`, `15338`, `15339`, `15340`, `15341`, `15342`, `15343`, `15345`, `15346`, `15347`, `15350`, `15351`, `15352`, `15354`, `15355`, `15356`, `15359`, `15361`, `15362`, `15363`, `15365`, `15366`, `15368`, `15369`, `15370`, `15372`, `15373`, `15374`, `15375`, `15378`, `15380`, `15381`, `15382`, `15383`, `15384`, `15385`, `15387`, `15391`, `15393`, `15394`, `15395`, `15397`, `15398`, `15399`, `15400`, `15403`, `15405`, `15406`, `15408`, `15409`, `15410`, `15412`, `15413`, `15415`, `15418`, `15419`, `15421`, `15423`, `15424`, `15425`, `15426`, `15428`, `15430`, `15432`, `15434`, `15438`, `15439`, `15440`, `15441`, `15442`, `15443`, `15445`, `15447`, `15451`, `15452`, `15453`, `15454`, `15455`, `15456`, `15457`, `15460`, `15461`, `15462`, `15463`, `15465`, `15466`, `15467`, `15468`, `15469`, `15472`, `15473`, `15475`, `15476`, `15477`, `15481`, `15484`, `15485`, `15487`, `15488`, `15489`, `15491`, `15492`, `15493`, `15495`, `15496`, `15498`, `15499`, `15501`, `15503`, `15504`, `15505`, `15507`, `15508`, `15510`, `15511`, `15512`, `15513`, `15514`, `15515`, `15516`, `15517`, `15518`, `15519`, `15520`, `15523`, `15524`, `15525`, `15526`, `15527`, `15528`, `15529`, `15530`, `15531`, `15532`, `15533`, `15535`, `15536`, `15537`, `15538`, `15539`, `15540`, `15541`, `15542`, `15543`, `15544`, `15545`, `15547`, `15548`, `15549`, `15550`, `15551`, `15554`, `15555`, 
`15556`, `15558`, `15559`, `15560`, `15562`, `15563`, `15564`, `15566`, `15567`, `15568`, `15569`, `15570`, `15571`, `15572`, `15573`, `15574`, `15575`, `15576`, `15578`, `15579`, `15581`, `15582`, `15584`, `15585`, `15586`, `15587`, `15588`, `15591`, `15592`, `15593`, `15594`, `15595`, `15596`, `15598`, `15602`, `15603`, `15605`, `15606`, `15607`, `15608`, `15609`, `15610`, `15612`, `15613`, `15614`, `15615`, `15616`, `15618`, `15619`, `15620`, `15622`, `15623`, `15625`, `15626`, `15629`, `15630`, `15632`, `15634`, `15635`, `15636`, `15638`, `15639`, `15640`, `15641`, `15642`, `15644`, `15645`, `15646`, `15648`, `15649`, `15650`, `15653`, `15654`, `15655`, `15656`, `15657`, `15658`, `15659`, `15660`, `15661`, `15663`, `15664`, `15665`, `15666`, `15667`, `15668`, `15669`, `15670`, `15671`, `15672`, `15673`, `15674`, `15675`, `15676`, `15677`, `15678`, `15679`, `15680`, `15682`, `15684`, `15687`, `15689`, `15691`, `15692`, `15693`, `15694`, `15695`, `15698`, `15699`, `15700`, `15703`, `15704`, `15707`, `15710`, `15712`, `15713`, `15714`, `15715`, `15717`, `15718`, `15719`, `15720`, `15721`, `15723`, `15724`, `15725`, `15726`, `15727`, `15729`, `15730`, `15732`, `15733`, `15734`, `15736`, `15738`, `15740`, `15741`, `15744`, `15745`, `15746`, `15748`, `15750`, `15751`, `15752`, `15754`, `15755`, `15756`, `15757`, `15758`, `15759`, `15761`, `15763`, `15765`, `15767`, `15770`, `15771`, `15772`, `15773`, `15774`, `15775`, `15776`, `15777`, `15780`, `15781`, `15783`, `15785`, `15786`, `15787`, `15788`, `15791`, `15792`, `15793`, `15794`, `15795`, `15798`, `15799`, `15800`, `15801`, `15804`, `15805`, `15806`, `15807`, `15808`, `15809`, `15810`, `15811`, `15812`, `15813`, `15814`, `15815`, `15816`, `15818`, `15819`, `15820`, `15821`, `15822`, `15823`, `15824`, `15827`, `15828`, `15829`, `15830`, `15831`, `15832`, `15833`, `15836`, `15837`, `15839`, `15841`, `15844`, `15845`, `15846`, `15850`, `15851`, `15852`, `15853`, `15854`, `15855`, `15856`, `15857`, `15858`, `15859`, `15861`, `15862`, `15863`, `15864`, `15865`, `15866`, `15868`, `15870`, `15871`, `15873`, `15874`, `15875`, `15876`, `15878`, `15880`, `15882`, `15885`, `15887`, `15888`, `15889`, `15891`, `15892`, `15893`, `15894`, `15895`, `15897`, `15899`, `15901`, `15903`, `15906`, `15909`, `15910`, `15911`, `15913`, `15916`, `15917`, `15918`, `15919`, `15920`, `15921`, `15923`, `15925`, `15926`, `15930`, `15932`, `15933`, `15934`, `15937`, `15939`, `15942`, `15944`, `15946`, `15947`, `15948`, `15949`, `15951`, `15952`, `15954`, `15955`, `15956`, `15957`, `15958`, `15960`, `15962`, `15964`, `15965`, `15966`, `15968`, `15970`, `15971`, `15972`, `15973`, `15974`, `15975`, `15977`, `15978`, `15980`, `15981`, `15982`, `15983`, `15984`, `15985`, `15986`, `15987`, `15988`, `15989`, `15990`, `15991`, `15992`, `15995`, `15996`, `15998`, `15999`, `16000`, `16001`, `16002`, `16003`, `16004`, `16005`, `16006`, `16008`, `16009`, `16011`, `16013`, `16014`, `16015`, `16017`, `16018`, `16019`, `16020`, `16021`, `16022`, `16023`, `16025`, `16026`, `16027`, `16029`, `16030`, `16031`, `16032`, `16033`, `16034`, `16037`, `16038`, `16041`, `16042`, `16043`, `16044`, `16045`, `16047`, `16050`, `16051`, `16052`, `16053`, `16056`, `16057`, `16060`, `16061`, `16063`, `16064`, `16066`, `16067`, `16070`, `16071`, `16073`, `16075`, `16077`, `16078`, `16079`, `16081`, `16082`, `16083`, `16084`, `16085`, `16086`, `16088`, `16089`, `16091`, `16092`, `16094`, `16095`, `16096`, `16098`, `16100`, `16102`, `16104`, `16106`, `16110`, `16112`, `16113`, `16114`, `16115`, `16116`, 
`16117`, `16119`, `16120`, `16121`, `16123`, `16127`, `16128`, `16130`, `16131`, `16132`, `16133`, `16134`, `16136`, `16137`, `16139`, `16140`, `16141`, `16142`, `16143`, `16144`, `16145`, `16146`, `16147`, `16149`, `16151`, `16153`, `16155`, `16157`, `16158`, `16159`, `16160`, `16161`, `16163`, `16165`, `16166`, `16167`, `16168`, `16169`, `16170`, `16171`, `16173`, `16174`, `16176`, `16178`, `16181`, `16183`, `16184`, `16185`, `16187`, `16188`, `16189`, `16190`, `16192`, `16193`, `16195`, `16196`, `16197`, `16200`, `16201`, `16202`, `16203`, `16205`, `16208`, `16209`, `16210`, `16211`, `16212`, `16213`, `16215`, `16216`, `16218`, `16219`, `16220`, `16221`, `16224`, `16225`, `16226`, `16227`, `16229`, `16230`, `16234`, `16235`, `16236`, `16237`, `16238`, `16239`, `16240`, `16242`, `16244`, `16245`, `16247`, `16249`, `16250`, `16251`, `16252`, `16254`, `16257`, `16258`, `16259`, `16260`, `16261`, `16262`, `16264`, `16265`, `16267`, `16268`, `16270`, `16271`, `16272`, `16274`, `16277`, `16278`, `16279`, `16280`, `16283`, `16286`, `16290`, `16291`, `16292`, `16294`, `16295`, `16296`, `16298`, `16299`, `16301`, `16302`, `16303`, `16305`, `16306`, `16307`, `16308`, `16309`, `16310`, `16312`, `16313`, `16314`, `16315`, `16316`, `16317`, `16319`, `16320`, `16321`, `16322`, `16328`, `16329`, `16331`, `16333`, `16334`, `16336`, `16337`, `16339`, `16341`, `16342`, `16344`, `16347`, `16349`, `16351`, `16353`, `16355`, `16357`, `16358`, `16359`, `16360`, `16361`, `16362`, `16363`, `16365`, `16367`, `16368`, `16371`, `16373`, `16375`, `16376`, `16378`, `16379`, `16380`, `16382`, `16383`, `16385`, `16386`, `16387`, `16388`, `16389`, `16391`, `16392`, `16394`, `16395`, `16396`, `16398`, `16399`, `16400`, `16401`, `16402`, `16403`, `16404`, `16405`, `16406`, `16407`, `16408`, `16410`, `16412`, `16413`, `16414`, `16416`, `16417`, `16418`, `16419`, `16421`, `16422`, `16423`, `16425`, `16427`, `16429`, `16430`, `16431`, `16432`, `16433`, `16434`, `16437`, `16438`, `16440`, `16441`, `16443`, `16444`, `16446`, `16448`, `16449`, `16451`, `16452`, `16454`, `16456`, `16458`, `16459`, `16460`, `16461`, `16462`, `16463`, `16465`, `16466`, `16467`, `16468`, `16469`, `16470`, `16471`, `16472`, `16475`, `16476`, `16477`, `16480`, `16482`, `16483`, `16485`, `16486`, `16487`, `16488`, `16489`, `16491`, `16492`, `16493`, `16494`, `16496`, `16497`, `16498`, `16499`, `16500`, `16501`, `16503`, `16504`, `16505`, `16506`, `16507`, `16508`, `16509`, `16512`, `16513`, `16514`, `16515`, `16517`, `16518`, `16519`, `16520`, `16521`, `16523`, `16524`, `16526`, `16528`, `16530`, `16531`, `16533`, `16534`, `16535`, `16536`, `16537`, `16538`, `16539`, `16540`, `16542`, `16546`, `16549`, `16550`, `16552`, `16554`, `16555`, `16556`, `16557`, `16559`, `16561`, `16562`, `16563`, `16564`, `16566`, `16567`, `16568`, `16569`, `16570`, `16572`, `16574`, `16575`, `16576`, `16580`, `16582`, `16583`, `16585`, `16586`, `16587`, `16588`, `16591`, `16593`, `16594`, `16595`, `16597`, `16599`, `16600`, `16601`, `16605`, `16606`, `16607`, `16608`, `16609`, `16610`, `16611`, `16612`, `16613`, `16614`, `16615`, `16616`, `16617`, `16618`, `16619`, `16620`, `16622`, `16623`, `16626`, `16627`, `16628`, `16629`, `16630`, `16632`, `16633`, `16634`, `16636`, `16637`, `16638`, `16640`, `16642`, `16644`, `16645`, `16646`, `16648`, `16650`, `16651`, `16653`, `16654`, `16655`, `16657`, `16658`, `16660`, `16661`, `16662`, `16663`, `16664`, `16666`, `16667`, `16668`, `16671`, `16672`, `16674`, `16675`, `16677`, `16678`, `16679`, `16680`, `16681`, `16682`, `16683`, 
`16684`, `16685`, `16686`, `16687`, `16688`, `16690`, `16691`, `16692`, `16693`, `16694`, `16695`, `16696`, `16697`, `16698`, `16699`, `16700`, `16701`, `16702`, `16704`, `16705`, `16707`, `16708`, `16709`, `16711`, `16712`, `16715`, `16717`, `16718`, `16719`, `16722`, `16723`, `16725`, `16728`, `16730`, `16731`, `16735`, `16736`, `16738`, `16739`, `16742`, `16743`, `16744`, `16745`, `16746`, `16747`, `16748`, `16749`, `16751`, `16752`, `16753`, `16754`, `16755`, `16757`, `16758`, `16760`, `16761`, `16762`, `16763`, `16766`, `16767`, `16770`, `16771`, `16772`, `16773`, `16776`, `16777`, `16780`, `16781`, `16783`, `16785`, `16788`, `16789`, `16791`, `16792`, `16794`, `16795`, `16798`, `16799`, `16801`, `16802`, `16803`, `16804`, `16805`, `16806`, `16808`, `16809`, `16812`, `16814`, `16816`, `16819`, `16820`, `16821`, `16822`, `16823`, `16825`, `16826`, `16827`, `16829`, `16830`, `16831`, `16832`, `16833`, `16834`, `16837`, `16840`, `16841`, `16842`, `16844`, `16845`, `16847`, `16848`, `16849`, `16851`, `16853`, `16855`, `16857`, `16859`, `16861`, `16862`, `16864`, `16865`, `16866`, `12423`, `16867`, `16869`, `16871`, `16872`, `16873`, `16874`, `16875`, `16877`, `16878`, `16880`, `16881`, `16882`, `16883`, `16884`, `16886`, `16889`, `16890`, `16892`, `16893`, `16894`, `16897`, `16898`, `16900`, `16901`, `16903`, `16905`, `16907`, `16908`, `16909`, `16911`, `16913`, `16915`, `16916`, `16917`, `16919`, `16922`, `16923`, `16924`, `16925`, `16926`, `16927`, `16928`, `16929`, `16930`, `16931`, `16932`, `16933`, `16934`, `16935`, `16936`, `16937`, `16938`, `16939`, `16940`, `16941`, `16942`, `16943`, `16944`, `16946`, `16947`, `16948`, `16949`, `16950`, `16951`, `16954`, `16955`, `16956`, `16957`, `16958`, `16961`, `16963`, `16964`, `16965`, `16967`, `16968`, `16970`, `16973`, `16975`, `16976`, `16977`, `16978`, `16979`, `16982`, `16984`, `16986`, `16988`, `16989`, `16991`, `16994`, `16995`, `16997`, `16999`, `17000`, `17001`, `17002`, `17003`, `17005`, `17006`, `17007`, `17008`, `17010`, `17013`, `17017`, `17018`, `17020`, `17021`, `17022`, `17025`, `17027`, `17028`, `17029`, `17030`, `17032`, `17033`, `17034`, `17036`, `17038`, `17039`, `17041`, `17042`, `17044`, `17045`, `17046`, `17047`, `17050`, `17051`, `17052`, `17054`, `17055`, `17057`, `17058`, `17061`, `17063`, `17064`, `17067`, `17069`, `17072`, `17073`, `17074`, `17076`, `17078`, `17079`, `17082`, `17083`, `17084`, `17086`, `17088`, `17090`, `17091`, `17092`, `17094`, `17095`, `17096`, `17097`, `17098`, `17099`, `17101`, `17102`, `17103`, `17104`, `17105`, `17106`, `17108`, `17109`, `17111`, `17113`, `17114`, `17116`, `17117`, `17119`, `17122`, `17123`, `17125`, `17126`, `17127`, `17129`, `17130`, `17132`, `17135`, `17136`, `17139`, `17141`, `17142`, `17143`, `17145`, `17148`, `17149`, `17150`, `17151`, `17154`, `17155`, `17156`, `17158`, `17159`, `17160`, `17161`, `17162`, `17163`, `17164`, `17165`, `17166`, `17168`, `17171`, `17173`, `17175`, `17177`, `17180`, `17181`, `17182`, `17183`, `17184`, `17185`, `17187`, `17188`, `17190`, `17193`, `17195`, `17199`, `17200`, `17202`, `17204`, `17205`, `17206`, `17209`, `17210`, `17211`, `17212`, `17214`, `17215`, `17216`, `17217`, `17220`, `17221`, `17222`, `17224`, `17227`, `17230`, `17232`, `17233`, `17234`, `17235`, `17236`, `17237`, `17238`, `17239`, `17241`, `17242`, `17245`, `17247`, `17249`, `17251`, `17252`, `17253`, `17255`, `17258`, `17260`, `17261`, `17262`, `17263`, `17264`, `17265`, `17266`, `17267`, `17268`, `17269`, `17270`, `17271`, `17272`, `17273`, `17274`, `17275`, `17276`, 
`17278`, `17279`, `17281`, `17282`, `17283`, `17284`, `17288`, `17290`, `17292`, `17293`, `17295`, `17296`, `17297`, `17298`, `17300`, `17301`, `17302`, `17303`, `17305`, `17306`, `17307`, `17309`, `17310`, `17311`, `17312`, `17314`, `17315`, `17316`, `17318`, `17319`, `17321`, `17322`, `17323`, `17327`, `17328`, `17329`, `17330`, `17331`, `17332`, `17333`, `17335`, `17338`, `17340`, `17341`, `17342`, `17343`, `17345`, `17346`, `17349`, `17350`, `17352`, `17353`, `17354`, `17355`, `17356`, `17357`, `17359`, `17360`, `17363`, `17365`, `17366`, `17367`, `17368`, `17370`, `17371`, `17372`, `17373`, `17375`, `17377`, `17378`, `17381`, `17382`, `17383`, `17384`, `17385`, `17386`, `17389`, `17390`, `17391`, `17392`, `17393`, `17396`, `17398`, `17399`, `17401`, `17402`, `17404`, `17406`, `17407`, `17408`, `17409`, `17410`, `17412`, `17413`, `17414`, `17415`, `17416`, `17417`, `17418`, `17419`, `17420`, `17421`, `17423`, `17425`, `17426`, `17428`, `17429`, `17430`, `17431`, `17432`, `17434`, `17436`, `17438`, `17440`, `17441`, `17443`, `17445`, `17446`, `17447`, `17448`, `17450`, `17452`, `17454`, `17455`, `17456`, `17457`, `17458`, `17459`, `17461`, `17462`, `17463`, `17464`, `17465`, `17467`, `17468`, `17469`, `17470`, `17471`, `17472`, `17473`, `17474`, `17475`, `17476`, `17478`, `17479`, `17483`, `17485`, `17486`, `17489`, `17490`, `17491`, `17492`, `17493`, `17494`, `17496`, `17497`, `17499`, `17500`, `17501`, `17504`, `17505`, `17507`, `17508`, `17509`, `17512`, `17513`, `17514`, `17515`, `17517`, `17518`, `17519`, `17520`, `17522`, `17523`, `17524`, `17525`, `17526`, `17527`, `17529`, `17531`, `17532`, `17533`, `17534`, `17535`, `17536`, `17537`, `17538`, `17539`, `17540`, `17541`, `17542`, `17543`, `17545`, `17547`, `17548`, `17549`, `17550`, `17551`, `17552`, `17553`, `17556`, `17557`, `17560`, `17561`, `17562`, `17563`, `17566`, `17567`, `17568`, `17570`, `17572`, `17573`, `17574`, `17576`, `17578`, `17580`, `17581`, `17582`, `17583`, `17584`, `17585`, `17586`, `17587`, `17589`, `17590`, `17591`, `17593`, `17595`, `17596`, `17598`, `17599`, `17600`, `17601`, `17602`, `17603`, `17604`, `17605`, `17606`, `17607`, `17608`, `17609`, `17610`, `17613`, `17615`, `17616`, `17617`, `17619`, `17620`, `17622`, `17623`, `17624`, `17625`, `17626`, `17628`, `17629`, `17630`, `17632`, `17633`, `17634`, `17635`, `17638`, `17639`, `17641`, `17642`, `17643`, `17644`, `17645`, `17646`, `17647`, `17648`, `17650`, `17652`, `17653`, `17654`, `17656`, `17658`, `17661`, `17662`, `17663`, `17664`, `17666`, `17667`, `17668`, `17669`, `17672`, `17674`, `17675`, `17676`, `17677`, `17678`, `17679`, `17680`, `17681`, `17683`, `17684`, `17687`, `17688`, `17689`, `17691`, `17692`, `17694`, `17697`, `17698`, `17700`, `17701`, `17702`, `17703`, `17706`, `17707`, `17709`, `17713`, `17714`, `17715`, `17717`, `17718`, `17719`, `17720`, `17722`, `17723`, `17725`, `17726`, `17727`, `17729`, `17730`, `17732`, `17733`, `17737`, `17738`, `17739`, `17740`, `17741`, `17743`, `17744`, `17745`, `17747`, `17749`, `17752`, `17753`, `17754`, `17756`, `17760`, `17761`, `17762`, `17763`, `17764`, `17766`, `17767`, `17769`, `17771`, `17773`, `17775`, `17776`, `17778`, `17779`, `17780`, `17781`, `17782`, `17784`, `17787`, `17790`, `17792`, `17794`, `17795`, `17796`, `17797`, `17798`, `17802`, `17804`, `17805`, `17807`, `17808`, `17809`, `17811`, `17813`, `17816`, `17817`, `17818`, `17819`, `17820`, `17821`, `17822`, `17823`, `17825`, `17826`, `17827`, `17828`, `17829`, `17830`, `17831`, `17832`, `17833`, `17834`, `17835`, `17836`, `17837`, 
`17838`, `17840`, `17841`, `17842`, `17843`, `17844`, `17846`, `17848`, `17849`, `17850`, `17852`, `17853`, `17854`, `17855`, `17856`, `17858`, `17860`, `17863`, `17864`, `17865`, `17869`, `17871`, `17873`, `17875`, `17876`, `17879`, `17881`, `17884`, `17887`, `17890`, `17891`, `17892`, `17894`, `17895`, `17896`, `17897`, `17898`, `17901`, `17902`, `17903`, `17904`, `17907`, `17908`, `17909`, `17910`, `17912`, `17914`, `17917`, `17918`, `17923`, `17925`, `17927`, `17929`, `17930`, `17932`, `17934`, `17935`, `17937`, `17939`, `17940`, `17941`, `17942`, `17943`, `17945`, `17946`, `17948`, `17949`, `17951`, `17952`, `17954`, `17955`, `17956`, `17957`, `17958`, `17959`, `17961`, `17962`, `17964`, `17965`, `17967`, `17968`, `17971`, `17972`, `17973`, `17975`, `17976`, `17977`, `17978`, `17980`, `17982`, `17984`, `17985`, `17988`, `17989`, `17991`, `17993`, `17995`, `17996`, `17998`, `18000`, `18002`, `18004`, `18005`, `18006`, `18011`, `18012`, `18014`, `18015`, `18016`, `18017`, `18018`, `18019`, `18023`, `18024`, `18026`, `18028`, `18029`, `18030`, `18031`, `18032`, `18033`, `18034`, `18035`, `18036`, `18037`, `18038`, `18039`, `18040`, `18041`, `18042`, `18043`, `18044`, `18045`, `18047`, `18048`, `18050`, `18053`, `18054`, `18055`, `18056`, `18057`, `18059`, `18061`, `18062`, `18064`, `18065`, `18067`, `18068`, `18070`, `18071`, `18073`, `18074`, `18076`, `18078`, `18079`, `18081`, `18082`, `18085`, `18086`, `18087`, `18088`, `18091`, `18092`, `18093`, `18095`, `18096`, `18098`, `18099`, `18102`, `18104`, `18106`, `18108`, `18109`, `18110`, `18112`, `18113`, `18114`, `18115`, `18116`, `18117`, `18118`, `18120`, `18122`, `18124`, `18127`, `18129`, `18130`, `18132`, `18133`, `18134`, `18138`, `18140`, `18142`, `18144`, `18145`, `18147`, `18148`, `18149`, `18150`, `18153`, `18155`, `18157`, `18158`, `18159`, `18160`, `18162`, `18164`, `18167`, `18168`, `18169`, `18170`, `18172`, `18173`, `18176`, `18177`, `18179`, `18181`, `18182`, `18184`, `18186`, `18188`, `18189`, `18192`, `18193`, `18194`, `18195`, `18196`, `18197`, `18198`, `18200`, `18201`, `18203`, `18204`, `18205`, `18207`, `18208`, `18211`, `18213`, `18214`, `18215`, `18216`, `18218`, `18219`, `18220`, `18222`, `18224`, `18226`, `18227`, `18228`, `18229`, `18232`, `18234`, `18236`, `18237`, `18238`, `18239`, `18241`, `18242`, `18243`, `18244`, `18245` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_F` | 99.79 |
| `TOKEN_P` | 99.79 |
| `TOKEN_R` | 99.80 |
| `TOKEN_ACC` | 99.97 |
| `SENTS_F` | 96.20 |
| `SENTS_P` | 96.95 |
| `SENTS_R` | 95.45 |
| `TAG_ACC` | 98.33 |
| `POS_ACC` | 97.91 |
| `MORPH_ACC` | 95.92 |
| `DEP_UAS` | 91.92 |
| `DEP_LAS` | 89.41 |
| `LEMMA_ACC` | 88.22 |
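A minimal usage sketch, assuming the packaged pipeline is installed under the name `fi_udv25_finnishtdt_trf` (e.g. from the wheel published alongside this repository) and that `spacy-experimental` is available for the experimental tokenizer and edit-tree lemmatizer components listed above:
```python
import spacy

# Load the installed pipeline package; the name must match the installed wheel.
nlp = spacy.load("fi_udv25_finnishtdt_trf")

doc = nlp("Helsinki on Suomen pääkaupunki.")
for token in doc:
    # Tags, morphological features, dependency relation and lemma
    # correspond to the components evaluated in the table above.
    print(token.text, token.pos_, str(token.morph), token.dep_, token.lemma_)
```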
|
{"language": ["fi"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
|
explosion/fi_udv25_finnishtdt_trf
| null |
[
"spacy",
"token-classification",
"fi",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fi"
] |
TAGS
#spacy #token-classification #fi #license-cc-by-sa-4.0 #model-index #region-us
|
UD v2.5 benchmarking pipeline for UD\_Finnish-TDT
### Label Scheme
View label scheme (12912 labels for 6 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (12912 labels for 6 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #token-classification #fi #license-cc-by-sa-4.0 #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (12912 labels for 6 components)",
"### Accuracy"
] |
token-classification
|
spacy
|
UD v2.5 benchmarking pipeline for UD_French-Sequoia
| Feature | Description |
| --- | --- |
| **Name** | `fr_udv25_frenchsequoia_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `LGPL-LR` |
| **Author** | [Explosion](https://explosion.ai) |
### Label Scheme
<details>
<summary>View label scheme (916 labels for 6 components)</summary>
| Component | Labels |
| --- | --- |
| **`experimental_char_ner_tokenizer`** | `TOKEN` |
| **`senter`** | `I`, `S` |
| **`tagger`** | `ADJ`, `ADP`, `ADP_DET`, `ADP_PRON`, `ADV`, `AUX`, `CCONJ`, `DET`, `INTJ`, `NOUN`, `NUM`, `PART`, `PRON`, `PROPN`, `PUNCT`, `SCONJ`, `SYM`, `VERB`, `X` |
| **`morphologizer`** | `POS=PROPN`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=PRON\|Person=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=SCONJ`, `POS=ADP`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `NumType=Ord\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=PUNCT`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Number=Plur\|POS=ADJ`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Definite=Ind\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Number=Sing\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `POS=ADV`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `NumType=Card\|POS=NUM`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=CCONJ`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=PRON\|PronType=Rel`, `Number=Sing\|POS=DET\|Poss=Yes`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=ADP\|PronType=Art`, `Definite=Def\|Number=Plur\|POS=ADP\|PronType=Art`, `Definite=Ind\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=VERB\|VerbForm=Inf`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Number=Plur\|POS=DET`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=ADV\|PronType=Int`, `POS=VERB\|Tense=Pres\|VerbForm=Part`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Definite=Ind\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Masc\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Number=Plur\|POS=DET\|Poss=Yes`, `POS=AUX\|VerbForm=Inf`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=ADV\|Polarity=Neg`, `Definite=Ind\|Number=Sing\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `POS=PRON\|Person=3\|Reflex=Yes`, `Gender=Masc\|POS=NOUN`, `POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=PRON\|Person=3`, `Number=Plur\|POS=NOUN`, `NumType=Ord\|Number=Sing\|POS=ADJ`, `POS=VERB\|Tense=Past\|VerbForm=Part`, `POS=AUX\|Tense=Pres\|VerbForm=Part`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|Person=3`, `Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|NumType=Ord\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=PROPN`, `Number=Sing\|POS=PROPN`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=DET`, 
`Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes`, `Gender=Masc\|POS=PRON`, `POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON`, `Gender=Masc\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Number=Sing\|POS=PRON`, `Number=Sing\|POS=PRON\|PronType=Dem`, `Mood=Ind\|POS=VERB\|VerbForm=Fin`, `Number=Plur\|POS=DET\|PronType=Dem`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Dem`, `Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|NumType=Ord\|Number=Sing\|POS=ADJ`, `POS=PRON`, `POS=NUM`, `Gender=Fem\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=PRON`, `Number=Plur\|POS=PRON\|Person=3`, `Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Number=Sing\|POS=PRON\|Person=1`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=INTJ`, `Number=Plur\|POS=PRON\|Person=2`, `NumType=Card\|POS=PRON`, `Definite=Ind\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Art`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `NumType=Card\|POS=NOUN`, `POS=PRON\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3`, `Gender=Fem\|Number=Sing\|POS=DET`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=DET`, `Mood=Sub\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Definite=Ind\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Art`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Dem`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Dem`, `Number=Sing\|POS=DET`, `Gender=Masc\|NumType=Card\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Dem`, `Mood=Ind\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|POS=PRON`, `Gender=Masc\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Sing\|POS=PRON\|PronType=Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part`, `POS=X`, `POS=SYM`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int`, `POS=DET`, `Gender=Masc\|Number=Plur\|POS=PRON`, `POS=PART`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, 
`Mood=Ind\|POS=VERB\|Person=3\|VerbForm=Fin`, `Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int`, `Gender=Masc\|Number=Plur\|POS=DET`, `Gender=Fem\|Number=Plur\|POS=PRON\|PronType=Rel`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Masc\|Number=Plur\|POS=PRON\|PronType=Rel`, `POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|NumType=Ord\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Fut\|VerbForm=Fin`, `Mood=Imp\|POS=VERB\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|Reflex=Yes`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|Reflex=Yes`, `Gender=Masc\|NumType=Card\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Number=Sing\|POS=PRON\|Person=1\|Reflex=Yes`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Imp\|VerbForm=Fin`, `Mood=Sub\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|POS=PROPN`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Sub\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Fut\|VerbForm=Fin`, `Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Sub\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Gender=Fem\|POS=ADV`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Imp\|VerbForm=Fin`, `Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part`, `Gender=Fem\|Number=Plur\|POS=PROPN`, `Gender=Masc\|NumType=Card\|POS=NUM` |
| **`parser`** | `ROOT`, `acl`, `acl:relcl`, `advcl`, `advcl:cleft`, `advmod`, `amod`, `appos`, `aux:caus`, `aux:pass`, `aux:tense`, `case`, `cc`, `ccomp`, `conj`, `cop`, `csubj`, `dep`, `det`, `dislocated`, `expl:comp`, `expl:pass`, `expl:subj`, `fixed`, `flat:foreign`, `flat:name`, `iobj`, `mark`, `nmod`, `nsubj`, `nsubj:caus`, `nsubj:pass`, `nummod`, `obj`, `obl:agent`, `obl:arg`, `obl:mod`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` |
| **`experimental_edit_tree_lemmatizer`** | `0`, `3`, `4`, `6`, `8`, `10`, `12`, `14`, `16`, `20`, `22`, `24`, `26`, `30`, `32`, `34`, `36`, `39`, `40`, `42`, `44`, `45`, `48`, `50`, `52`, `54`, `56`, `58`, `61`, `63`, `66`, `70`, `72`, `74`, `77`, `79`, `81`, `82`, `84`, `86`, `88`, `89`, `91`, `95`, `97`, `99`, `102`, `103`, `106`, `110`, `111`, `113`, `114`, `115`, `118`, `119`, `123`, `125`, `126`, `128`, `130`, `132`, `133`, `134`, `136`, `138`, `139`, `140`, `142`, `143`, `144`, `146`, `148`, `150`, `152`, `155`, `157`, `160`, `161`, `163`, `165`, `167`, `171`, `173`, `174`, `176`, `177`, `179`, `181`, `183`, `185`, `187`, `189`, `191`, `192`, `195`, `197`, `198`, `200`, `202`, `203`, `205`, `208`, `210`, `211`, `212`, `214`, `217`, `218`, `221`, `225`, `227`, `229`, `230`, `232`, `234`, `236`, `238`, `240`, `242`, `243`, `245`, `247`, `248`, `251`, `253`, `255`, `257`, `258`, `260`, `261`, `264`, `267`, `268`, `269`, `272`, `273`, `276`, `277`, `278`, `279`, `284`, `287`, `288`, `291`, `293`, `295`, `298`, `299`, `301`, `304`, `306`, `307`, `309`, `310`, `313`, `315`, `318`, `319`, `322`, `324`, `325`, `327`, `329`, `330`, `332`, `333`, `336`, `339`, `341`, `342`, `344`, `346`, `347`, `350`, `351`, `353`, `356`, `358`, `359`, `361`, `363`, `365`, `367`, `369`, `373`, `376`, `378`, `379`, `380`, `382`, `384`, `386`, `389`, `390`, `391`, `394`, `396`, `398`, `399`, `401`, `404`, `406`, `409`, `412`, `414`, `418`, `421`, `423`, `424`, `426`, `428`, `429`, `430`, `434`, `436`, `438`, `440`, `441`, `443`, `446`, `447`, `448`, `451`, `453`, `456`, `457`, `458`, `460`, `462`, `463`, `465`, `468`, `470`, `472`, `474`, `480`, `482`, `483`, `485`, `486`, `490`, `493`, `494`, `497`, `499`, `500`, `501`, `503`, `506`, `509`, `511`, `512`, `514`, `516`, `518`, `522`, `523`, `526`, `530`, `532`, `534`, `537`, `539`, `540`, `541`, `543`, `545`, `546`, `548`, `550`, `551`, `552`, `554`, `556`, `557`, `558`, `561`, `563`, `565`, `567`, `570`, `571`, `573`, `574`, `575`, `576`, `578`, `579`, `581`, `582`, `583`, `584`, `586`, `587`, `588`, `589`, `590`, `592`, `595`, `600`, `603`, `604`, `606`, `608`, `611`, `612`, `614`, `615`, `616`, `618`, `619`, `620`, `621`, `622`, `623`, `624`, `625`, `626`, `627`, `628`, `629`, `630`, `631`, `632`, `633`, `634`, `635`, `636`, `638`, `640`, `644`, `646`, `647`, `648`, `650`, `652`, `654`, `657`, `659`, `660`, `661`, `662`, `663`, `664`, `665`, `666`, `668`, `672`, `674`, `675`, `677`, `678`, `679`, `680`, `681`, `682`, `683`, `684`, `685`, `686`, `687`, `688`, `689`, `690`, `691`, `692`, `693`, `694`, `695`, `696`, `697`, `698`, `699`, `700`, `701`, `702`, `704`, `705`, `706`, `707`, `708`, `709`, `710`, `711`, `712`, `713`, `714`, `715`, `716`, `717`, `718`, `719`, `720`, `721`, `722`, `723`, `724`, `725`, `726`, `727`, `728`, `729`, `730`, `731`, `732`, `733`, `734`, `735`, `736`, `737`, `738`, `739`, `740`, `741`, `743`, `744`, `747`, `748`, `749`, `750`, `751`, `752`, `753`, `754`, `755`, `756`, `758`, `760`, `762`, `763`, `766`, `767`, `768`, `770`, `772`, `773`, `774`, `775`, `776`, `777`, `778`, `779`, `781`, `783`, `784`, `786`, `787`, `789`, `790`, `791`, `794`, `795`, `796`, `797`, `798`, `799`, `800`, `801`, `802`, `803`, `807`, `809`, `812`, `813`, `815`, `817`, `819`, `821`, `825`, `828`, `829`, `832`, `833`, `834`, `837`, `838`, `839`, `841`, `842`, `844`, `846`, `849`, `851`, `853`, `854`, `855`, `858`, `861`, `862`, `866`, `868`, `869`, `871`, `872`, `874`, `876`, `879`, `880`, `882`, `885`, `887`, `891`, `893`, `895`, `898`, `899`, `902`, 
`903`, `905`, `906`, `908`, `910`, `911`, `912`, `914`, `917`, `920`, `923`, `925`, `927`, `929`, `932`, `933`, `934`, `936`, `938`, `939`, `943`, `944`, `945`, `946`, `947`, `950`, `952`, `954`, `956`, `958`, `959`, `961`, `963`, `965`, `967`, `969`, `971`, `973`, `976`, `978`, `979`, `980`, `981`, `984`, `986`, `987`, `990`, `993`, `994`, `996`, `998`, `999`, `1000`, `1001`, `1002`, `1004`, `1006`, `1007`, `1009`, `1010`, `1012`, `1014`, `1016`, `1018`, `1021`, `1023`, `1026`, `1027`, `1029`, `1031`, `1033`, `1034`, `1036`, `1037`, `1039`, `1041`, `1043`, `1044`, `1045`, `1046`, `1049`, `1051`, `1053`, `1054`, `1055`, `1056`, `1057`, `1058`, `1059`, `1061`, `1063`, `1065`, `1067`, `1068`, `1070`, `1072`, `1073`, `1075`, `1077`, `1078`, `1080`, `1081`, `1082`, `1084`, `1085`, `1087`, `1088`, `1089`, `1090`, `1091`, `1092`, `1094`, `1095`, `1097`, `1098`, `1100`, `1103`, `1106`, `1108`, `1110`, `1111`, `1113`, `1116`, `1117`, `1119`, `1121`, `1124`, `1127`, `1129`, `1131`, `1132`, `1133`, `1135`, `1136`, `1138`, `1139`, `1141`, `1142`, `1145`, `1148`, `1153`, `1154`, `1156`, `1157`, `1159`, `1161` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_F` | 99.70 |
| `TOKEN_P` | 99.69 |
| `TOKEN_R` | 99.71 |
| `TOKEN_ACC` | 99.96 |
| `SENTS_F` | 94.42 |
| `SENTS_P` | 94.42 |
| `SENTS_R` | 94.42 |
| `TAG_ACC` | 98.65 |
| `POS_ACC` | 98.56 |
| `MORPH_ACC` | 97.55 |
| `DEP_UAS` | 94.68 |
| `DEP_LAS` | 92.60 |
| `LEMMA_ACC` | 97.41 |
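A minimal usage sketch, assuming the packaged pipeline is installed as `fr_udv25_frenchsequoia_trf` and that `spacy-experimental` provides the experimental components listed in the feature table above:
```python
import spacy

# Load the installed pipeline package (name from the Feature table above).
nlp = spacy.load("fr_udv25_frenchsequoia_trf")

doc = nlp("Le chat dort sur le canapé.")
# Sentence boundaries come from the senter/parser components.
for sent in doc.sents:
    for token in sent:
        print(token.text, token.pos_, token.dep_, token.lemma_)
```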
|
{"language": ["fr"], "license": "lgpl-lr", "tags": ["spacy", "token-classification"]}
|
explosion/fr_udv25_frenchsequoia_trf
| null |
[
"spacy",
"token-classification",
"fr",
"license:lgpl-lr",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fr"
] |
TAGS
#spacy #token-classification #fr #license-lgpl-lr #model-index #region-us
|
UD v2.5 benchmarking pipeline for UD\_French-Sequoia
### Label Scheme
View label scheme (916 labels for 6 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (916 labels for 6 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #token-classification #fr #license-lgpl-lr #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (916 labels for 6 components)",
"### Accuracy"
] |
token-classification
|
spacy
|
UD v2.5 benchmarking pipeline for UD_Irish-IDT
| Feature | Description |
| --- | --- |
| **Name** | `ga_udv25_irishidt_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
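A minimal loading sketch, assuming the packaged pipeline is installed as `ga_udv25_irishidt_trf` and that `spacy-experimental` provides the experimental components; the full label scheme follows below:
```python
import spacy

# Load the installed pipeline package (name from the Feature table above).
nlp = spacy.load("ga_udv25_irishidt_trf")

doc = nlp("Tá an aimsir go maith inniu.")
for token in doc:
    # POS tags, morphological features and lemmas as predicted by the pipeline.
    print(token.text, token.pos_, str(token.morph), token.lemma_)
```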
### Label Scheme
<details>
<summary>View label scheme (1662 labels for 6 components)</summary>
| Component | Labels |
| --- | --- |
| **`experimental_char_ner_tokenizer`** | `TOKEN` |
| **`senter`** | `I`, `S` |
| **`tagger`** | `!`, `.`, `...`, `?`, `Abr`, `Ad`, `Adj`, `Art`, `CM`, `CU`, `Cmp`, `Cmpd`, `CmpdNoGen`, `Comp`, `Cond`, `Coord`, `Cop`, `Cp`, `Deg`, `Dem`, `Det`, `Dir`, `Foreign`, `FutInd`, `Gn`, `Idf`, `Imper`, `Inf`, `Item`, `Itj`, `Its`, `Loc`, `Nm`, `Noun`, `Num`, `PastImp`, `PastInd`, `Pat`, `Pers`, `Poss`, `Prep`, `PresImp`, `PresInd`, `PresSubj`, `Pron`, `Punct`, `Q`, `Ref`, `Rel`, `Simp`, `Subord`, `Subst`, `Sup`, `Temp`, `Unknown`, `VD`, `VI`, `VT`, `VTI`, `Vb`, `Voc`, `Web`, `cionn` |
| **`morphologizer`** | `POS=ADP`, `Case=NomAcc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=AUX\|Tense=Pres\|VerbForm=Cop`, `Number=Sing\|POS=PRON\|Person=3`, `Mood=Ind\|POS=VERB\|Tense=Fut`, `Definite=Def\|Number=Sing\|POS=DET\|PronType=Art`, `Case=NomAcc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=SCONJ`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3`, `POS=PART\|PartType=Inf`, `POS=NOUN\|VerbForm=Inf`, `Number=Sing\|POS=ADP\|PronType=Art`, `POS=ADV`, `POS=PUNCT`, `POS=PART\|PartType=Vb\|Polarity=Neg`, `Form=Len\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Fut`, `Number=Sing\|POS=NOUN`, `POS=CCONJ`, `Case=NomAcc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=NomAcc\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Int\|POS=AUX\|Polarity=Neg\|Tense=Pres\|VerbForm=Cop`, `Degree=Pos\|POS=ADJ`, `POS=PART\|PartType=Vb\|PronType=Rel`, `Form=Len\|Mood=Cnd\|POS=VERB`, `Case=NomAcc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Number=Sing\|POS=ADP\|Person=1`, `Case=NomAcc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Form=Emp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|PronType=Rel\|Tense=Pres`, `Case=NomAcc\|Form=Ecl\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Definite=Def\|Number=Plur\|POS=DET\|PronType=Art`, `Case=NomAcc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=DET\|Person=1\|Poss=Yes`, `POS=PART\|PartType=Cmpl`, `Form=Ecl\|Mood=Ind\|POS=VERB\|Tense=Past`, `POS=PRON\|PronType=Dem`, `POS=PART\|PartType=Vb`, `Form=Len\|Mood=Ind\|POS=VERB\|Tense=Past`, `Number=Sing\|POS=PRON\|Person=2`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=PART\|PartType=Comp`, `Degree=Cmp,Sup\|POS=ADJ`, `Case=NomAcc\|Form=Len\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Form=Ecl\|Mood=Ind\|POS=VERB\|Tense=Pres`, `NumType=Card\|POS=NUM`, `POS=ADJ\|VerbForm=Part`, `Number=Plur\|POS=ADP\|Person=1`, `Form=Len\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Pres`, `POS=PRON\|PronType=Int`, `Mood=Ind\|POS=VERB\|PronType=Rel\|Tense=Pres`, `Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Pres`, `Dialect=Munster\|POS=X`, `POS=ADP\|PrepForm=Cmpd`, `Case=NomAcc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=NomAcc\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Form=Ecl\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past`, `POS=NOUN\|VerbForm=Vnoun`, `Gender=Masc\|Number=Sing\|POS=ADP\|Person=3`, `Gender=Masc\|Number=Sing\|POS=ADP\|Person=3\|Poss=Yes`, `Case=Gen\|Gender=Masc\|NounType=Strong\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Form=Len\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Form=Len\|Mood=Imp\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past`, `Number=Plur\|POS=DET\|Person=3\|Poss=Yes`, `Case=NomAcc\|Form=Ecl\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Tense=Past\|Voice=Auto`, `Number=Plur\|POS=PRON\|Person=3`, `Case=Gen\|Definite=Def\|Gender=Masc\|NounType=Weak\|Number=Plur\|POS=NOUN`, `Form=Len\|POS=NOUN\|VerbForm=Inf`, `POS=PART\|PartType=Ad`, `POS=PART\|PartType=Pat`, `POS=NUM`, `Mood=Ind\|POS=VERB\|Tense=Pres`, `Case=NomAcc\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Form=Len\|POS=VERB`, `POS=PRON\|Reflex=Yes`, `POS=VERB`, `Case=NomAcc\|Form=Len\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=NomAcc\|Gender=Masc\|Number=Plur\|POS=NOUN`, 
`POS=SCONJ\|VerbForm=Cop`, `Form=Len\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Past`, `Gender=Masc\|Number=Sing\|POS=ADP\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=NomAcc\|Form=HPref\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=DET\|PronType=Dem`, `Form=Len\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Past`, `Case=NomAcc\|Form=HPref\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Masc\|NounType=Strong\|Number=Plur\|POS=NOUN`, `Case=Gen\|Definite=Def\|Gender=Fem\|NounType=Strong\|Number=Plur\|POS=NOUN`, `Case=Dat\|Form=Ecl\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=ADP\|Person=3`, `POS=PART\|PartType=Comp`, `POS=PART`, `Case=NomAcc\|Form=Ecl\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=NomAcc\|Form=Len\|Gender=Fem\|Number=Plur\|POS=NOUN`, `POS=DET\|PronType=Ind`, `Form=Len\|Mood=Ind\|POS=VERB\|Tense=Fut\|Voice=Auto`, `Case=Gen\|Gender=Fem\|NounType=Strong\|Number=Plur\|POS=NOUN`, `Form=Len\|Mood=Ind\|POS=VERB\|Tense=Pres\|Voice=Auto`, `POS=X`, `POS=PART\|PronType=Rel`, `Form=VF\|POS=AUX\|Tense=Pres\|VerbForm=Cop`, `Gender=Masc\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes`, `POS=AUX\|Polarity=Neg\|PronType=Rel\|Tense=Pres\|VerbForm=Cop`, `Form=Len\|Mood=Ind\|POS=VERB\|Tense=Pres`, `Case=Gen\|Form=Ecl\|Gender=Fem\|NounType=Strong\|Number=Plur\|POS=NOUN`, `POS=PART\|PartType=Vb\|Polarity=Neg\|PronType=Rel`, `Number=Sing\|POS=PRON\|PronType=Int`, `Abbr=Yes\|POS=X`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=AUX\|Tense=Past\|VerbForm=Cop`, `Number=Sing\|POS=PRON\|Person=1`, `Form=Ecl\|Mood=Ind\|POS=VERB\|Tense=Pres\|Voice=Auto`, `Case=NomAcc\|Form=HPref\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Form=Len\|Mood=Ind\|POS=VERB\|Tense=Fut`, `Case=Gen\|POS=NOUN\|VerbForm=Inf`, `Form=HPref\|POS=DET\|PronType=Ind`, `Case=NomAcc\|Form=Len\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Number=Plur\|POS=PRON\|Person=1`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=NomAcc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Fem\|NounType=Weak\|Number=Plur\|POS=NOUN`, `Case=Gen\|NounType=Strong\|Number=Plur\|POS=ADJ`, `Foreign=Yes\|POS=X`, `Mood=Ind\|POS=VERB\|Tense=Fut\|Voice=Auto`, `Number=Plur\|POS=ADP\|Person=3\|PronType=Emp`, `Mood=Ind\|POS=VERB\|Tense=Past`, `POS=PART\|PartType=Cmpl\|Polarity=Neg\|Tense=Past`, `Number=Plur\|POS=ADP\|Person=3\|Poss=Yes`, `Form=Ecl\|POS=NOUN\|VerbForm=Inf`, `Case=Gen\|Form=Len\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Form=Len\|NumType=Card\|POS=NUM`, `Abbr=Yes\|POS=NUM`, `Case=NomAcc\|NounType=NotSlender\|Number=Plur\|POS=ADJ`, `Case=NomAcc\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|NounType=Weak\|Number=Plur\|POS=PROPN`, `Mood=Ind\|POS=VERB\|Tense=Pres\|Voice=Auto`, `POS=AUX\|Polarity=Neg\|Tense=Past\|VerbForm=Cop`, `Degree=Pos\|Form=Len\|POS=ADJ`, `Form=Len\|NumType=Ord\|POS=NUM`, `Number=Plur\|POS=ADP\|PronType=Art`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2`, `Form=Len\|Number=Plur\|POS=ADP\|Person=1`, `Case=NomAcc\|Form=Len\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=NomAcc\|Form=Len\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Degree=Pos\|Form=Ecl\|POS=ADJ`, `Mood=Imp\|POS=PART\|PartType=Vb`, `Mood=Cnd\|POS=VERB`, `Number=Sing\|POS=ADP\|Person=1\|Poss=Yes`, `Form=Ecl\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1`, `Form=Len\|Mood=Imp\|POS=VERB\|Tense=Past\|Voice=Auto`, `Case=Gen\|Gender=Masc\|NounType=Weak\|Number=Plur\|POS=NOUN`, `POS=PART\|PartType=Num`, `Form=HPref\|NumType=Card\|POS=NUM`, 
`Form=Len\|Mood=Sub\|POS=VERB\|Polarity=Neg\|Tense=Pres`, `Case=Gen\|Form=Len\|Gender=Masc\|NounType=Strong\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=ADP\|Person=3`, `Number=Sing\|POS=PRON\|Person=2\|PronType=Emp`, `POS=PART\|PartType=Vb\|Tense=Past`, `Case=NomAcc\|Form=Ecl\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Definite=Def\|Dialect=Ulster\|POS=X`, `Form=Ecl\|Mood=Ind\|POS=VERB\|Tense=Fut`, `POS=PART\|PartType=Vb\|Polarity=Neg\|Tense=Past`, `POS=PART\|PartType=Cmpl\|Polarity=Neg`, `Case=NomAcc\|Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=ADP\|Poss=Yes`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Form=Len\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Form=Len\|Mood=Imp\|POS=VERB\|Voice=Auto`, `Definite=Def\|POS=DET`, `POS=AUX\|PronType=Rel\|Tense=Pres\|VerbForm=Cop`, `Case=NomAcc\|NounType=Slender\|Number=Plur\|POS=ADJ`, `POS=AUX\|Polarity=Neg\|PronType=Rel\|Tense=Past\|VerbForm=Cop`, `Form=Ecl\|Mood=Cnd\|POS=VERB`, `Case=Gen\|Form=Ecl\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `POS=AUX\|Polarity=Neg\|Tense=Pres\|VerbForm=Cop`, `Form=Len\|Mood=Imp\|POS=VERB\|Tense=Past`, `Case=Gen\|Form=Ecl\|Gender=Masc\|NounType=Strong\|Number=Plur\|POS=NOUN`, `Number=Sing\|POS=ADP\|Person=2`, `Degree=Pos\|Form=HPref\|POS=ADJ`, `Dialect=Munster\|POS=DET\|PronType=Dem`, `Gender=Fem\|Number=Sing\|POS=ADP\|Person=3\|Poss=Yes`, `Case=NomAcc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Number=Plur\|POS=ADP\|Person=1\|PronType=Emp`, `POS=PART\|PartType=Vb\|Polarity=Neg\|PronType=Rel\|Tense=Past`, `POS=PRON\|PronType=Ind`, `Number=Plur\|POS=ADP\|Person=1\|Poss=Yes`, `Gender=Fem\|Number=Sing\|POS=ADP\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|NounType=Weak\|Number=Plur\|POS=ADJ`, `Form=Emp\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres`, `Case=NomAcc\|Form=Len\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Form=VF\|POS=AUX\|Polarity=Neg\|Tense=Past\|VerbForm=Cop`, `Case=NomAcc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Fem\|POS=PROPN`, `Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3`, `Form=VF\|POS=AUX\|Polarity=Neg\|PronType=Rel\|Tense=Past\|VerbForm=Cop`, `Case=NomAcc\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Form=Ecl\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=NomAcc\|Form=Emp\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|Form=Ecl\|Gender=Masc\|Number=Plur\|POS=PROPN`, `POS=PROPN`, `Mood=Imp\|POS=PART\|PartType=Vb\|Polarity=Neg`, `Gender=Fem\|Number=Sing\|POS=DET\|Person=3\|Poss=Yes`, `Form=Ecl\|NumType=Card\|POS=NUM`, `Case=Gen\|Form=Len\|Gender=Masc\|NounType=Weak\|Number=Plur\|POS=NOUN`, `Dialect=Munster\|Mood=Ind\|POS=X\|Tense=Past\|Voice=Auto`, `Number=Sing\|POS=DET\|Person=2\|Poss=Yes`, `Case=Gen\|Number=Sing\|POS=NOUN`, `Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Past\|Voice=Auto`, `Definite=Def\|NumType=Card\|POS=NUM`, `Form=Len\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1`, `Case=NomAcc\|Definite=Ind\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Form=Len\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Pres`, `Form=Len\|Mood=Cnd\|POS=VERB\|Voice=Auto`, `Mood=Imp\|POS=VERB\|Tense=Past`, `Case=Gen\|Form=Ecl\|Gender=Masc\|NounType=Weak\|Number=Plur\|POS=NOUN`, `Number=Plur\|POS=ADP\|Person=3\|Poss=Yes\|PronType=Prs`, `Gender=Masc\|Number=Sing\|POS=PROPN`, `Form=Len\|Mood=Ind\|POS=VERB\|Tense=Past\|Voice=Auto`, `Definite=Def\|Form=Ecl\|POS=DET`, `Number=Plur\|POS=ADJ`, 
`Form=Ecl\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Fut\|Voice=Auto`, `Form=VF\|POS=AUX\|Tense=Past\|VerbForm=Cop`, `Form=Len\|Number=Sing\|POS=NOUN`, `POS=AUX`, `Gender=Masc\|POS=PRON\|Person=3`, `Case=NomAcc\|Form=Len\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Gen\|Form=Len\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Mood=Int\|POS=PART\|PartType=Vb\|Polarity=Neg`, `Form=Ecl\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Pres`, `Form=Ecl\|Mood=Imp\|POS=VERB\|Tense=Past`, `Number=Sing\|POS=PRON\|Person=1\|PronType=Emp`, `Case=NomAcc\|Foreign=Yes\|Gender=Fem\|Number=Sing\|POS=X`, `Dialect=Munster\|Form=Len\|Mood=Ind\|Number=Sing\|POS=X\|Person=1\|Tense=Past`, `POS=PART\|PartType=Vb\|PronType=Rel\|Tense=Past`, `Form=Len\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=1`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2`, `POS=PART\|PartType=Voc`, `Form=HPref\|POS=NOUN\|VerbForm=Inf`, `Case=Gen\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Degree=Cmp,Sup\|Form=Len\|POS=ADJ`, `POS=NOUN`, `Form=Ecl\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres`, `Case=NomAcc\|Form=Ecl\|Gender=Fem\|Number=Sing\|POS=PROPN`, `POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Emp`, `Number=Plur\|POS=ADP\|Person=2`, `POS=SCONJ\|Tense=Past\|VerbForm=Cop`, `NumType=Ord\|POS=NUM`, `Mood=Int\|POS=AUX\|Polarity=Neg\|Tense=Past\|VerbForm=Cop`, `Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=PRON\|Person=3\|PronType=Emp`, `Dialect=Ulster\|POS=X\|VerbForm=Cop`, `Mood=Int\|Number=Sing\|POS=AUX\|PronType=Art\|VerbForm=Cop`, `Case=NomAcc\|Definite=Def\|Gender=Fem\|POS=NOUN`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Art`, `Form=Ecl\|POS=NOUN\|VerbForm=Vnoun`, `Case=NomAcc\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Form=Ecl\|Mood=Sub\|POS=VERB\|Tense=Pres`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Voc\|Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Number=Plur\|POS=ADJ\|PartType=Voc`, `Form=Len\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Polarity=Neg\|Tense=Past`, `Number=Sing\|POS=DET\|PronType=Int`, `Form=Len\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3`, `Dialect=Munster\|Form=Len\|Mood=Ind\|POS=VERB\|Tense=Past`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|Polarity=Neg`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres`, `Case=NomAcc\|Gender=Masc\|POS=PROPN`, `Case=Gen\|Form=Len\|Gender=Masc\|POS=PROPN`, `Form=Ecl\|POS=VERB`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres`, `Form=Ecl\|Number=Sing\|POS=NOUN`, `Form=Len\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Fut\|Voice=Auto`, `POS=AUX\|PronType=Dem\|VerbForm=Cop`, `POS=AUX\|PronType=Rel\|Tense=Past\|VerbForm=Cop`, `Case=NomAcc\|Gender=Masc\|Mood=Ind\|Number=Sing\|POS=VERB\|Tense=Pres`, `Form=Ecl\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past`, `Form=Len\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past`, `Abbr=Yes\|POS=SYM`, `Case=Gen\|Form=Len\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Form=Len\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Pres\|Voice=Auto`, `POS=PART\|PartType=Cop\|PronType=Rel`, `Form=VF\|POS=AUX\|PronType=Rel\|Tense=Past\|VerbForm=Cop`, `Case=Dat\|Definite=Ind\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past`, `Form=Len\|Number=Sing\|POS=PRON\|Person=2`, `Case=Voc\|Form=Len\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Masc\|Number=Sing\|POS=ADJ\|PartType=Voc`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Voc\|Form=Len\|Gender=Fem\|POS=PROPN`, 
`Case=Gen\|Form=HPref\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Dialect=Ulster\|Gender=Masc\|Number=Sing\|POS=X\|Person=3`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1`, `Form=Ecl\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Fut`, `Form=Len\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Fut`, `Case=NomAcc\|Form=HPref\|Gender=Masc\|Number=Sing\|POS=PROPN`, `POS=ADV\|PronType=Int`, `Form=Ecl\|Mood=Cnd\|POS=VERB\|Voice=Auto`, `POS=ADP\|PronType=Art`, `Mood=Int\|POS=AUX\|Tense=Pres\|VerbForm=Cop`, `POS=PART\|PartType=Deg`, `Number=Sing\|POS=ADP\|Person=1\|PronType=Emp`, `Number=Plur\|POS=PRON\|Person=1\|PronType=Emp`, `Gender=Masc\|Number=Sing\|POS=AUX\|Person=3\|VerbForm=Cop`, `Foreign=Yes\|POS=ADJ`, `Foreign=Yes\|POS=NOUN`, `Foreign=Yes\|POS=VERB`, `Foreign=Yes\|POS=ADP`, `Abbr=Yes\|POS=PROPN`, `Form=Len\|Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2`, `Case=Voc\|Form=Len\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Form=Len\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Past\|Voice=Auto`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Past`, `Case=NomAcc\|Form=Ecl\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Form=Len\|POS=ADV`, `Case=Voc\|Form=Len\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Number=Plur\|POS=PRON\|Person=2`, `POS=DET`, `Number=Sing\|POS=ADP\|Person=3`, `Mood=Cnd\|POS=VERB\|Voice=Auto`, `Form=Len\|Number=Sing\|POS=ADP\|Person=1`, `Dialect=Munster\|Mood=Imp\|Number=Sing\|POS=X\|Person=2\|Polarity=Neg`, `Dialect=Munster\|POS=X\|PronType=Dem`, `Form=Len\|POS=VERB\|Polarity=Neg`, `Form=Ecl\|Mood=Ind\|POS=VERB\|Polarity=Neg\|Tense=Past`, `Case=Gen\|Gender=Masc\|POS=PROPN`, `Form=Ecl\|NumType=Ord\|POS=NUM`, `Mood=Ind\|POS=VERB\|PronType=Rel\|Tense=Fut`, `Form=Len\|Number=Plur\|POS=ADP\|Person=3`, `Case=NomAcc\|Form=HPref\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Form=Ecl\|Mood=Ind\|POS=VERB\|Tense=Fut\|Voice=Auto`, `Form=Len\|POS=ADJ\|VerbForm=Part`, `Case=Gen\|Form=Len\|Gender=Fem\|POS=PROPN`, `Form=Ecl\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Form=Len\|POS=NOUN\|VerbForm=Inf`, `Degree=Pos\|POS=NOUN`, `POS=AUX\|PartType=Comp\|Tense=Past\|VerbForm=Cop`, `Number=Plur\|POS=DET\|Person=1\|Poss=Yes`, `Case=Dat\|Form=Len\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Form=HPref\|Gender=Fem\|Number=Sing\|POS=PROPN`, `POS=ADP\|Person=3\|Poss=Yes`, `POS=NOUN\|Reflex=Yes`, `Dialect=Ulster\|POS=X\|PartType=Vb\|Polarity=Neg`, `Form=Emp\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres`, `Gender=Masc\|Number=Sing\|POS=ADP\|Person=3\|PronType=Emp`, `Form=Ecl\|POS=PART\|PartType=Vb\|PronType=Rel`, `Form=Ecl\|Mood=Cnd\|POS=VERB\|Polarity=Neg`, `Case=Gen\|Form=Ecl\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Form=Len\|Mood=Cnd\|POS=VERB\|Polarity=Neg`, `Form=Len\|POS=PRON\|PronType=Ind`, `Gender=Masc\|Number=Sing\|POS=NOUN`, `Form=Len\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Gender=Masc\|Number=Plur\|POS=PROPN`, `Gender=Masc\|Number=Plur\|POS=NOUN`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=NOUN`, `Form=Ecl\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Form=Len\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Form=HPref\|Gender=Fem\|POS=PROPN`, `Form=Len\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Form=Len\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Form=Len\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Form=HPref\|Gender=Fem\|Number=Sing\|POS=NOUN`, `NounType=Slender\|Number=Plur\|POS=ADJ`, `Definite=Def\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Form=Ecl\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Form=Ecl\|Gender=Fem\|Number=Plur\|POS=NOUN`, 
`POS=PRON`, `Definite=Def\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Sing\|POS=PROPN`, `Number=Sing\|POS=NOUN\|PartType=Comp`, `Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Form=Ecl\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=PART\|PartType=Cmpl\|Tense=Past`, `Form=Ecl\|Mood=Int\|POS=VERB\|Polarity=Neg`, `Gender=Masc\|Number=Sing\|POS=ADJ`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Number=Plur\|POS=DET\|PronType=Art`, `NounType=NotSlender\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|POS=AUX\|VerbForm=Cop`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Form=Len\|Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres`, `Gender=Masc\|Number=Sing\|POS=INTJ`, `Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Emp`, `Gender=Fem\|Number=Sing\|POS=SCONJ`, `POS=PART\|Tense=Pres\|VerbForm=Cop`, `Case=Gen\|Definite=Def\|Gender=Fem\|NounType=Weak\|Number=Plur\|POS=NOUN`, `Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Number=Sing\|POS=ADJ`, `Form=Ecl\|Gender=Fem\|Number=Sing\|POS=PROPN`, `POS=DET\|PronType=Art`, `Form=Ecl,Emp\|Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past`, `Form=Ecl\|Mood=Cnd,Int\|POS=VERB`, `Definite=Def\|Dialect=Munster\|Gender=Fem\|Number=Sing\|POS=X`, `POS=AUX\|PronType=Dem`, `POS=AUX\|PartType=Cmpl\|Tense=Pres\|VerbForm=Cop`, `Form=Len\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=1\|Tense=Past`, `POS=PART\|PartType=Inf\|PronType=Rel`, `Form=Ecl\|Number=Plur\|POS=NOUN`, `Form=Len\|Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres`, `POS=SCONJ\|Tense=Past`, `Form=HPref\|Gender=Masc\|Number=Sing\|POS=ADP\|Person=3`, `Form=Ecl\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Definite=Def\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Form=HPref\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3`, `POS=INTJ`, `Form=HPref\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Form=Len\|Gender=Fem\|NounType=Strong\|Number=Plur\|POS=NOUN`, `Form=Ecl\|Mood=Sub\|POS=VERB\|Tense=Pres\|Voice=Auto`, `Number=Sing\|POS=VERB\|Person=1`, `Gender=Masc\|POS=PROPN`, `POS=ADP\|PronType=Rel`, `Mood=Ind\|POS=NOUN\|PronType=Rel\|Tense=Pres`, `Form=Ecl\|Mood=Cnd\|Number=Plur\|POS=VERB\|Person=3`, `Gender=Masc\|Number=Plur\|POS=ADJ`, `Form=Ecl\|Mood=Cnd,Int\|POS=VERB\|Voice=Auto`, `Form=Len\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Gender=Fem\|POS=PROPN`, `Mood=Cnd\|Number=Sing\|POS=VERB\|Person=2`, `Form=HPref\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Dialect=Ulster\|Gender=Masc\|Number=Plur\|POS=X`, `Case=Gen\|Definite=Def\|Gender=Masc\|Number=Sing\|POS=PROPN` |
| **`parser`** | `ROOT`, `acl:relcl`, `advcl`, `advmod`, `amod`, `appos`, `case`, `case:voc`, `cc`, `ccomp`, `compound`, `compound:prt`, `conj`, `cop`, `csubj:cleft`, `csubj:cop`, `dep`, `det`, `fixed`, `flat`, `flat:foreign`, `flat:name`, `list`, `mark`, `mark:prt`, `nmod`, `nmod:poss`, `nsubj`, `nummod`, `obj`, `obl`, `obl:prep`, `obl:tmod`, `parataxis`, `punct`, `vocative`, `xcomp`, `xcomp:pred` |
| **`experimental_edit_tree_lemmatizer`** | `1`, `2`, `4`, `7`, `10`, `11`, `13`, `15`, `16`, `17`, `19`, `21`, `25`, `27`, `28`, `30`, `32`, `34`, `36`, `37`, `40`, `42`, `44`, `46`, `51`, `54`, `56`, `59`, `62`, `64`, `66`, `68`, `70`, `72`, `73`, `74`, `77`, `81`, `83`, `85`, `88`, `89`, `91`, `93`, `96`, `99`, `100`, `102`, `104`, `108`, `114`, `116`, `119`, `120`, `121`, `123`, `126`, `127`, `128`, `131`, `133`, `135`, `137`, `138`, `139`, `142`, `144`, `145`, `147`, `149`, `151`, `153`, `157`, `159`, `161`, `164`, `165`, `169`, `171`, `173`, `176`, `181`, `183`, `185`, `186`, `188`, `189`, `191`, `193`, `194`, `195`, `197`, `199`, `201`, `202`, `205`, `207`, `209`, `210`, `213`, `216`, `217`, `220`, `221`, `223`, `225`, `227`, `228`, `230`, `232`, `233`, `236`, `238`, `240`, `241`, `242`, `244`, `246`, `247`, `249`, `251`, `252`, `254`, `256`, `257`, `259`, `264`, `267`, `268`, `271`, `273`, `275`, `276`, `278`, `279`, `280`, `282`, `283`, `285`, `286`, `289`, `291`, `293`, `295`, `296`, `299`, `301`, `302`, `303`, `304`, `305`, `306`, `308`, `310`, `311`, `312`, `315`, `318`, `319`, `320`, `321`, `323`, `325`, `327`, `328`, `332`, `334`, `336`, `339`, `341`, `343`, `346`, `348`, `350`, `353`, `355`, `358`, `359`, `361`, `363`, `365`, `366`, `367`, `368`, `370`, `371`, `373`, `376`, `378`, `380`, `381`, `384`, `385`, `386`, `389`, `390`, `392`, `396`, `398`, `400`, `401`, `402`, `405`, `407`, `409`, `410`, `411`, `413`, `415`, `416`, `419`, `421`, `422`, `423`, `426`, `427`, `428`, `429`, `430`, `431`, `432`, `433`, `434`, `437`, `438`, `439`, `440`, `441`, `442`, `443`, `446`, `449`, `453`, `455`, `457`, `458`, `459`, `461`, `462`, `464`, `466`, `469`, `471`, `473`, `475`, `478`, `479`, `480`, `482`, `483`, `485`, `487`, `490`, `491`, `492`, `495`, `496`, `497`, `500`, `502`, `505`, `507`, `509`, `512`, `513`, `515`, `516`, `518`, `520`, `522`, `523`, `525`, `527`, `530`, `531`, `532`, `534`, `536`, `537`, `538`, `540`, `541`, `542`, `545`, `546`, `548`, `549`, `551`, `554`, `557`, `560`, `562`, `564`, `565`, `567`, `570`, `571`, `573`, `574`, `578`, `579`, `581`, `585`, `587`, `590`, `591`, `592`, `596`, `597`, `598`, `599`, `600`, `602`, `604`, `605`, `606`, `608`, `609`, `611`, `613`, `614`, `616`, `618`, `619`, `621`, `623`, `624`, `627`, `628`, `629`, `630`, `632`, `633`, `635`, `636`, `637`, `640`, `642`, `644`, `646`, `648`, `649`, `651`, `653`, `655`, `656`, `657`, `659`, `660`, `663`, `665`, `667`, `669`, `673`, `675`, `676`, `678`, `682`, `683`, `686`, `688`, `690`, `691`, `693`, `696`, `698`, `702`, `705`, `708`, `710`, `711`, `712`, `714`, `715`, `717`, `719`, `721`, `722`, `724`, `725`, `727`, `729`, `734`, `736`, `738`, `739`, `742`, `743`, `744`, `746`, `750`, `751`, `753`, `755`, `756`, `758`, `759`, `760`, `761`, `762`, `764`, `766`, `767`, `769`, `770`, `771`, `772`, `773`, `774`, `777`, `778`, `780`, `781`, `783`, `784`, `785`, `787`, `789`, `790`, `793`, `794`, `796`, `798`, `800`, `802`, `803`, `805`, `808`, `809`, `810`, `811`, `813`, `815`, `816`, `817`, `820`, `822`, `827`, `828`, `830`, `833`, `836`, `837`, `838`, `841`, `842`, `843`, `845`, `847`, `849`, `850`, `852`, `24`, `854`, `856`, `859`, `860`, `861`, `862`, `863`, `864`, `866`, `868`, `869`, `870`, `873`, `874`, `877`, `878`, `879`, `881`, `884`, `886`, `888`, `889`, `890`, `893`, `894`, `897`, `898`, `900`, `902`, `905`, `908`, `909`, `910`, `911`, `912`, `913`, `915`, `916`, `917`, `919`, `921`, `924`, `926`, `927`, `928`, `929`, `930`, `932`, `935`, `937`, `941`, `943`, `945`, `946`, `948`, 
`950`, `951`, `953`, `954`, `955`, `958`, `960`, `963`, `965`, `966`, `967`, `968`, `969`, `971`, `974`, `976`, `978`, `979`, `981`, `982`, `983`, `984`, `985`, `986`, `988`, `990`, `992`, `994`, `997`, `998`, `999`, `1001`, `1003`, `1004`, `1006`, `1008`, `1010`, `1011`, `1012`, `1015`, `1017`, `1019`, `1020`, `1021`, `1022`, `1025`, `1028`, `1030`, `1032`, `1033`, `1035`, `1036`, `1039`, `1040`, `1041`, `1042`, `1044`, `1045`, `1046`, `1047`, `1048`, `1049`, `1051`, `1053`, `1055`, `1056`, `1057`, `1058`, `1061`, `1062`, `1064`, `1065`, `1068`, `1070`, `1071`, `1073`, `1074`, `1076`, `1078`, `1080`, `1082`, `1084`, `1086`, `1087`, `1088`, `1089`, `1090`, `1091`, `1092`, `1093`, `1095`, `1097`, `1100`, `1101`, `1103`, `1105`, `1106`, `1108`, `1110`, `1113`, `1114`, `1115`, `1117`, `1118`, `1120`, `1123`, `1127`, `1128`, `1129`, `1131`, `1135`, `1137`, `1138`, `1140`, `1141`, `1143`, `1144`, `1145`, `818`, `1146`, `1148`, `1149`, `1150`, `1152`, `1154`, `1157`, `1159`, `1160`, `1163`, `1166`, `1168`, `1170`, `1171`, `1173`, `1174`, `1176`, `1179`, `1180`, `1182`, `1183`, `1184`, `1186`, `1187`, `1188`, `1189`, `1191`, `1192`, `1195`, `1198`, `1199`, `1200`, `1201`, `1202`, `1205`, `1206`, `1208`, `1210`, `1212`, `1214`, `1215`, `1217`, `1218`, `1219`, `1220`, `1223`, `1227`, `1228`, `1230`, `1231`, `1233`, `1235`, `1236`, `1240`, `1242`, `1244`, `1245`, `1247`, `1248`, `1249`, `1251`, `1252`, `1253`, `1254`, `1255`, `1256`, `1259`, `1260`, `1263`, `1264`, `1267`, `1270`, `1272`, `1273`, `1275`, `1277`, `1279`, `1281`, `1282`, `1283`, `1285`, `1286`, `1288`, `1290`, `1292`, `1295`, `1297`, `1298`, `1299`, `1301`, `1302`, `1305`, `1306`, `1308`, `1309`, `1310`, `1311`, `1313`, `1315`, `1317`, `1318`, `1319`, `1321`, `1323`, `1325`, `1326`, `1327`, `1330`, `1333`, `1336`, `1338`, `1339`, `1340`, `1341`, `1343`, `0`, `1345`, `1347`, `1350`, `1352`, `1356`, `1359`, `1360`, `1361`, `1362`, `1365`, `1367`, `1368`, `1369`, `1371`, `1373`, `1375`, `1378`, `1379`, `1382`, `1384`, `1387`, `1390`, `1392`, `1395`, `1396`, `1397`, `1400`, `1403`, `1406`, `1407`, `1410`, `1411`, `1412`, `1414`, `1416`, `1418`, `1421`, `1422`, `1423`, `1424`, `1426`, `1429`, `1431`, `1433`, `1436`, `1437`, `1442`, `1443`, `1445`, `1446`, `1448`, `1449`, `1450`, `1451`, `1452`, `1453`, `1454`, `1457`, `1460`, `1462`, `1463`, `1466`, `1467`, `1470`, `1471`, `1473`, `1474`, `1477`, `1479`, `1480`, `1481`, `1484`, `1486`, `1489`, `1492`, `1495`, `1496`, `1497`, `1498`, `1501`, `1502`, `1505`, `1506`, `1508`, `1509`, `1510`, `1511`, `1513`, `1514`, `1516`, `1518`, `1521`, `1523`, `1527`, `1528`, `1531`, `1532`, `1534`, `1537`, `1540`, `1541`, `1544`, `1545`, `1547`, `1548`, `1549`, `1550`, `1551`, `1552`, `1553`, `1554`, `1555`, `1557`, `1558`, `1559`, `1560`, `1561`, `1563`, `1565`, `1566`, `1567`, `1569`, `1571`, `1573`, `1576`, `1578`, `1579`, `1580`, `1582`, `1583`, `1211`, `1585`, `1587`, `1588`, `1590`, `1593`, `1595`, `1596`, `1597`, `1598`, `1599`, `1602`, `1604`, `1606`, `1608`, `1610`, `1611`, `1612`, `1613`, `1615`, `1617`, `1618`, `1620`, `1622`, `1623`, `1624`, `1625`, `1626`, `1629`, `1630`, `1632`, `1633`, `1634`, `1637`, `1639`, `65`, `1641`, `1643`, `1644`, `1646`, `1648`, `1649`, `1650`, `1651`, `1652`, `1654`, `1655`, `1658`, `1660`, `1661`, `1662`, `1663`, `1665`, `1666`, `1668`, `1669`, `1671`, `1672`, `1675`, `1676`, `1680`, `1681`, `1682`, `1684`, `1687`, `1689`, `1690`, `1691`, `1692`, `1693`, `1695`, `1696`, `1698`, `1699`, `1700`, `1702`, `1703`, `1704`, `1706`, `1707`, `1708`, `1709`, `1712`, `1715`, 
`1716`, `1719`, `1722`, `1724`, `1725`, `1726`, `1727`, `1729`, `1730`, `1731`, `1733`, `1736`, `1738`, `1739`, `1742`, `1745`, `1746`, `1747`, `1749`, `1750`, `1752`, `1753`, `1754`, `1757`, `1758`, `1761`, `1764`, `1765`, `1766`, `1767`, `1768`, `1769`, `1771`, `1772`, `1774`, `1776`, `1777`, `1780`, `1783`, `1784`, `1787`, `1789`, `1791`, `1792`, `1794`, `1797`, `1798`, `1800`, `1803`, `1804`, `1807`, `1808`, `1810`, `1812`, `1814`, `1815`, `1817`, `1819`, `1820`, `1822`, `1824`, `1825`, `1826`, `1827`, `1830`, `1832`, `1833`, `1836`, `1840`, `1843`, `1844`, `1846`, `1849`, `1851`, `1853`, `1854`, `1857`, `1859`, `1860`, `1861`, `1862`, `1863`, `1864`, `1865`, `1868`, `1869`, `1872`, `1873`, `1875`, `1877`, `1878`, `1879`, `1882`, `1884`, `1886`, `1888`, `1889`, `1892`, `1895`, `1898`, `1899`, `1901`, `1903`, `1904`, `1905`, `1907`, `1910`, `1912`, `1913`, `1914`, `1917`, `1919`, `1921`, `1924`, `1925`, `1926`, `1928`, `1931`, `1934`, `1936`, `1938`, `1939`, `1636`, `1942`, `1945`, `1947`, `1948`, `1949`, `1950`, `1952`, `1954`, `1956`, `1957`, `1959`, `1961`, `1963`, `1964`, `1965`, `1968`, `1969`, `1970`, `1971`, `1973`, `1974`, `1978`, `1980`, `1981`, `1983`, `1984`, `1987`, `1990`, `1991`, `1994`, `1995`, `1996`, `1997`, `1998`, `1999`, `2001`, `2003`, `2004`, `2006`, `2008`, `2010` |
</details>
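The label inventories listed above live on the trained components themselves. A minimal sketch of how one might inspect them, assuming the `ga_udv25_irishidt_trf` package has been installed from this repository (the pipeline also uses `experimental_*` components, so the `spacy-experimental` package is likely needed at load time):

```python
import spacy

# Assumption: the ga_udv25_irishidt_trf package is installed locally and
# spacy-experimental is available for the experimental_* components.
nlp = spacy.load("ga_udv25_irishidt_trf")

# Trainable components expose their label inventories via `.labels`.
for name in ("tagger", "morphologizer", "parser"):
    labels = nlp.get_pipe(name).labels
    print(f"{name}: {len(labels)} labels")
    print(labels[:5])  # e.g. XPOS tags, feature bundles, or dependency relations
```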
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_F` | 99.74 |
| `TOKEN_P` | 99.73 |
| `TOKEN_R` | 99.74 |
| `TOKEN_ACC` | 99.95 |
| `SENTS_F` | 97.57 |
| `SENTS_P` | 97.35 |
| `SENTS_R` | 97.78 |
| `TAG_ACC` | 93.34 |
| `POS_ACC` | 92.17 |
| `MORPH_ACC` | 68.98 |
| `DEP_UAS` | 83.61 |
| `DEP_LAS` | 74.65 |
| `LEMMA_ACC` | 89.81 |
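These scores follow spaCy's standard metric names (lower-cased keys in the scorer output). A hedged sketch of how comparable numbers could be reproduced with `Language.evaluate`, assuming a gold-annotated test set serialized as a `DocBin`; the file name below is a placeholder, not part of this repository:

```python
import spacy
from spacy.tokens import DocBin
from spacy.training import Example

nlp = spacy.load("ga_udv25_irishidt_trf")  # assumes the package is installed

# Placeholder path: a gold-standard .spacy file converted from the UD test split.
gold_docs = list(DocBin().from_disk("ga_idt-ud-test.spacy").get_docs(nlp.vocab))

# evaluate() re-runs the pipeline on unannotated copies and scores against the gold docs.
examples = [Example(nlp.make_doc(doc.text), doc) for doc in gold_docs]
scores = nlp.evaluate(examples)
print(scores["pos_acc"], scores["morph_acc"], scores["dep_uas"], scores["dep_las"])
```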
|
{"language": ["ga"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
|
explosion/ga_udv25_irishidt_trf
| null |
[
"spacy",
"token-classification",
"ga",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ga"
] |
TAGS
#spacy #token-classification #ga #license-cc-by-sa-4.0 #model-index #region-us
|
UD v2.5 benchmarking pipeline for UD\_Irish-IDT
### Label Scheme
View label scheme (1662 labels for 6 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (1662 labels for 6 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #token-classification #ga #license-cc-by-sa-4.0 #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (1662 labels for 6 components)",
"### Accuracy"
] |
token-classification
|
spacy
|
UD v2.5 benchmarking pipeline for UD_Croatian-SET
| Feature | Description |
| --- | --- |
| **Name** | `hr_udv25_croatianset_trf` |
| **Version** | `0.0.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `experimental_char_ner_tokenizer`, `transformer`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Components** | `experimental_char_ner_tokenizer`, `transformer`, `senter`, `tagger`, `morphologizer`, `parser`, `experimental_edit_tree_lemmatizer` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | [Universal Dependencies v2.5](https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-3105) (Zeman, Daniel; et al.) |
| **License** | `CC BY-SA 4.0` |
| **Author** | [Explosion](https://explosion.ai) |
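The default pipeline listed above runs tokenization, tagging, morphological analysis, dependency parsing and lemmatization in one pass. A minimal usage sketch, assuming the `hr_udv25_croatianset_trf` package is installed (the `experimental_*` components come from `spacy-experimental`); the sample sentence is only a placeholder:

```python
import spacy

# Assumption: hr_udv25_croatianset_trf and spacy-experimental are installed.
nlp = spacy.load("hr_udv25_croatianset_trf")

doc = nlp("Ovo je rečenica.")  # placeholder Croatian sentence
for token in doc:
    # tagger -> tag_, morphologizer -> pos_/morph, parser -> dep_/head, lemmatizer -> lemma_
    print(token.text, token.tag_, token.pos_, str(token.morph), token.dep_, token.lemma_)
```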
### Label Scheme
<details>
<summary>View label scheme (3855 labels for 6 components)</summary>
| Component | Labels |
| --- | --- |
| **`experimental_char_ner_tokenizer`** | `TOKEN` |
| **`senter`** | `I`, `S` |
| **`tagger`** | `Agcfpay`, `Agcfpdy`, `Agcfpgy`, `Agcfpiy`, `Agcfply`, `Agcfpny`, `Agcfsay`, `Agcfsdy`, `Agcfsgy`, `Agcfsiy`, `Agcfsly`, `Agcfsny`, `Agcmpay`, `Agcmpgy`, `Agcmpiy`, `Agcmply`, `Agcmpny`, `Agcmsayn`, `Agcmsdy`, `Agcmsgy`, `Agcmsiy`, `Agcmsly`, `Agcmsny`, `Agcnpdy`, `Agcnpgy`, `Agcnpny`, `Agcnsay`, `Agcnsdy`, `Agcnsgy`, `Agcnsiy`, `Agcnsly`, `Agcnsny`, `Agpfpay`, `Agpfpdy`, `Agpfpgy`, `Agpfpiy`, `Agpfply`, `Agpfpny`, `Agpfsay`, `Agpfsdy`, `Agpfsgy`, `Agpfsiy`, `Agpfsly`, `Agpfsny`, `Agpfsvy`, `Agpmpay`, `Agpmpdy`, `Agpmpgy`, `Agpmpiy`, `Agpmply`, `Agpmpny`, `Agpmpvy`, `Agpmsann`, `Agpmsany`, `Agpmsayn`, `Agpmsayy`, `Agpmsdy`, `Agpmsgn`, `Agpmsgy`, `Agpmsiy`, `Agpmsln`, `Agpmsly`, `Agpmsnn`, `Agpmsny`, `Agpmsvy`, `Agpnpay`, `Agpnpdy`, `Agpnpgy`, `Agpnpiy`, `Agpnply`, `Agpnpny`, `Agpnsay`, `Agpnsdy`, `Agpnsgn`, `Agpnsgy`, `Agpnsiy`, `Agpnsln`, `Agpnsly`, `Agpnsny`, `Agsfpay`, `Agsfpdy`, `Agsfpgy`, `Agsfpiy`, `Agsfply`, `Agsfpny`, `Agsfsay`, `Agsfsdy`, `Agsfsgy`, `Agsfsiy`, `Agsfsly`, `Agsfsny`, `Agsmpay`, `Agsmpdy`, `Agsmpgy`, `Agsmpiy`, `Agsmply`, `Agsmpny`, `Agsmpvy`, `Agsmsayn`, `Agsmsayy`, `Agsmsdy`, `Agsmsgy`, `Agsmsiy`, `Agsmsly`, `Agsmsny`, `Agsnpay`, `Agsnpgy`, `Agsnply`, `Agsnpny`, `Agsnsay`, `Agsnsdy`, `Agsnsiy`, `Agsnsly`, `Agsnsny`, `Appfpay`, `Appfpdy`, `Appfpgy`, `Appfpiy`, `Appfply`, `Appfpny`, `Appfsay`, `Appfsgy`, `Appfsiy`, `Appfsly`, `Appfsny`, `Appmpay`, `Appmpdy`, `Appmpgy`, `Appmpiy`, `Appmply`, `Appmpny`, `Appmsann`, `Appmsany`, `Appmsayn`, `Appmsayy`, `Appmsdy`, `Appmsgn`, `Appmsgy`, `Appmsiy`, `Appmsly`, `Appmsnn`, `Appmsny`, `Appnpay`, `Appnpdy`, `Appnpgy`, `Appnpiy`, `Appnply`, `Appnpny`, `Appnsay`, `Appnsgy`, `Appnsly`, `Appnsny`, `Aspfpay`, `Aspfpgy`, `Aspfply`, `Aspfpny`, `Aspfsay`, `Aspfsdy`, `Aspfsgy`, `Aspfsiy`, `Aspfsly`, `Aspfsny`, `Aspmpay`, `Aspmpgy`, `Aspmply`, `Aspmpny`, `Aspmsann`, `Aspmsdy`, `Aspmsgn`, `Aspmsgy`, `Aspmsiy`, `Aspmsln`, `Aspmsly`, `Aspmsnn`, `Aspnpay`, `Aspnpgy`, `Aspnpny`, `Aspnsay`, `Aspnsdn`, `Aspnsgn`, `Aspnsgy`, `Aspnsly`, `Aspnsny`, `Cc`, `Cs`, `I`, `Mdc`, `Mdm`, `Mdo`, `Mds`, `Mlc`, `Mlc--g`, `Mlc--i`, `Mlc--l`, `Mlcf-a`, `Mlcf-d`, `Mlcf-g`, `Mlcf-n`, `Mlcfsa`, `Mlcfsd`, `Mlcfsg`, `Mlcfsi`, `Mlcfsl`, `Mlcfsn`, `Mlcm-a`, `Mlcm-g`, `Mlcm-l`, `Mlcm-n`, `Mlcmpl`, `Mlcmpn`, `Mlcmsan`, `Mlcmsay`, `Mlcmsg`, `Mlcmsi`, `Mlcmsl`, `Mlcmsn`, `Mlcn-n`, `Mlcnsa`, `Mlcnsg`, `Mlcnsl`, `Mlcnsn`, `Mlofpa`, `Mlofpd`, `Mlofpg`, `Mlofpi`, `Mlofpl`, `Mlofpn`, `Mlofsa`, `Mlofsd`, `Mlofsg`, `Mlofsi`, `Mlofsl`, `Mlofsn`, `Mlompa`, `Mlompd`, `Mlompg`, `Mlompi`, `Mlompl`, `Mlompn`, `Mlomsan`, `Mlomsay`, `Mlomsg`, `Mlomsi`, `Mlomsl`, `Mlomsn`, `Mlomsv`, `Mlonpa`, `Mlonpg`, `Mlonpl`, `Mlonpn`, `Mlonsa`, `Mlonsd`, `Mlonsg`, `Mlonsi`, `Mlonsl`, `Mlonsn`, `Mls`, `Mlsf-a`, `Mlsf-d`, `Mlsf-g`, `Mlsf-i`, `Mlsf-l`, `Mlsf-n`, `Mlsm-a`, `Mlsm-g`, `Mlsm-l`, `Mlsm-n`, `Mlsn-n`, `Mro`, `Ncfpa`, `Ncfpd`, `Ncfpg`, `Ncfpi`, `Ncfpl`, `Ncfpn`, `Ncfpv`, `Ncfsa`, `Ncfsd`, `Ncfsg`, `Ncfsi`, `Ncfsl`, `Ncfsn`, `Ncfsv`, `Ncmpa`, `Ncmpd`, `Ncmpg`, `Ncmpi`, `Ncmpl`, `Ncmpn`, `Ncmpv`, `Ncmsan`, `Ncmsay`, `Ncmsd`, `Ncmsg`, `Ncmsi`, `Ncmsl`, `Ncmsn`, `Ncmsv`, `Ncnpa`, `Ncnpd`, `Ncnpg`, `Ncnpi`, `Ncnpl`, `Ncnpn`, `Ncnsa`, `Ncnsd`, `Ncnsg`, `Ncnsi`, `Ncnsl`, `Ncnsn`, `Ncnsv`, `Npfpa`, `Npfpg`, `Npfpl`, `Npfpn`, `Npfsa`, `Npfsd`, `Npfsg`, `Npfsi`, `Npfsl`, `Npfsn`, `Npmpa`, `Npmpd`, `Npmpg`, `Npmpi`, `Npmpl`, `Npmpn`, `Npmsan`, `Npmsay`, `Npmsd`, `Npmsg`, `Npmsi`, `Npmsl`, `Npmsn`, `Npmsv`, `Npnpg`, `Npnpn`, `Npnsa`, `Npnsd`, `Npnsg`, `Npnsi`, `Npnsl`, `Npnsn`, `Pd-fpa`, 
`Pd-fpd`, `Pd-fpg`, `Pd-fpi`, `Pd-fpl`, `Pd-fpn`, `Pd-fsa`, `Pd-fsd`, `Pd-fsg`, `Pd-fsi`, `Pd-fsl`, `Pd-fsn`, `Pd-mpa`, `Pd-mpd`, `Pd-mpg`, `Pd-mpi`, `Pd-mpl`, `Pd-mpn`, `Pd-msan`, `Pd-msay`, `Pd-msd`, `Pd-msg`, `Pd-msi`, `Pd-msl`, `Pd-msn`, `Pd-npa`, `Pd-npd`, `Pd-npg`, `Pd-npi`, `Pd-npn`, `Pd-nsa`, `Pd-nsd`, `Pd-nsg`, `Pd-nsi`, `Pd-nsl`, `Pd-nsn`, `Pi-fpa`, `Pi-fpd`, `Pi-fpg`, `Pi-fpi`, `Pi-fpl`, `Pi-fpn`, `Pi-fsa`, `Pi-fsd`, `Pi-fsg`, `Pi-fsi`, `Pi-fsl`, `Pi-fsn`, `Pi-mpa`, `Pi-mpd`, `Pi-mpg`, `Pi-mpi`, `Pi-mpl`, `Pi-mpn`, `Pi-msan`, `Pi-msay`, `Pi-msd`, `Pi-msg`, `Pi-msi`, `Pi-msl`, `Pi-msn`, `Pi-npa`, `Pi-npd`, `Pi-npg`, `Pi-npi`, `Pi-npl`, `Pi-npn`, `Pi-nsa`, `Pi-nsd`, `Pi-nsg`, `Pi-nsi`, `Pi-nsl`, `Pi-nsn`, `Pi3m-a`, `Pi3m-d`, `Pi3m-g`, `Pi3m-i`, `Pi3m-n`, `Pi3n-a`, `Pi3n-d`, `Pi3n-g`, `Pi3n-i`, `Pi3n-l`, `Pi3n-n`, `Pp1-pa`, `Pp1-pd`, `Pp1-pg`, `Pp1-pi`, `Pp1-pl`, `Pp1-pn`, `Pp1-sa`, `Pp1-sd`, `Pp1-si`, `Pp1-sl`, `Pp1-sn`, `Pp2-pa`, `Pp2-pd`, `Pp2-pg`, `Pp2-pn`, `Pp2-sa`, `Pp2-sd`, `Pp2-sg`, `Pp2-sl`, `Pp2-sn`, `Pp2-sv`, `Pp3-pa`, `Pp3-pd`, `Pp3-pg`, `Pp3-pi`, `Pp3-pl`, `Pp3fpn`, `Pp3fsa`, `Pp3fsd`, `Pp3fsg`, `Pp3fsi`, `Pp3fsl`, `Pp3fsn`, `Pp3mpn`, `Pp3msa`, `Pp3msd`, `Pp3msg`, `Pp3msi`, `Pp3msl`, `Pp3msn`, `Pp3npn`, `Pp3nsa`, `Pp3nsi`, `Pp3nsn`, `Pq-fpa`, `Pq-fpn`, `Pq-fsa`, `Pq-fsl`, `Pq-fsn`, `Pq-mpn`, `Pq-msn`, `Pq-nsn`, `Pq3m-d`, `Pq3m-n`, `Pq3n-a`, `Pq3n-l`, `Pq3n-n`, `Ps1fpa`, `Ps1fpd`, `Ps1fpg`, `Ps1fpl`, `Ps1fpn`, `Ps1fsa`, `Ps1fsd`, `Ps1fsg`, `Ps1fsi`, `Ps1fsl`, `Ps1fsn`, `Ps1fsv`, `Ps1mpa`, `Ps1mpd`, `Ps1mpg`, `Ps1mpi`, `Ps1mpl`, `Ps1mpn`, `Ps1mpv`, `Ps1msan`, `Ps1msay`, `Ps1msd`, `Ps1msg`, `Ps1msi`, `Ps1msl`, `Ps1msn`, `Ps1msv`, `Ps1npn`, `Ps1nsa`, `Ps1nsg`, `Ps1nsi`, `Ps1nsl`, `Ps1nsn`, `Ps2fpa`, `Ps2fpl`, `Ps2fpn`, `Ps2fsa`, `Ps2fsd`, `Ps2fsg`, `Ps2fsn`, `Ps2mpa`, `Ps2mpg`, `Ps2mpl`, `Ps2mpn`, `Ps2msan`, `Ps2msd`, `Ps2msg`, `Ps2msi`, `Ps2msl`, `Ps2msn`, `Ps2npn`, `Ps2nsa`, `Ps2nsg`, `Ps2nsi`, `Ps2nsl`, `Ps2nsn`, `Ps3fpa`, `Ps3fpg`, `Ps3fpl`, `Ps3fpn`, `Ps3fsa`, `Ps3fsd`, `Ps3fsg`, `Ps3fsi`, `Ps3fsl`, `Ps3fsn`, `Ps3mpa`, `Ps3mpd`, `Ps3mpg`, `Ps3mpi`, `Ps3mpl`, `Ps3mpn`, `Ps3msan`, `Ps3msay`, `Ps3msd`, `Ps3msg`, `Ps3msi`, `Ps3msl`, `Ps3msn`, `Ps3npa`, `Ps3npg`, `Ps3npl`, `Ps3npn`, `Ps3nsa`, `Ps3nsg`, `Ps3nsi`, `Ps3nsl`, `Ps3nsn`, `Px--sa`, `Px--sd`, `Px--sg`, `Px--si`, `Px--sl`, `Px-fpa`, `Px-fpg`, `Px-fpi`, `Px-fpl`, `Px-fsa`, `Px-fsd`, `Px-fsg`, `Px-fsi`, `Px-fsl`, `Px-mpa`, `Px-mpd`, `Px-mpg`, `Px-mpi`, `Px-mpl`, `Px-msan`, `Px-msay`, `Px-msd`, `Px-msg`, `Px-msi`, `Px-msl`, `Px-npa`, `Px-npg`, `Px-npi`, `Px-npl`, `Px-nsa`, `Px-nsg`, `Px-nsi`, `Px-nsl`, `Qo`, `Qq`, `Qr`, `Qz`, `Rgc`, `Rgp`, `Rgs`, `Rr`, `Sa`, `Sd`, `Sg`, `Si`, `Sl`, `Vaa1p`, `Vaa1s`, `Vaa2p`, `Vaa2s`, `Vaa3p`, `Vaa3s`, `Vae3s`, `Vam2p`, `Van`, `Vap-pf`, `Vap-pm`, `Vap-pn`, `Vap-sf`, `Vap-sm`, `Vap-sn`, `Var1p`, `Var1s`, `Var2p`, `Var2s`, `Var3p`, `Var3s`, `Vma3s`, `Vmm1p`, `Vmm2p`, `Vmm2s`, `Vmn`, `Vmp-pf`, `Vmp-pm`, `Vmp-pn`, `Vmp-sf`, `Vmp-sm`, `Vmp-sn`, `Vmr1p`, `Vmr1s`, `Vmr2p`, `Vmr2s`, `Vmr3p`, `Vmr3s`, `X`, `Xf`, `Y`, `Z` |
| **`morphologizer`** | `Case=Nom\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Loc\|POS=ADP`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `POS=SCONJ`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Acc\|POS=ADP`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=NOUN`, `POS=PUNCT`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `POS=ADV`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=ADV`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `NumType=Card\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Gen\|POS=ADP`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=CCONJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Gender=Masc\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `POS=VERB\|VerbForm=Inf`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `POS=X`, `Case=Gen\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `NumType=Ord\|POS=ADJ`, `Gender=Neut\|Number=Sing\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Gender=Masc\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, 
`Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `POS=PART\|Polarity=Neg`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|POS=ADP`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Pres\|VerbForm=Fin`, `Degree=Pos\|POS=ADV\|PronType=Dem`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `POS=PART`, `Gender=Neut\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Degree=Cmp\|POS=ADV`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=NOUN`, 
`Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=NOUN`, `POS=ADV\|Tense=Past\|VerbForm=Conv`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Gender=Masc\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Dat\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=VERB\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Tense=Pres\|VerbForm=Fin`, `POS=AUX\|VerbForm=Inf`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Gender=Fem\|Number=Sing\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Degree=Pos\|POS=ADV\|PronType=Ind`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Degree=Pos\|POS=ADV\|PronType=Neg`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=PROPN`, 
`Case=Acc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Degree=Pos\|POS=ADV\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `POS=NOUN`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=NOUN`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=DET`, `Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Degree=Pos\|POS=ADV\|PronType=Tot`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, 
`Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=NOUN`, `POS=DET\|Polarity=Neg`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=1\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Gender=Fem\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `NumType=Ord\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Ins\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|POS=ADP`, `Degree=Sup\|POS=ADV`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `POS=ADV\|Tense=Pres\|VerbForm=Conv`, `Case=Ins\|POS=PRON\|PronType=Prs\|Reflex=Yes`, 
`Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Gender=Neut\|Number=Plur\|POS=VERB\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Dat\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `NumType=Mult\|POS=NUM`, `Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Gen\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, 
`Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Gender=Fem\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=NOUN`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Gender=Masc\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Gender=Neut\|Number=Plur\|POS=AUX\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, 
`Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=PROPN`, `NumType=Mult\|POS=SYM`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, 
`Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Dat\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `POS=SYM`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Nom\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, 
`Case=Gen\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=1\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=NUM`, `Case=Gen\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=NUM`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `POS=DET`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PROPN`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Nom\|Gender=Masc\|POS=PRON\|PronType=Tot`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Acc\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Case=Gen\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=PROPN`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, 
`Animacy=Inan\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=PROPN`, `Case=Gen\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=PRON\|PronType=Neg`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=3\|Tense=Past\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, 
`Case=Loc\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|NumType=Mult\|POS=NUM`, `Mood=Ind\|Number=Plur\|POS=AUX\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `POS=ADJ`, `Case=Nom\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=NUM`, `Case=Loc\|Number=Plur\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Ins\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Dat\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=NUM`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Mood=Imp\|Number=Plur\|POS=VERB\|Person=1\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, 
`Case=Gen\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=NUM`, `Case=Dat\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Degree=Cmp\|POS=DET`, `Case=Loc\|Gender=Fem\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Loc\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Dem`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Loc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Loc\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=DET`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `POS=PROPN`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, 
`Case=Nom\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Acc\|Gender=Masc\|POS=PRON\|PronType=Neg`, `Case=Acc\|Gender=Masc\|NumType=Mult\|POS=NUM`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Acc\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Dat\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Neut\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Case=Dat\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Loc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Loc\|Gender=Masc\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Number=Plur\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=DET`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Mood=Cnd\|Number=Plur\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin`, `Case=Loc\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Dat\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Tot`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Int,Rel`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Gen\|POS=PRON\|PronType=Prs\|Reflex=Yes`, `Case=Ins\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ\|Poss=Yes`, 
`Case=Dat\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Gen\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Nom\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, `POS=INTJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `POS=PART\|Polarity=Pos`, `Case=Acc\|Gender=Neut\|POS=PRON\|PronType=Tot`, `Mood=Ind\|Number=Sing\|POS=VERB\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Case=Dat\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET`, `Case=Nom\|Gender=Masc\|NumType=Card\|Number=Sing\|POS=DET`, `Case=Dat\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Dem`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Loc\|Gender=Masc\|Gender[psor]=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Imp\|Number=Sing\|POS=VERB\|Person=2\|VerbForm=Fin`, `Case=Nom\|Gender=Neut\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Loc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Acc\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Ind`, 
`Case=Loc\|Definite=Def\|Degree=Cmp\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Animacy=Anim\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Dat\|Definite=Def\|Degree=Cmp\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Case=Acc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Nom\|Gender=Neut\|NumType=Mult\|POS=NUM`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=PRON\|Person=3\|PronType=Prs`, `Case=Acc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Nom\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Voc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Voc\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Gender[psor]=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Voc\|Gender=Fem\|Number=Plur\|POS=NOUN`, `Case=Ins\|Gender=Fem\|Gender[psor]=Masc,Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Gen\|Number=Plur\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Loc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Mood=Imp\|Number=Plur\|POS=AUX\|Person=2\|VerbForm=Fin`, `Case=Ins\|Gender=Neut\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Ins\|Gender=Masc\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Ins\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Fem\|Number=Plur\|POS=PROPN`, 
`Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Gen\|Gender=Masc\|POS=PRON\|PronType=Int,Rel`, `Case=Loc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Gen\|Gender=Neut\|POS=PRON\|PronType=Neg`, `Case=Gen\|Gender=Neut\|NumType=Card\|Number=Sing\|POS=NUM`, `Case=Nom\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Neg`, `Case=Acc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Loc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Ins\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=NUM`, `Case=Dat\|Gender=Neut\|POS=PRON\|PronType=Int,Rel`, `Case=Loc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Plur\|POS=NUM`, `Case=Gen\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=DET`, `Case=Gen\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Loc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Voc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Plur\|POS=NOUN`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Anim\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Fem\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=3\|Tense=Imp\|VerbForm=Fin`, `Case=Acc\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Tense=Pres\|VerbForm=Fin`, `Mood=Ind\|Number=Sing\|POS=AUX\|Person=2\|Polarity=Neg\|Tense=Pres\|VerbForm=Fin`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ`, `Case=Gen\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Degree=Pos\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Dat\|Gender=Neut\|Number=Plur\|POS=DET\|PronType=Ind`, `Case=Loc\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Ins\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Loc\|Gender=Masc\|NumType=Mult\|POS=NUM`, `Case=Nom\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Voc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Voc\|Gender=Masc\|Number=Sing\|POS=PROPN`, `Case=Acc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Fem\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|POS=PRON\|PronType=Ind`, `Case=Ins\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ\|Poss=Yes`, `Case=Nom\|Gender=Masc\|NumType=Mult\|POS=NUM`, 
`Case=Nom\|Gender=Neut\|Number=Plur\|POS=PROPN`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Loc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Ind`, `Case=Voc\|Definite=Def\|Degree=Pos\|Gender=Fem\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Definite=Def\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Gen\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Ins\|Gender=Masc\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Gender=Neut\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Case=Voc\|Gender=Fem\|Number=Sing\|POS=NOUN`, `Case=Gen\|Gender=Fem\|Number=Plur\|POS=DET\|PronType=Tot`, `Case=Ins\|Definite=Def\|Degree=Sup\|Gender=Neut\|Number=Sing\|POS=ADJ`, `Animacy=Inan\|Case=Acc\|Definite=Def\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=DET`, `Mood=Ind\|Number=Sing\|POS=DET\|Person=1\|Tense=Pres\|VerbForm=Fin`, `Case=Loc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, `Case=Nom\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Ind`, `Case=Nom\|Gender=Fem\|Number=Sing\|Number[psor]=Sing\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Nom\|Degree=Pos\|Gender=Neut\|Number=Plur\|POS=DET`, `Case=Gen\|Gender=Masc\|NumType=Mult\|POS=NUM`, `Gender=Neut\|Number=Sing\|POS=DET\|Tense=Past\|VerbForm=Part\|Voice=Act`, `Case=Ins\|Number=Sing\|POS=PRON\|Person=1\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Fem\|Number=Sing\|POS=DET\|PronType=Tot`, `Case=Voc\|Number=Sing\|POS=PRON\|Person=2\|PronType=Prs`, `Case=Voc\|Gender=Neut\|Number=Sing\|POS=NOUN`, `Case=Dat\|Gender=Fem\|NumType=Card\|Number=Plur\|POS=NUM`, `Case=Dat\|Definite=Def\|Degree=Sup\|Gender=Fem\|Number=Plur\|POS=ADJ`, `Case=Acc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Gender=Neut\|Number=Plur\|POS=DET\|Poss=Yes\|PronType=Prs\|Reflex=Yes`, `Mood=Cnd\|Number=Sing\|POS=AUX\|Person=2\|Tense=Past\|VerbForm=Fin`, `Case=Dat\|Gender=Neut\|POS=PRON\|PronType=Ind`, `Animacy=Anim\|Case=Acc\|Definite=Ind\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ\|VerbForm=Part\|Voice=Pass`, `Case=Voc\|Definite=Def\|Degree=Sup\|Gender=Masc\|Number=Plur\|POS=ADJ`, `Case=Voc\|Gender=Masc\|Number=Plur\|Number[psor]=Sing\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Dat\|Gender=Masc\|Number=Sing\|POS=DET\|PronType=Tot`, `Animacy=Inan\|Case=Acc\|Gender=Masc\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Int,Rel`, `Case=Acc\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=1\|Poss=Yes\|PronType=Prs`, `Case=Acc\|Gender=Neut\|Number=Sing\|POS=DET\|PronType=Neg`, `Case=Loc\|Gender=Neut\|POS=PRON\|PronType=Tot`, `Case=Acc\|Gender=Fem\|Number=Sing\|POS=DET\|Poss=Yes\|PronType=Neg`, `Case=Dat\|Gender=Fem\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=3\|Poss=Yes\|PronType=Prs`, `Case=Ins\|Degree=Pos\|Gender=Masc\|Number=Sing\|POS=ADJ`, 
`Case=Loc\|Gender=Masc\|Number=Plur\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs`, `Case=Gen\|Gender=Neut\|Number=Sing\|Number[psor]=Plur\|POS=DET\|Person=2\|Poss=Yes\|PronType=Prs` |
| **`parser`** | `ROOT`, `acl`, `advcl`, `advmod`, `advmod:emph`, `amod`, `appos`, `aux`, `aux:pass`, `case`, `cc`, `ccomp`, `compound`, `conj`, `cop`, `csubj`, `csubj:pass`, `dep`, `det`, `discourse`, `dislocated`, `expl`, `expl:pv`, `fixed`, `flat`, `iobj`, `list`, `mark`, `nmod`, `nsubj`, `nsubj:pass`, `nummod`, `obj`, `obl`, `orphan`, `parataxis`, `punct`, `vocative`, `xcomp` |
| **`experimental_edit_tree_lemmatizer`** | `1`, `3`, `5`, `7`, `9`, `10`, `12`, `14`, `16`, `17`, `19`, `21`, `23`, `25`, `27`, `29`, `31`, `33`, `35`, `38`, `40`, `41`, `43`, `45`, `47`, `49`, `51`, `53`, `56`, `58`, `60`, `62`, `64`, `65`, `67`, `69`, `71`, `73`, `75`, `76`, `79`, `82`, `84`, `86`, `88`, `90`, `92`, `94`, `96`, `98`, `100`, `102`, `104`, `106`, `110`, `112`, `114`, `116`, `118`, `120`, `124`, `126`, `128`, `129`, `130`, `132`, `134`, `136`, `139`, `141`, `142`, `145`, `148`, `150`, `151`, `153`, `154`, `156`, `158`, `159`, `161`, `162`, `164`, `165`, `166`, `168`, `170`, `172`, `175`, `176`, `177`, `179`, `181`, `186`, `188`, `189`, `192`, `194`, `197`, `199`, `201`, `202`, `204`, `205`, `208`, `210`, `212`, `214`, `217`, `219`, `221`, `223`, `225`, `227`, `229`, `231`, `233`, `235`, `237`, `241`, `243`, `245`, `247`, `249`, `251`, `253`, `255`, `173`, `256`, `257`, `259`, `262`, `264`, `266`, `269`, `270`, `271`, `273`, `275`, `276`, `279`, `280`, `281`, `282`, `283`, `284`, `286`, `288`, `290`, `291`, `293`, `295`, `296`, `298`, `300`, `302`, `303`, `304`, `305`, `307`, `308`, `309`, `311`, `313`, `315`, `317`, `318`, `321`, `323`, `325`, `326`, `329`, `331`, `333`, `335`, `336`, `337`, `338`, `339`, `340`, `341`, `343`, `226`, `346`, `347`, `349`, `350`, `351`, `353`, `355`, `358`, `360`, `363`, `365`, `366`, `368`, `371`, `374`, `376`, `379`, `381`, `382`, `384`, `385`, `387`, `389`, `391`, `393`, `396`, `398`, `400`, `402`, `403`, `405`, `406`, `408`, `410`, `412`, `415`, `418`, `419`, `421`, `425`, `426`, `428`, `429`, `431`, `432`, `433`, `435`, `436`, `438`, `439`, `440`, `441`, `442`, `444`, `446`, `448`, `450`, `452`, `454`, `456`, `457`, `459`, `461`, `462`, `465`, `466`, `468`, `469`, `471`, `473`, `475`, `477`, `479`, `481`, `483`, `484`, `485`, `487`, `488`, `489`, `492`, `493`, `494`, `495`, `497`, `500`, `501`, `502`, `507`, `508`, `512`, `513`, `516`, `517`, `519`, `520`, `521`, `524`, `525`, `526`, `528`, `529`, `531`, `532`, `534`, `535`, `536`, `538`, `539`, `543`, `545`, `546`, `547`, `548`, `550`, `552`, `554`, `555`, `557`, `559`, `561`, `564`, `566`, `567`, `570`, `571`, `572`, `573`, `575`, `576`, `579`, `583`, `584`, `585`, `586`, `588`, `590`, `592`, `594`, `597`, `598`, `601`, `603`, `605`, `606`, `608`, `610`, `613`, `614`, `617`, `619`, `622`, `623`, `624`, `627`, `629`, `631`, `633`, `635`, `637`, `639`, `641`, `643`, `644`, `646`, `647`, `648`, `650`, `652`, `656`, `657`, `661`, `663`, `664`, `666`, `669`, `670`, `672`, `674`, `676`, `678`, `680`, `681`, `682`, `684`, `685`, `686`, `687`, `688`, `689`, `690`, `692`, `693`, `695`, `696`, `698`, `699`, `701`, `702`, `705`, `706`, `707`, `708`, `710`, `712`, `714`, `715`, `716`, `717`, `718`, `719`, `721`, `722`, `723`, `724`, `727`, `728`, `729`, `732`, `733`, `737`, `738`, `739`, `740`, `741`, `742`, `744`, `745`, `748`, `750`, `752`, `754`, `756`, `758`, `760`, `762`, `763`, `764`, `765`, `768`, `770`, `772`, `773`, `774`, `775`, `777`, `778`, `780`, `782`, `783`, `786`, `788`, `789`, `790`, `792`, `795`, `796`, `798`, `801`, `802`, `805`, `807`, `809`, `811`, `813`, `816`, `818`, `820`, `822`, `824`, `825`, `826`, `829`, `833`, `835`, `838`, `840`, `841`, `842`, `843`, `845`, `847`, `849`, `851`, `854`, `855`, `857`, `859`, `860`, `861`, `863`, `865`, `866`, `867`, `868`, `869`, `872`, `874`, `876`, `878`, `879`, `882`, `884`, `885`, `886`, `887`, `890`, `891`, `892`, `893`, `894`, `895`, `896`, `898`, `899`, `901`, `903`, `904`, `906`, `907`, `909`, `910`, `911`, `912`, `913`, `915`, 
`916`, `918`, `920`, `922`, `923`, `924`, `925`, `926`, `927`, `929`, `930`, `933`, `934`, `935`, `937`, `938`, `940`, `942`, `944`, `945`, `946`, `947`, `948`, `949`, `951`, `953`, `955`, `958`, `959`, `960`, `961`, `962`, `964`, `965`, `967`, `969`, `974`, `976`, `977`, `979`, `980`, `981`, `983`, `985`, `986`, `988`, `989`, `990`, `991`, `993`, `994`, `996`, `998`, `1001`, `1002`, `1004`, `1007`, `1008`, `1010`, `1011`, `1014`, `1015`, `1017`, `1018`, `383`, `1020`, `1021`, `1022`, `1023`, `1024`, `1025`, `1027`, `1030`, `1032`, `1033`, `1035`, `1037`, `1038`, `1039`, `1041`, `1043`, `1044`, `1045`, `1046`, `1047`, `1048`, `1049`, `1051`, `1053`, `1055`, `1056`, `1059`, `1060`, `1063`, `1065`, `1066`, `1068`, `1069`, `1070`, `1072`, `1073`, `1074`, `1075`, `1077`, `1080`, `1081`, `1084`, `1085`, `1087`, `1089`, `1091`, `1092`, `1093`, `1094`, `1096`, `1097`, `1099`, `1101`, `1102`, `1103`, `1104`, `1105`, `1107`, `1109`, `1110`, `1113`, `1114`, `1115`, `1117`, `1118`, `1119`, `1120`, `1121`, `1122`, `1123`, `1124`, `1125`, `1126`, `1128`, `1131`, `1133`, `1134`, `1135`, `1137`, `1138`, `1140`, `1142`, `1143`, `1144`, `1146`, `1148`, `1149`, `1150`, `1151`, `1153`, `1154`, `1156`, `1158`, `1159`, `1160`, `1163`, `1165`, `1169`, `1172`, `1174`, `1177`, `1179`, `1181`, `1182`, `1184`, `1187`, `1189`, `1190`, `1193`, `1195`, `1196`, `1198`, `1199`, `1200`, `1203`, `1204`, `1205`, `1206`, `1207`, `1208`, `1209`, `1210`, `1211`, `1213`, `1214`, `1215`, `1216`, `1217`, `1218`, `1220`, `1222`, `1223`, `1225`, `1229`, `1230`, `1232`, `1233`, `1237`, `1238`, `1239`, `1242`, `1243`, `1244`, `1246`, `1249`, `1250`, `1251`, `1252`, `1255`, `1256`, `1257`, `1260`, `1262`, `1264`, `1266`, `1268`, `1270`, `1272`, `1274`, `1276`, `1277`, `1278`, `1279`, `1281`, `1282`, `1283`, `1285`, `1288`, `1290`, `1292`, `1294`, `1296`, `1298`, `1299`, `1300`, `1302`, `1303`, `1304`, `1306`, `1307`, `1309`, `1311`, `1313`, `1314`, `1315`, `1316`, `1318`, `1319`, `1320`, `1321`, `1322`, `1323`, `1325`, `1326`, `1327`, `1329`, `1331`, `1333`, `1334`, `1335`, `1337`, `1338`, `1340`, `1341`, `1342`, `1343`, `1345`, `1346`, `1349`, `1351`, `1353`, `1355`, `1357`, `1358`, `1361`, `1362`, `1363`, `1364`, `1365`, `1366`, `1367`, `1368`, `1369`, `1372`, `1324`, `1373`, `1374`, `1375`, `1376`, `1378`, `1380`, `1382`, `1383`, `1386`, `1388`, `1390`, `1392`, `1393`, `1394`, `1395`, `1396`, `1397`, `1398`, `1400`, `1401`, `1403`, `1404`, `1406`, `1407`, `1410`, `1411`, `1412`, `1416`, `1418`, `1419`, `1421`, `1422`, `1423`, `1425`, `1426`, `1427`, `1430`, `1431`, `1433`, `1434`, `1435`, `1436`, `1437`, `1438`, `1439`, `1441`, `1442`, `1443`, `1444`, `1445`, `1447`, `1448`, `1451`, `1452`, `1453`, `1454`, `1456`, `1457`, `1458`, `1460`, `1461`, `1463`, `1464`, `1467`, `1469`, `1471`, `1473`, `1475`, `1476`, `1478`, `1479`, `1480`, `1482`, `1484`, `1485`, `1488`, `1489`, `1490`, `1491`, `1494`, `1495`, `1496`, `1498`, `1499`, `1501`, `1503`, `1506`, `1507`, `1508`, `1510`, `1512`, `1514`, `1516`, `1519`, `1520`, `1523`, `1524`, `1526`, `1527`, `1528`, `1529`, `1530`, `1531`, `1532`, `1533`, `1534`, `1536`, `1537`, `1538`, `1540`, `1541`, `1542`, `1543`, `1544`, `1545`, `1546`, `1549`, `1550`, `1552`, `1553`, `1554`, `1556`, `1557`, `1558`, `1559`, `1560`, `1561`, `1562`, `1564`, `1566`, `1568`, `1569`, `1570`, `1573`, `1575`, `1577`, `1578`, `1579`, `1581`, `1582`, `1583`, `1584`, `1585`, `1587`, `1589`, `1591`, `1592`, `1593`, `1595`, `1597`, `1599`, `1601`, `1603`, `1605`, `1606`, `1608`, `1609`, `1611`, `1612`, `1613`, `1614`, 
`1615`, `1616`, `1617`, `1618`, `1620`, `1623`, `1624`, `1625`, `1627`, `1629`, `1630`, `1631`, `1632`, `1633`, `1635`, `1637`, `1640`, `1641`, `1642`, `1643`, `1644`, `1645`, `1647`, `1648`, `1650`, `1651`, `1652`, `1653`, `1654`, `1655`, `1657`, `1658`, `1659`, `1660`, `1661`, `1663`, `1666`, `1667`, `1668`, `1669`, `1670`, `1671`, `1672`, `1673`, `1674`, `1675`, `1676`, `1677`, `1679`, `1680`, `1683`, `1685`, `1686`, `160`, `1687`, `1689`, `1691`, `1693`, `1694`, `1695`, `1696`, `1697`, `1698`, `1699`, `1700`, `1702`, `1704`, `1705`, `1707`, `1708`, `1709`, `1710`, `1712`, `1713`, `1714`, `1716`, `1718`, `1720`, `1722`, `1724`, `1725`, `1726`, `1417`, `1727`, `1728`, `1729`, `1730`, `1732`, `1734`, `1735`, `1736`, `1738`, `1740`, `1741`, `1743`, `1744`, `1745`, `1747`, `1749`, `1752`, `1754`, `1756`, `1759`, `1761`, `1764`, `1766`, `1768`, `1770`, `1772`, `1774`, `1775`, `1776`, `1778`, `1779`, `1781`, `1783`, `1784`, `1786`, `1788`, `1789`, `1790`, `1792`, `1794`, `1795`, `1797`, `1798`, `1799`, `1801`, `1802`, `1805`, `1807`, `1809`, `1810`, `1811`, `1812`, `1814`, `1815`, `1816`, `1818`, `1819`, `1820`, `1821`, `1823`, `1824`, `1825`, `1826`, `1827`, `1828`, `1829`, `1831`, `1833`, `1834`, `1836`, `1838`, `1839`, `1841`, `1842`, `1845`, `1847`, `1850`, `1851`, `1853`, `1854`, `1856`, `1857`, `1858`, `1860`, `1861`, `1862`, `1864`, `1865`, `1866`, `1867`, `1869`, `1870`, `1871`, `1872`, `1873`, `1875`, `1878`, `1879`, `1880`, `1881`, `1883`, `1885`, `1886`, `1888`, `1890`, `1891`, `1892`, `1893`, `1894`, `1895`, `1896`, `1898`, `1900`, `1901`, `1908`, `1910`, `1911`, `1912`, `1913`, `1915`, `1916`, `1917`, `1919`, `1920`, `1921`, `1922`, `1924`, `1925`, `1926`, `1927`, `1928`, `1930`, `1931`, `1932`, `1934`, `1935`, `1936`, `1937`, `1938`, `1939`, `1941`, `1942`, `1944`, `542`, `1946`, `1947`, `1949`, `1951`, `1952`, `1953`, `1954`, `1955`, `1957`, `1959`, `1960`, `1963`, `1964`, `1965`, `1966`, `1967`, `1969`, `1971`, `1973`, `1974`, `1975`, `1977`, `1979`, `1981`, `1982`, `1984`, `1985`, `1986`, `1988`, `1989`, `1990`, `1991`, `1993`, `1994`, `1996`, `1998`, `1999`, `2000`, `2001`, `2003`, `2005`, `2006`, `2007`, `2009`, `2010`, `2012`, `2013`, `314`, `2015`, `2016`, `2017`, `2019`, `2021`, `2023`, `2025`, `2026`, `2028`, `2029`, `2031`, `2034`, `2036`, `2038`, `2039`, `2041`, `1565`, `2043`, `2045`, `2046`, `2047`, `2049`, `2051`, `2053`, `2054`, `2055`, `2057`, `2059`, `2060`, `2062`, `2064`, `2065`, `2067`, `2068`, `2070`, `2071`, `2072`, `2073`, `2074`, `2075`, `2078`, `2079`, `2080`, `2082`, `2085`, `2086`, `2089`, `2091`, `2092`, `2096`, `2098`, `2100`, `2102`, `2103`, `2104`, `2105`, `2106`, `2109`, `2110`, `2112`, `133`, `2113`, `2115`, `2117`, `2120`, `2121`, `2122`, `2126`, `2127`, `2129`, `2130`, `2132`, `2134`, `2135`, `2137`, `2138`, `2139`, `2141`, `2143`, `2145`, `2147`, `2148`, `2149`, `2151`, `2152`, `2154`, `1976`, `2156`, `2157`, `2158`, `2159`, `2160`, `2161`, `2162`, `2163`, `2164`, `2165`, `2168`, `2170`, `2171`, `2173`, `2174`, `2177`, `2178`, `2180`, `2181`, `2184`, `2188`, `2189`, `2190`, `2191`, `2192`, `2194`, `2195`, `2196`, `2197`, `2198`, `2199`, `2200`, `2201`, `2204`, `2207`, `2208`, `2211`, `2213`, `2214`, `2215`, `2216`, `2218`, `2221`, `2222`, `2223`, `2225`, `2227`, `2229`, `2231`, `2232`, `2233`, `2235`, `2236`, `2237`, `2238`, `2239`, `2240`, `2241`, `2243`, `2245`, `2247`, `2249`, `2250`, `2252`, `2254`, `2256`, `2258`, `2259`, `2261`, `2264`, `2266`, `2268`, `2269`, `2270`, `2272`, `2273`, `2275`, `2276`, `2277`, `2279`, `2281`, `2283`, 
`2284`, `2286`, `2287`, `2289`, `2291`, `2292`, `2293`, `2295`, `2296`, `2297`, `2299`, `2300`, `2302`, `2305`, `2306`, `2307`, `2308`, `2309`, `2310`, `2311`, `2312`, `2314`, `2315`, `2316`, `2318`, `2319`, `2320`, `2321`, `2322`, `2324`, `2326`, `2327`, `2328`, `2330`, `2331`, `2332`, `2334`, `2336`, `2339`, `2340`, `2341`, `2343`, `2344`, `2346`, `2348`, `2350`, `2352`, `2355`, `2356`, `2357`, `2358`, `2361`, `2363`, `2365`, `2367`, `2369`, `2370`, `2371`, `2374`, `2375`, `2376`, `2377`, `2378`, `2380`, `2381`, `2382`, `2383`, `2384`, `2386`, `2387`, `2390`, `2392`, `2395`, `2396`, `2399`, `2401`, `2403`, `2404`, `2405`, `2406`, `2407`, `2408`, `2409`, `2410`, `2412`, `2413`, `2416`, `2417`, `2418`, `2419`, `2420`, `2423`, `2424`, `2426`, `2428`, `2429`, `2430`, `2431`, `2432`, `2433`, `2435`, `2436`, `2437`, `2440`, `2441`, `2442`, `2443`, `2447`, `2448`, `2450`, `2451`, `2452`, `2454`, `2458`, `2460`, `2461`, `2463`, `2465`, `2467`, `2471`, `2473`, `2475`, `2477`, `2479`, `2481`, `2482`, `2484`, `2485`, `2487`, `2488`, `2490`, `2491`, `2493`, `2495`, `2496`, `2497`, `2498`, `2499`, `2500`, `2501`, `2502`, `2505`, `2506`, `2507`, `2509`, `2511`, `2513`, `2515`, `2516`, `2517`, `2519`, `2520`, `2522`, `2524`, `2525`, `2526`, `2527`, `2528`, `2530`, `2532`, `2534`, `2535`, `2537`, `2538`, `2539`, `2541`, `2542`, `2544`, `2545`, `2547`, `2548`, `2549`, `2550`, `2552`, `2553`, `2554`, `2556`, `2557`, `2558`, `2560`, `2562`, `2564`, `2566`, `2567`, `2569`, `2570`, `2571`, `2572`, `2573`, `2575`, `2576`, `2578`, `2579`, `2581`, `2583`, `2586`, `2588`, `2589`, `2590`, `2591`, `2592`, `2593`, `2594`, `2596`, `2598`, `2599`, `2601`, `2602`, `2604`, `2606`, `2607`, `2608`, `2609`, `2613`, `2615`, `2617`, `2619`, `2620`, `2621`, `2622`, `2623`, `2624`, `2626`, `2627`, `2628`, `2629`, `2630`, `2631`, `2632`, `2633`, `2634`, `2635`, `2637`, `2638`, `2639`, `2641`, `2643`, `2645`, `2647`, `2648`, `2649`, `2650`, `2652`, `2653`, `2654`, `2656`, `2657`, `2658`, `2659`, `2660`, `2662`, `2663`, `2665`, `2667`, `2669`, `2670`, `2672`, `2673`, `2675`, `2677`, `2678`, `2679`, `2680`, `2682`, `2684`, `2686`, `2687`, `2689`, `2692`, `2694`, `2696`, `2697`, `2698`, `2699`, `2700`, `2701`, `2703`, `2705`, `2707`, `2708`, `2709`, `2711`, `2713`, `2714`, `2716`, `2719`, `2720`, `2722`, `2723`, `2724`, `2725`, `2727`, `2728`, `2729`, `2732`, `574`, `2733`, `2734`, `2735`, `2736`, `2737`, `2738`, `2739`, `2740`, `2742`, `2743`, `2744`, `2746`, `2747`, `2748`, `2750`, `2751`, `2753`, `2755`, `2756`, `2757`, `2758`, `2759`, `2760`, `2762`, `2763`, `2765`, `2766`, `2767`, `2770`, `2773`, `2775`, `2777`, `2779`, `2780`, `2782`, `2783`, `2784`, `2786`, `2787`, `2788`, `2789`, `2790`, `2791`, `2793`, `2794`, `2795`, `2796`, `2797`, `2799`, `2800`, `2801`, `2802`, `2804`, `2806`, `2807`, `2808`, `2810`, `2811`, `2813`, `2814`, `2816`, `2817`, `2819`, `2820`, `2821`, `2823`, `2824`, `2825`, `2826`, `2827`, `2828`, `2829`, `2830`, `2832`, `2834`, `2836`, `248`, `2837`, `2838`, `2839`, `2841`, `2843`, `2845`, `2847`, `2849`, `2851`, `2853`, `2855`, `2856`, `2858`, `2859`, `2860`, `2861`, `2863`, `2864`, `2866`, `2867`, `2868`, `2869`, `2871`, `2873`, `2874`, `2875`, `2878`, `2879`, `2880`, `2881`, `2882`, `2883`, `2885`, `2886`, `2889`, `2892`, `2894`, `2895`, `2897`, `2899`, `2900`, `2902`, `0`, `2903`, `2904`, `2905`, `2906`, `2907`, `74`, `2908`, `2910`, `2912`, `2914`, `2915`, `2916`, `2917`, `2919`, `2920`, `2922`, `2925`, `2926`, `2929`, `2931`, `2933`, `2934`, `2935`, `2937`, `2938`, `2939`, `2940`, `2942`, `2943`, 
`2944`, `2946`, `2948`, `2949`, `2951`, `2952`, `2953`, `2954`, `2955`, `2958`, `2960`, `2961`, `2962`, `2964`, `2965`, `2967`, `2968`, `2969`, `2970`, `2971`, `2972`, `2973`, `2976`, `2977`, `2978`, `2979`, `2981`, `2983`, `2985`, `2987`, `2989`, `2991`, `2992`, `2993`, `2994`, `2995`, `2996`, `2998`, `2999`, `3001`, `3003`, `3004`, `3007`, `3009`, `3011`, `3012`, `3013`, `3014`, `3016`, `3018`, `3020`, `3021`, `3022`, `3024`, `3026`, `3027`, `3029`, `3030`, `3031`, `3033`, `3034`, `3036`, `3038`, `3039`, `3041`, `3043`, `3044`, `3046`, `3047`, `3048`, `3050`, `3051`, `3052`, `3053`, `3055`, `3057`, `3058`, `3061`, `3063`, `3064`, `3065`, `3066`, `3067`, `3068`, `3069`, `3070`, `3072`, `3074`, `3075`, `3077`, `3078`, `3079`, `3080`, `3081`, `3082`, `3083`, `3084`, `3085`, `3086`, `3088`, `3090`, `3091`, `3092`, `3094`, `3095`, `3096`, `3097`, `3099`, `3100`, `3101`, `3102`, `3103`, `3104`, `3105`, `3106`, `3107`, `3108`, `3110`, `3111`, `3112`, `3114`, `3115`, `3116`, `3117`, `3118`, `3119`, `3120`, `3122`, `3124`, `3126`, `3128`, `3129`, `3131`, `3132`, `3133`, `3135`, `3136`, `3138`, `3139`, `3141`, `3143`, `3145`, `3146`, `3147`, `3148`, `3150`, `3153`, `3154`, `3155`, `3156`, `3157`, `3158`, `3159`, `3161`, `3162`, `3164`, `3165`, `3167`, `3168`, `3169`, `3171`, `3172`, `3173`, `3174`, `3175`, `3177`, `3178`, `3179`, `3180`, `3181`, `3182`, `3183`, `3185`, `2764`, `3188`, `3190`, `3191`, `3193`, `3195`, `3196`, `3197`, `3198`, `3199`, `3200`, `3202`, `13`, `3205`, `3206`, `3208`, `3209`, `3210`, `3212`, `3213`, `3214`, `3215`, `3216`, `3217`, `3219`, `3220`, `3222`, `3224`, `3225`, `3227`, `3228`, `3229`, `3230`, `3231`, `3232`, `3233`, `3235`, `3236`, `3237`, `3238`, `3240`, `3242`, `3243`, `3244`, `3245`, `3246`, `3248`, `3249`, `3250`, `3253`, `3254`, `3256`, `3257`, `3258`, `3259`, `3262`, `3263`, `3264`, `3265`, `3266`, `3267`, `3268`, `3269`, `3271`, `3273`, `3274`, `3275`, `3276`, `3277`, `3278`, `3279`, `3280`, `3281`, `3282`, `3284`, `3286`, `3287`, `3289`, `3290`, `3292`, `3293`, `3295`, `3296`, `3297`, `3298`, `3300`, `3302`, `3303`, `3304`, `3306`, `3308`, `3310`, `3311`, `3312`, `3314`, `3316`, `3317`, `3318`, `3320`, `3321`, `3323`, `3324`, `3327`, `3329`, `3330`, `3332`, `3333`, `3336`, `3338`, `3341`, `3343`, `3345`, `3347`, `3348`, `3350`, `3351`, `3353`, `3354`, `3355`, `3356`, `3358`, `3359`, `3360`, `3363`, `3364`, `649`, `3366`, `3368`, `3369`, `3371`, `3372`, `3373`, `3375`, `3377`, `3378`, `3380`, `3382`, `3384`, `3385`, `3387`, `3388`, `3391`, `3393`, `3394`, `3395`, `3396`, `3397`, `3398`, `3399`, `3401`, `3402`, `3403`, `3404`, `3405`, `3406`, `3407`, `3408`, `3409`, `3410`, `3412`, `3413`, `3415`, `3416`, `3417`, `3420`, `3421`, `3423`, `3424`, `3425`, `3426`, `3427`, `3428`, `3430`, `3431`, `3433`, `3434`, `3435`, `3437`, `3439`, `3440`, `3442`, `3444`, `3445`, `3446`, `3447`, `3449`, `3451`, `3452`, `3453`, `3456`, `3457`, `3458`, `3459`, `3460`, `85`, `3461`, `3463`, `3464`, `3465`, `3467`, `3469`, `3471`, `3473`, `3475`, `3477`, `3478`, `3480`, `3481`, `3482`, `3483`, `3484`, `3485`, `3486`, `3487`, `3489`, `3491`, `3494`, `3496`, `3497`, `3498`, `3499`, `3501`, `3502`, `3504`, `3505`, `3506`, `3508`, `3509`, `3510`, `3512`, `3513`, `3517`, `3518`, `3519`, `3520`, `3521`, `3522`, `3524`, `3525`, `3526`, `3527`, `3529`, `3532`, `3533`, `3535`, `3536`, `3537`, `3539`, `3541`, `3542`, `3543` |
</details>
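
The label inventory above can also be inspected programmatically. The snippet below is a minimal sketch, not part of the original card: it assumes the pipeline has been installed as a Python package named `hr_udv25_croatianset_trf`, and it only queries the components that are named in the tables above (`morphologizer`, `parser`).

```python
# Minimal sketch: enumerate the labels of the trained components.
# Assumes the pipeline is installed as the package `hr_udv25_croatianset_trf`.
import spacy

nlp = spacy.load("hr_udv25_croatianset_trf")
print(nlp.pipe_names)  # the components this pipeline actually ships with

for name in ("morphologizer", "parser"):
    labels = nlp.get_pipe(name).labels
    print(f"{name}: {len(labels)} labels")
```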
### Accuracy
| Type | Score |
| --- | --- |
| `TOKEN_F` | 99.97 |
| `TOKEN_P` | 99.97 |
| `TOKEN_R` | 99.96 |
| `TOKEN_ACC` | 99.99 |
| `SENTS_F` | 98.90 |
| `SENTS_P` | 99.06 |
| `SENTS_R` | 98.75 |
| `TAG_ACC` | 96.40 |
| `POS_ACC` | 98.50 |
| `MORPH_ACC` | 96.78 |
| `DEP_UAS` | 92.41 |
| `DEP_LAS` | 87.03 |
| `LEMMA_ACC` | 96.35 |
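
As a quick usage illustration (not taken from the original card), the hedged sketch below shows how the annotations scored above — POS tags, morphological features, dependency relations and lemmas — can be read off a processed document. The package name `hr_udv25_croatianset_trf` and the example sentence are assumptions for illustration only.

```python
# Hypothetical usage sketch: run the pipeline and read the annotations
# that the accuracy table above evaluates (POS, MORPH, DEP, LEMMA).
import spacy

nlp = spacy.load("hr_udv25_croatianset_trf")  # assumed package name
doc = nlp("Zagreb je glavni grad Hrvatske.")

for token in doc:
    print(token.text, token.pos_, token.morph, token.dep_, token.lemma_)
```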
|
{"language": ["hr"], "license": "cc-by-sa-4.0", "tags": ["spacy", "token-classification"]}
|
explosion/hr_udv25_croatianset_trf
| null |
[
"spacy",
"token-classification",
"hr",
"license:cc-by-sa-4.0",
"model-index",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"hr"
] |
TAGS
#spacy #token-classification #hr #license-cc-by-sa-4.0 #model-index #region-us
|
UD v2.5 benchmarking pipeline for UD\_Croatian-SET
### Label Scheme
View label scheme (3855 labels for 6 components)
### Accuracy
|
[
"### Label Scheme\n\n\n\nView label scheme (3855 labels for 6 components)",
"### Accuracy"
] |
[
"TAGS\n#spacy #token-classification #hr #license-cc-by-sa-4.0 #model-index #region-us \n",
"### Label Scheme\n\n\n\nView label scheme (3855 labels for 6 components)",
"### Accuracy"
] |