pipeline_tag (stringclasses, 48 values) | library_name (stringclasses, 198 values) | text (string, 1-900k chars) | metadata (string, 2-438k chars) | id (string, 5-122 chars) | last_modified (null) | tags (list, 1-1.84k items) | sha (null) | created_at (string, 25 chars) | arxiv (list, 0-201 items) | languages (list, 0-1.83k items) | tags_str (string, 17-9.34k chars) | text_str (string, 0-389k chars) | text_lists (list, 0-722 items) | processed_texts (list, 1-723 items) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
text2text-generation
|
transformers
|
## daT5-large
A smaller version of [Google's mt5-large](https://huggingface.co/google/mt5-large) model, where the original model is reduced to include only Danish embeddings.
## How to use
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emillykkejensen/daT5-large")
model = AutoModel.from_pretrained("emillykkejensen/daT5-large")
```
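For actual text2text generation it is usually more convenient to load the seq2seq head rather than the bare encoder stack; a minimal sketch (the Danish input is illustrative, and, like mt5 itself, this checkpoint is pretrained only, so downstream tasks typically require fine-tuning first):
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("emillykkejensen/daT5-large")
model = AutoModelForSeq2SeqLM.from_pretrained("emillykkejensen/daT5-large")

# Illustrative Danish input; the model card does not prescribe a prompt format
inputs = tokenizer("Danmark er et land i Skandinavien.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```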
## Further reading
[Gist](https://gist.github.com/emillykkejensen/8bf1b323495efc7252dee966e6bc1b5c) showing (in Danish) how the embeddings are extracted (for mt5-base)
[Article](https://towardsdatascience.com/how-to-adapt-a-multilingual-t5-model-for-a-single-language-b9f94f3d9c90) explaining how to do it by [David Dale](https://huggingface.co/cointegrated)
## Also check out
[daT5-base](https://huggingface.co/emillykkejensen/daT5-base)
|
{"language": ["da"], "license": "apache-2.0"}
|
emillykkejensen/daT5-large
| null |
[
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"da",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"da"
] |
TAGS
#transformers #pytorch #mt5 #text2text-generation #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
## daT5-large
A smaller version of Google's mt5-large model, where the original model is reduced to only include Danish embeddings.
## How to use
## Further reading
Gist showing (in Danish) how the embeddings are extracted (for mt5-base)
Article explaining how to do it by David Dale
## Also check out
daT5-base
|
[
"## daT5-large\nA smaller version of Google's mt5-large model, where the original model is reduced to only include Danish embeddings.",
"## How to use",
"## Further reading\n\nGist showing (in Danish) how the embeddings are extracted (for mt5-base)\n\nArticle explaining how to do it by David Dale",
"## Also check out\ndaT5-base"
] |
[
"TAGS\n#transformers #pytorch #mt5 #text2text-generation #da #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"## daT5-large\nA smaller version of Google's mt5-large model, where the original model is reduced to only include Danish embeddings.",
"## How to use",
"## Further reading\n\nGist showing (in Danish) how the embeddings are extracted (for mt5-base)\n\nArticle explaining how to do it by David Dale",
"## Also check out\ndaT5-base"
] |
fill-mask
|
transformers
|
# ClinicalBERT - Bio + Clinical BERT Model
The [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) paper contains four unique clinicalBERT models: initialized with BERT-Base (`cased_L-12_H-768_A-12`) or BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) & trained on either all MIMIC notes or only discharge summaries.
This model card describes the Bio+Clinical BERT model, which was initialized from [BioBERT](https://arxiv.org/abs/1901.08746) & trained on all MIMIC notes.
## Pretraining Data
The `Bio_ClinicalBERT` model was trained on all notes from [MIMIC III](https://www.nature.com/articles/sdata201635), a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see [here](https://mimic.physionet.org/). All notes from the `NOTEEVENTS` table were included (~880M words).
## Model Pretraining
### Note Preprocessing
Each note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into "History of Present Illness", "Family History", "Brief Hospital Course", etc. sections). Then each section was split into sentences using SciSpacy (`en_core_sci_md` tokenizer).
### Pretraining Procedures
The model was trained using code from [Google's BERT repository](https://github.com/google-research/bert) on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`).
### Pretraining Hyperparameters
We used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5·10⁻⁵ for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).
## How to use the model
Load the model via the transformers library:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_ClinicalBERT")
```
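Since the checkpoint was pretrained with masked language modeling, a quick sanity check is the fill-mask pipeline; a minimal sketch (the clinical sentence is an illustrative example, not from the paper):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="emilyalsentzer/Bio_ClinicalBERT")

# BERT-style models mark the blank with the [MASK] token
for pred in fill_mask("The patient was admitted with acute chest [MASK]."):
    print(f"{pred['token_str']}\t{pred['score']:.3f}")
```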
## More Information
Refer to the original paper, [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.
## Questions?
Post a GitHub issue on the [clinicalBERT repo](https://github.com/EmilyAlsentzer/clinicalBERT) or email emilya@mit.edu with any questions.
|
{"language": "en", "license": "mit", "tags": ["fill-mask"]}
|
emilyalsentzer/Bio_ClinicalBERT
| null |
[
"transformers",
"pytorch",
"tf",
"jax",
"bert",
"fill-mask",
"en",
"arxiv:1904.03323",
"arxiv:1901.08746",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.03323",
"1901.08746"
] |
[
"en"
] |
TAGS
#transformers #pytorch #tf #jax #bert #fill-mask #en #arxiv-1904.03323 #arxiv-1901.08746 #license-mit #endpoints_compatible #has_space #region-us
|
# ClinicalBERT - Bio + Clinical BERT Model
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base ('cased_L-12_H-768_A-12') or BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K') & trained on either all MIMIC notes or only discharge summaries.
This model card describes the Bio+Clinical BERT model, which was initialized from BioBERT & trained on all MIMIC notes.
## Pretraining Data
The 'Bio_ClinicalBERT' model was trained on all notes from MIMIC III, a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see here. All notes from the 'NOTEEVENTS' table were included (~880M words).
## Model Pretraining
### Note Preprocessing
Each note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into "History of Present Illness", "Family History", "Brief Hospital Course", etc. sections). Then each section was split into sentences using SciSpacy ('en_core_sci_md' tokenizer).
### Pretraining Procedures
The model was trained using code from Google's BERT repository on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K').
### Pretraining Hyperparameters
We used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5·10⁻⁵ for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).
## How to use the model
Load the model via the transformers library:
## More Information
Refer to the original paper, Publicly Available Clinical BERT Embeddings (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.
## Questions?
Post a GitHub issue on the clinicalBERT repo or email emilya@URL with any questions.
|
[
"# ClinicalBERT - Bio + Clinical BERT Model\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base ('cased_L-12_H-768_A-12') or BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K') & trained on either all MIMIC notes or only discharge summaries. \n\nThis model card describes the Bio+Clinical BERT model, which was initialized from BioBERT & trained on all MIMIC notes.",
"## Pretraining Data\nThe 'Bio_ClinicalBERT' model was trained on all notes from MIMIC III, a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see here. All notes from the 'NOTEEVENTS' table were included (~880M words).",
"## Model Pretraining",
"### Note Preprocessing\nEach note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into \"History of Present Illness\", \"Family History\", \"Brief Hospital Course\", etc. sections). Then each section was split into sentences using SciSpacy ('en core sci md' tokenizer).",
"### Pretraining Procedures\nThe model was trained using code from Google's BERT repository on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K').",
"### Pretraining Hyperparameters\nWe used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5 · 10−5 for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15\nand max predictions per sequence = 20).",
"## How to use the model\n\nLoad the model via the transformers library:",
"## More Information\n\nRefer to the original paper, Publicly Available Clinical BERT Embeddings (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.",
"## Questions?\n\nPost a Github issue on the clinicalBERT repo or email emilya@URL with any questions."
] |
[
"TAGS\n#transformers #pytorch #tf #jax #bert #fill-mask #en #arxiv-1904.03323 #arxiv-1901.08746 #license-mit #endpoints_compatible #has_space #region-us \n",
"# ClinicalBERT - Bio + Clinical BERT Model\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base ('cased_L-12_H-768_A-12') or BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K') & trained on either all MIMIC notes or only discharge summaries. \n\nThis model card describes the Bio+Clinical BERT model, which was initialized from BioBERT & trained on all MIMIC notes.",
"## Pretraining Data\nThe 'Bio_ClinicalBERT' model was trained on all notes from MIMIC III, a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see here. All notes from the 'NOTEEVENTS' table were included (~880M words).",
"## Model Pretraining",
"### Note Preprocessing\nEach note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into \"History of Present Illness\", \"Family History\", \"Brief Hospital Course\", etc. sections). Then each section was split into sentences using SciSpacy ('en core sci md' tokenizer).",
"### Pretraining Procedures\nThe model was trained using code from Google's BERT repository on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K').",
"### Pretraining Hyperparameters\nWe used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5 · 10−5 for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15\nand max predictions per sequence = 20).",
"## How to use the model\n\nLoad the model via the transformers library:",
"## More Information\n\nRefer to the original paper, Publicly Available Clinical BERT Embeddings (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.",
"## Questions?\n\nPost a Github issue on the clinicalBERT repo or email emilya@URL with any questions."
] |
fill-mask
|
transformers
|
# ClinicalBERT - Bio + Discharge Summary BERT Model
The [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) paper contains four unique clinicalBERT models: initialized with BERT-Base (`cased_L-12_H-768_A-12`) or BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`) & trained on either all MIMIC notes or only discharge summaries.
This model card describes the Bio+Discharge Summary BERT model, which was initialized from [BioBERT](https://arxiv.org/abs/1901.08746) & trained on only discharge summaries from MIMIC.
## Pretraining Data
The `Bio_Discharge_Summary_BERT` model was trained on all discharge summaries from [MIMIC III](https://www.nature.com/articles/sdata201635), a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see [here](https://mimic.physionet.org/). Only the discharge summaries from the `NOTEEVENTS` table were included (the full table contains ~880M words).
## Model Pretraining
### Note Preprocessing
Each note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into "History of Present Illness", "Family History", "Brief Hospital Course", etc. sections). Then each section was split into sentences using SciSpacy (`en_core_sci_md` tokenizer).
### Pretraining Procedures
The model was trained using code from [Google's BERT repository](https://github.com/google-research/bert) on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT (`BioBERT-Base v1.0 + PubMed 200K + PMC 270K`).
### Pretraining Hyperparameters
We used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5·10⁻⁵ for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).
## How to use the model
Load the model via the transformers library:
```python
from transformers import AutoTokenizer, AutoModel
tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT")
```
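Because the paper's focus is clinical text embeddings, a common next step is pooling the encoder output into a sentence vector; a minimal sketch, assuming PyTorch (mean pooling over non-padding tokens is one reasonable choice, not something the model card prescribes):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT")
model = AutoModel.from_pretrained("emilyalsentzer/Bio_Discharge_Summary_BERT")

inputs = tokenizer("Patient discharged home in stable condition.", return_tensors="pt")
with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # shape: (1, seq_len, 768)

# Mean-pool over non-padding tokens to get one sentence embedding
mask = inputs["attention_mask"].unsqueeze(-1)
embedding = (hidden * mask).sum(dim=1) / mask.sum(dim=1)
print(embedding.shape)  # torch.Size([1, 768])
```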
## More Information
Refer to the original paper, [Publicly Available Clinical BERT Embeddings](https://arxiv.org/abs/1904.03323) (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.
## Questions?
Post a GitHub issue on the [clinicalBERT repo](https://github.com/EmilyAlsentzer/clinicalBERT) or email emilya@mit.edu with any questions.
|
{"language": "en", "license": "mit", "tags": ["fill-mask"]}
|
emilyalsentzer/Bio_Discharge_Summary_BERT
| null |
[
"transformers",
"pytorch",
"jax",
"bert",
"fill-mask",
"en",
"arxiv:1904.03323",
"arxiv:1901.08746",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1904.03323",
"1901.08746"
] |
[
"en"
] |
TAGS
#transformers #pytorch #jax #bert #fill-mask #en #arxiv-1904.03323 #arxiv-1901.08746 #license-mit #endpoints_compatible #has_space #region-us
|
# ClinicalBERT - Bio + Discharge Summary BERT Model
The Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base ('cased_L-12_H-768_A-12') or BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K') & trained on either all MIMIC notes or only discharge summaries.
This model card describes the Bio+Discharge Summary BERT model, which was initialized from BioBERT & trained on only discharge summaries from MIMIC.
## Pretraining Data
The 'Bio_Discharge_Summary_BERT' model was trained on all discharge summaries from MIMIC III, a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see here. Only the discharge summaries from the 'NOTEEVENTS' table were included (the full table contains ~880M words).
## Model Pretraining
### Note Preprocessing
Each note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into "History of Present Illness", "Family History", "Brief Hospital Course", etc. sections). Then each section was split into sentences using SciSpacy ('en_core_sci_md' tokenizer).
### Pretraining Procedures
The model was trained using code from Google's BERT repository on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K').
### Pretraining Hyperparameters
We used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5·10⁻⁵ for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15 and max predictions per sequence = 20).
## How to use the model
Load the model via the transformers library:
## More Information
Refer to the original paper, Publicly Available Clinical BERT Embeddings (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.
## Questions?
Post a GitHub issue on the clinicalBERT repo or email emilya@URL with any questions.
|
[
"# ClinicalBERT - Bio + Discharge Summary BERT Model\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base ('cased_L-12_H-768_A-12') or BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K') & trained on either all MIMIC notes or only discharge summaries. \n\nThis model card describes the Bio+Discharge Summary BERT model, which was initialized from BioBERT & trained on only discharge summaries from MIMIC.",
"## Pretraining Data\nThe 'Bio_Discharge_Summary_BERT' model was trained on all discharge summaries from MIMIC III, a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see here. All notes from the 'NOTEEVENTS' table were included (~880M words).",
"## Model Pretraining",
"### Note Preprocessing\nEach note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into \"History of Present Illness\", \"Family History\", \"Brief Hospital Course\", etc. sections). Then each section was split into sentences using SciSpacy ('en core sci md' tokenizer).",
"### Pretraining Procedures\nThe model was trained using code from Google's BERT repository on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K').",
"### Pretraining Hyperparameters\nWe used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5 · 10−5 for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15\nand max predictions per sequence = 20).",
"## How to use the model\n\nLoad the model via the transformers library:",
"## More Information\n\nRefer to the original paper, Publicly Available Clinical BERT Embeddings (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.",
"## Questions?\n\nPost a Github issue on the clinicalBERT repo or email emilya@URL with any questions."
] |
[
"TAGS\n#transformers #pytorch #jax #bert #fill-mask #en #arxiv-1904.03323 #arxiv-1901.08746 #license-mit #endpoints_compatible #has_space #region-us \n",
"# ClinicalBERT - Bio + Discharge Summary BERT Model\n\nThe Publicly Available Clinical BERT Embeddings paper contains four unique clinicalBERT models: initialized with BERT-Base ('cased_L-12_H-768_A-12') or BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K') & trained on either all MIMIC notes or only discharge summaries. \n\nThis model card describes the Bio+Discharge Summary BERT model, which was initialized from BioBERT & trained on only discharge summaries from MIMIC.",
"## Pretraining Data\nThe 'Bio_Discharge_Summary_BERT' model was trained on all discharge summaries from MIMIC III, a database containing electronic health records from ICU patients at the Beth Israel Hospital in Boston, MA. For more details on MIMIC, see here. All notes from the 'NOTEEVENTS' table were included (~880M words).",
"## Model Pretraining",
"### Note Preprocessing\nEach note in MIMIC was first split into sections using a rules-based section splitter (e.g. discharge summary notes were split into \"History of Present Illness\", \"Family History\", \"Brief Hospital Course\", etc. sections). Then each section was split into sentences using SciSpacy ('en core sci md' tokenizer).",
"### Pretraining Procedures\nThe model was trained using code from Google's BERT repository on a GeForce GTX TITAN X 12 GB GPU. Model parameters were initialized with BioBERT ('BioBERT-Base v1.0 + PubMed 200K + PMC 270K').",
"### Pretraining Hyperparameters\nWe used a batch size of 32, a maximum sequence length of 128, and a learning rate of 5 · 10−5 for pre-training our models. The models trained on all MIMIC notes were trained for 150,000 steps. The dup factor for duplicating input data with different masks was set to 5. All other default parameters were used (specifically, masked language model probability = 0.15\nand max predictions per sequence = 20).",
"## How to use the model\n\nLoad the model via the transformers library:",
"## More Information\n\nRefer to the original paper, Publicly Available Clinical BERT Embeddings (NAACL Clinical NLP Workshop 2019) for additional details and performance on NLI and NER tasks.",
"## Questions?\n\nPost a Github issue on the clinicalBERT repo or email emilya@URL with any questions."
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `eml914/streaming_transformer_asr_librispeech`
This model was trained by Emiru Tsunoo using the librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 12eb132418a1f69548f7998e53273cd05d989ed9
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model eml914/streaming_transformer_asr_librispeech
```
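Decoding can also be driven from Python through the ESPnet2 inference API; a minimal offline-decoding sketch, assuming `espnet_model_zoo` is installed so the model tag resolves (the wav path is a placeholder, and this uses plain offline decoding rather than the streaming setup behind the results below):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Resolves the tag via espnet_model_zoo and downloads the checkpoint on first use
speech2text = Speech2Text.from_pretrained(
    "eml914/streaming_transformer_asr_librispeech"
)

speech, rate = soundfile.read("speech.wav")  # placeholder path; 16 kHz mono audio
text, tokens, token_ids, hyp = speech2text(speech)[0]  # best hypothesis
print(text)
```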
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Nov 17 18:18:46 JST 2021`
- python version: `3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.4.0`
- Git hash: `12eb132418a1f69548f7998e53273cd05d989ed9`
- Commit date: `Tue Nov 16 10:12:21 2021 +0900`
## asr_train_asr_streaming_fbank_pitch_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|54402|97.6|2.2|0.3|0.3|2.7|31.9|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|50948|93.5|5.8|0.7|0.9|7.4|50.4|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|52576|97.5|2.3|0.3|0.3|2.9|33.1|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean_dbg|2620|62|96.8|3.2|0.0|0.0|3.2|0.0|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|52343|93.5|5.7|0.8|0.9|7.4|53.7|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|288456|99.2|0.4|0.4|0.3|1.1|31.9|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|265951|97.2|1.6|1.2|0.9|3.7|50.4|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|281530|99.2|0.4|0.4|0.3|1.1|33.1|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean_dbg|2620|367|99.5|0.0|0.5|0.8|1.4|0.0|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|272758|97.3|1.5|1.3|0.9|3.6|53.7|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_clean|2703|68010|96.8|2.1|1.1|0.4|3.6|31.9|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/dev_other|2864|63110|91.9|5.9|2.2|1.5|9.6|50.4|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean|2620|65818|96.7|2.2|1.1|0.4|3.7|33.1|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_clean_dbg|2620|94|97.9|2.1|0.0|1.1|3.2|0.0|
|decode_asr_streaming_lm_lm_train_lm_adam_en_bpe5000_valid.loss.ave_asr_model_valid.acc.ave/test_other|2939|65101|91.8|5.5|2.7|1.2|9.4|53.7|
## ASR config
<details><summary>expand</summary>
```yaml
config: conf/tuning/train_asr_streaming.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_streaming_fbank_pitch_en_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 33851
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
unused_parameters: false
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_fbank_pitch_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_fbank_pitch_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_fbank_pitch_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_fbank_pitch_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 800
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/fbank_pitch/train_960_sp/feats.scp
- speech
- kaldi_ark
- - dump/fbank_pitch/train_960_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/fbank_pitch/dev/feats.scp
- speech
- kaldi_ark
- - dump/fbank_pitch/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ▁I
- ▁HE
- ▁THAT
- ▁WAS
- ED
- ▁IT
- ''''
- ▁HIS
- ING
- ▁YOU
- ▁WITH
- ▁FOR
- ▁HAD
- T
- ▁AS
- ▁HER
- ▁IS
- ▁BE
- ▁BUT
- ▁NOT
- ▁SHE
- D
- ▁AT
- ▁ON
- LY
- ▁HIM
- ▁THEY
- ▁ALL
- ▁HAVE
- ▁BY
- ▁SO
- ▁THIS
- ▁MY
- ▁WHICH
- ▁ME
- ▁SAID
- ▁FROM
- ▁ONE
- Y
- E
- ▁WERE
- ▁WE
- ▁NO
- N
- ▁THERE
- ▁OR
- ER
- ▁AN
- ▁WHEN
- ▁ARE
- ▁THEIR
- ▁WOULD
- ▁IF
- ▁WHAT
- ▁THEM
- ▁WHO
- ▁OUT
- M
- ▁DO
- ▁WILL
- ▁UP
- ▁BEEN
- P
- R
- ▁MAN
- ▁THEN
- ▁COULD
- ▁MORE
- C
- ▁INTO
- ▁NOW
- ▁VERY
- ▁YOUR
- ▁SOME
- ▁LITTLE
- ES
- ▁TIME
- RE
- ▁CAN
- ▁LIKE
- LL
- ▁ABOUT
- ▁HAS
- ▁THAN
- ▁DID
- ▁UPON
- ▁OVER
- IN
- ▁ANY
- ▁WELL
- ▁ONLY
- B
- ▁SEE
- ▁GOOD
- ▁OTHER
- ▁TWO
- L
- ▁KNOW
- ▁GO
- ▁DOWN
- ▁BEFORE
- A
- AL
- ▁OUR
- ▁OLD
- ▁SHOULD
- ▁MADE
- ▁AFTER
- ▁GREAT
- ▁DAY
- ▁MUST
- ▁COME
- ▁HOW
- ▁SUCH
- ▁CAME
- LE
- ▁WHERE
- ▁US
- ▁NEVER
- ▁THESE
- ▁MUCH
- ▁DE
- ▁MISTER
- ▁WAY
- G
- ▁S
- ▁MAY
- ATION
- ▁LONG
- OR
- ▁AM
- ▁FIRST
- ▁BACK
- ▁OWN
- ▁RE
- ▁AGAIN
- ▁SAY
- ▁MEN
- ▁WENT
- ▁HIMSELF
- ▁HERE
- NESS
- ▁THINK
- V
- IC
- ▁EVEN
- ▁THOUGHT
- ▁HAND
- ▁JUST
- ▁O
- ▁UN
- VE
- ION
- ▁ITS
- 'ON'
- ▁MAKE
- ▁MIGHT
- ▁TOO
- K
- ▁AWAY
- ▁LIFE
- TH
- ▁WITHOUT
- ST
- ▁THROUGH
- ▁MOST
- ▁TAKE
- ▁DON
- ▁EVERY
- F
- O
- ▁SHALL
- ▁THOSE
- ▁EYES
- AR
- ▁STILL
- ▁LAST
- ▁HOUSE
- ▁HEAD
- ABLE
- ▁NOTHING
- ▁NIGHT
- ITY
- ▁LET
- ▁MANY
- ▁OFF
- ▁BEING
- ▁FOUND
- ▁WHILE
- EN
- ▁SAW
- ▁GET
- ▁PEOPLE
- ▁FACE
- ▁YOUNG
- CH
- ▁UNDER
- ▁ONCE
- ▁TELL
- AN
- ▁THREE
- ▁PLACE
- ▁ROOM
- ▁YET
- ▁SAME
- IL
- US
- U
- ▁FATHER
- ▁RIGHT
- EL
- ▁THOUGH
- ▁ANOTHER
- LI
- RI
- ▁HEART
- IT
- ▁PUT
- ▁TOOK
- ▁GIVE
- ▁EVER
- ▁E
- ▁PART
- ▁WORK
- ERS
- ▁LOOK
- ▁NEW
- ▁KING
- ▁MISSUS
- ▁SIR
- ▁LOVE
- ▁MIND
- ▁LOOKED
- W
- RY
- ▁ASKED
- ▁LEFT
- ET
- ▁LIGHT
- CK
- ▁DOOR
- ▁MOMENT
- RO
- ▁WORLD
- ▁THINGS
- ▁HOME
- UL
- ▁THING
- LA
- ▁WHY
- ▁MOTHER
- ▁ALWAYS
- ▁FAR
- FUL
- ▁WATER
- CE
- IVE
- UR
- ▁HEARD
- ▁SOMETHING
- ▁SEEMED
- I
- LO
- ▁BECAUSE
- OL
- ▁END
- ▁TOLD
- ▁CON
- ▁YES
- ▁GOING
- ▁GOT
- RA
- IR
- ▁WOMAN
- ▁GOD
- EST
- TED
- ▁FIND
- ▁KNEW
- ▁SOON
- ▁EACH
- ▁SIDE
- H
- TON
- MENT
- ▁OH
- NE
- Z
- LING
- ▁AGAINST
- TER
- ▁NAME
- ▁MISS
- ▁QUITE
- ▁WANT
- ▁YEARS
- ▁FEW
- ▁BETTER
- ENT
- ▁HALF
- ▁DONE
- ▁ALSO
- ▁BEGAN
- ▁HAVING
- ▁ENOUGH
- IS
- ▁LADY
- ▁WHOLE
- LESS
- ▁BOTH
- ▁SEEN
- ▁SET
- ▁WHITE
- ▁COURSE
- IES
- ▁VOICE
- ▁CALLED
- ▁D
- ▁EX
- ATE
- ▁TURNED
- ▁GAVE
- ▁C
- ▁POOR
- MAN
- UT
- NA
- ▁DEAR
- ISH
- ▁GIRL
- ▁MORNING
- ▁BETWEEN
- LED
- ▁NOR
- IA
- ▁AMONG
- MA
- ▁
- ▁SMALL
- ▁REST
- ▁WHOM
- ▁FELT
- ▁HANDS
- ▁MYSELF
- ▁HIGH
- ▁M
- ▁HOWEVER
- ▁HERSELF
- ▁P
- CO
- ▁STOOD
- ID
- ▁KIND
- ▁HUNDRED
- AS
- ▁ROUND
- ▁ALMOST
- TY
- ▁SINCE
- ▁G
- AM
- ▁LA
- SE
- ▁BOY
- ▁MA
- ▁PERHAPS
- ▁WORDS
- ATED
- ▁HO
- X
- ▁MO
- ▁SAT
- ▁REPLIED
- ▁FOUR
- ▁ANYTHING
- ▁TILL
- ▁UNTIL
- ▁BLACK
- TION
- ▁CRIED
- RU
- TE
- ▁FACT
- ▁HELP
- ▁NEXT
- ▁LOOKING
- ▁DOES
- ▁FRIEND
- ▁LAY
- ANCE
- ▁POWER
- ▁BROUGHT
- VER
- ▁FIRE
- ▁KEEP
- PO
- FF
- ▁COUNTRY
- ▁SEA
- ▁WORD
- ▁CAR
- ▁DAYS
- ▁TOGETHER
- ▁IMP
- ▁REASON
- KE
- ▁INDEED
- TING
- ▁MATTER
- ▁FULL
- ▁TEN
- TIC
- ▁LAND
- ▁RATHER
- ▁AIR
- ▁HOPE
- ▁DA
- ▁OPEN
- ▁FEET
- ▁EN
- ▁FIVE
- ▁POINT
- ▁CO
- OM
- ▁LARGE
- ▁B
- ▁CL
- ME
- ▁GONE
- ▁CHILD
- INE
- GG
- ▁BEST
- ▁DIS
- UM
- ▁HARD
- ▁LORD
- OUS
- ▁WIFE
- ▁SURE
- ▁FORM
- DE
- ▁DEATH
- ANT
- ▁NATURE
- ▁BA
- ▁CARE
- ▁BELIEVE
- PP
- ▁NEAR
- ▁RO
- ▁RED
- ▁WAR
- IE
- ▁SPEAK
- ▁FEAR
- ▁CASE
- ▁TAKEN
- ▁ALONG
- ▁CANNOT
- ▁HEAR
- ▁THEMSELVES
- CI
- ▁PRESENT
- AD
- ▁MASTER
- ▁SON
- ▁THUS
- ▁LI
- ▁LESS
- ▁SUN
- ▁TRUE
- IM
- IOUS
- ▁THOUSAND
- ▁MONEY
- ▁W
- ▁BEHIND
- ▁CHILDREN
- ▁DOCTOR
- AC
- ▁TWENTY
- ▁WISH
- ▁SOUND
- ▁WHOSE
- ▁LEAVE
- ▁ANSWERED
- ▁THOU
- ▁DUR
- ▁HA
- ▁CERTAIN
- ▁PO
- ▁PASSED
- GE
- TO
- ▁ARM
- ▁LO
- ▁STATE
- ▁ALONE
- TA
- ▁SHOW
- ▁NEED
- ▁LIVE
- ND
- ▁DEAD
- ENCE
- ▁STRONG
- ▁PRE
- ▁TI
- ▁GROUND
- SH
- TI
- ▁SHORT
- IAN
- UN
- ▁PRO
- ▁HORSE
- MI
- ▁PRINCE
- ARD
- ▁FELL
- ▁ORDER
- ▁CALL
- AT
- ▁GIVEN
- ▁DARK
- ▁THEREFORE
- ▁CLOSE
- ▁BODY
- ▁OTHERS
- ▁SENT
- ▁SECOND
- ▁OFTEN
- ▁CA
- ▁MANNER
- MO
- NI
- ▁BRING
- ▁QUESTION
- ▁HOUR
- ▁BO
- AGE
- ▁ST
- ▁TURN
- ▁TABLE
- ▁GENERAL
- ▁EARTH
- ▁BED
- ▁REALLY
- ▁SIX
- 'NO'
- IST
- ▁BECOME
- ▁USE
- ▁READ
- ▁SE
- ▁VI
- ▁COMING
- ▁EVERYTHING
- ▁EM
- ▁ABOVE
- ▁EVENING
- ▁BEAUTIFUL
- ▁FEEL
- ▁RAN
- ▁LEAST
- ▁LAW
- ▁ALREADY
- ▁MEAN
- ▁ROSE
- WARD
- ▁ITSELF
- ▁SOUL
- ▁SUDDENLY
- ▁AROUND
- RED
- ▁ANSWER
- ICAL
- ▁RA
- ▁WIND
- ▁FINE
- ▁WON
- ▁WHETHER
- ▁KNOWN
- BER
- NG
- ▁TA
- ▁CAPTAIN
- ▁EYE
- ▁PERSON
- ▁WOMEN
- ▁SORT
- ▁ASK
- ▁BROTHER
- ▁USED
- ▁HELD
- ▁BIG
- ▁RETURNED
- ▁STRANGE
- ▁BU
- ▁PER
- ▁FREE
- ▁EITHER
- ▁WITHIN
- ▁DOUBT
- ▁YEAR
- ▁CLEAR
- ▁SIGHT
- ▁GRA
- ▁LOST
- ▁KEPT
- ▁F
- PE
- ▁BAR
- ▁TOWN
- ▁SLEEP
- ARY
- ▁HAIR
- ▁FRIENDS
- ▁DREAM
- ▁FELLOW
- PER
- ▁DEEP
- QUE
- ▁BECAME
- ▁REAL
- ▁PAST
- ▁MAKING
- RING
- ▁COMP
- ▁ACT
- ▁BAD
- HO
- STER
- ▁YE
- ▁MEANS
- ▁RUN
- MEN
- ▁DAUGHTER
- ▁SENSE
- ▁CITY
- ▁SOMETIMES
- ▁TOWARDS
- ▁ROAD
- ▁SP
- ▁LU
- ▁READY
- ▁FOOT
- ▁COLD
- ▁SA
- ▁LETTER
- ▁ELSE
- ▁MAR
- ▁STA
- BE
- ▁TRUTH
- ▁LE
- BO
- ▁BUSINESS
- CHE
- ▁JOHN
- ▁SUBJECT
- ▁COURT
- ▁IDEA
- ILY
- ▁RIVER
- ATING
- ▁FAMILY
- HE
- ▁DIDN
- ▁GLAD
- ▁SEVERAL
- IAL
- ▁UNDERSTAND
- ▁SC
- ▁POSSIBLE
- ▁DIFFERENT
- ▁RETURN
- ▁ARMS
- ▁LOW
- ▁HOLD
- ▁TALK
- ▁RU
- ▁WINDOW
- ▁INTEREST
- ▁SISTER
- SON
- ▁SH
- ▁BLOOD
- ▁SAYS
- ▁CAP
- ▁DI
- ▁HUMAN
- ▁CAUSE
- NCE
- ▁THANK
- ▁LATE
- GO
- ▁CUT
- ▁ACROSS
- ▁STORY
- NT
- ▁COUNT
- ▁ABLE
- DY
- LEY
- ▁NUMBER
- ▁STAND
- ▁CHURCH
- ▁THY
- ▁SUPPOSE
- LES
- BLE
- OP
- ▁EFFECT
- BY
- ▁K
- ▁NA
- ▁SPOKE
- ▁MET
- ▁GREEN
- ▁HUSBAND
- ▁RESPECT
- ▁PA
- ▁FOLLOWED
- ▁REMEMBER
- ▁LONGER
- ▁AGE
- ▁TAKING
- ▁LINE
- ▁SEEM
- ▁HAPPY
- LAND
- EM
- ▁STAY
- ▁PLAY
- ▁COMMON
- ▁GA
- ▁BOOK
- ▁TIMES
- ▁OBJECT
- ▁SEVEN
- QUI
- DO
- UND
- ▁FL
- ▁PRETTY
- ▁FAIR
- WAY
- ▁WOOD
- ▁REACHED
- ▁APPEARED
- ▁SWEET
- ▁FALL
- BA
- ▁PASS
- ▁SIGN
- ▁TREE
- IONS
- ▁GARDEN
- ▁ILL
- ▁ART
- ▁REMAIN
- ▁OPENED
- ▁BRIGHT
- ▁STREET
- ▁TROUBLE
- ▁PAIN
- ▁CONTINUED
- ▁SCHOOL
- OUR
- ▁CARRIED
- ▁SAYING
- HA
- ▁CHANGE
- ▁FOLLOW
- ▁GOLD
- ▁SW
- ▁FEELING
- ▁COMMAND
- ▁BEAR
- ▁CERTAINLY
- ▁BLUE
- ▁NE
- CA
- ▁WILD
- ▁ACCOUNT
- ▁OUGHT
- UD
- ▁T
- ▁BREATH
- ▁WANTED
- ▁RI
- ▁HEAVEN
- ▁PURPOSE
- ▁CHARACTER
- ▁RICH
- ▁PE
- ▁DRESS
- OS
- FA
- ▁TH
- ▁ENGLISH
- ▁CHANCE
- ▁SHIP
- ▁VIEW
- ▁TOWARD
- AK
- ▁JOY
- ▁JA
- ▁HAR
- ▁NEITHER
- ▁FORCE
- ▁UNCLE
- DER
- ▁PLAN
- ▁PRINCESS
- DI
- ▁CHIEF
- ▁HAT
- ▁LIVED
- ▁AB
- ▁VISIT
- ▁MOR
- TEN
- ▁WALL
- UC
- ▁MINE
- ▁PLEASURE
- ▁SMILE
- ▁FRONT
- ▁HU
- ▁DEAL
- OW
- ▁FURTHER
- GED
- ▁TRIED
- DA
- VA
- ▁NONE
- ▁ENTERED
- ▁QUEEN
- ▁PAY
- ▁EL
- ▁EXCEPT
- ▁SHA
- ▁FORWARD
- ▁EIGHT
- ▁ADDED
- ▁PUBLIC
- ▁EIGHTEEN
- ▁STAR
- ▁HAPPENED
- ▁LED
- ▁WALKED
- ▁ALTHOUGH
- ▁LATER
- ▁SPIRIT
- ▁WALK
- ▁BIT
- ▁MEET
- LIN
- ▁FI
- LT
- ▁MOUTH
- ▁WAIT
- ▁HOURS
- ▁LIVING
- ▁YOURSELF
- ▁FAST
- ▁CHA
- ▁HALL
- ▁BEYOND
- ▁BOAT
- ▁SECRET
- ENS
- ▁CHAIR
- RN
- ▁RECEIVED
- ▁CAT
- RESS
- ▁DESIRE
- ▁GENTLEMAN
- UGH
- ▁LAID
- EVER
- ▁OCCASION
- ▁WONDER
- ▁GU
- ▁PARTY
- DEN
- ▁FISH
- ▁SEND
- ▁NEARLY
- ▁TRY
- CON
- ▁SEEMS
- RS
- ▁BELL
- ▁BRA
- ▁SILENCE
- IG
- ▁GUARD
- ▁DIE
- ▁DOING
- ▁TU
- ▁COR
- ▁EARLY
- ▁BANK
- ▁FIGURE
- IF
- ▁ENGLAND
- ▁MARY
- ▁AFRAID
- LER
- ▁FO
- ▁WATCH
- ▁FA
- ▁VA
- ▁GRE
- ▁AUNT
- PED
- ▁SERVICE
- ▁JE
- ▁PEN
- ▁MINUTES
- ▁PAN
- ▁TREES
- NED
- ▁GLASS
- ▁TONE
- ▁PLEASE
- ▁FORTH
- ▁CROSS
- ▁EXCLAIMED
- ▁DREW
- ▁EAT
- ▁AH
- ▁GRAVE
- ▁CUR
- PA
- URE
- CENT
- ▁MILES
- ▁SOFT
- ▁AGO
- ▁POSITION
- ▁WARM
- ▁LENGTH
- ▁NECESSARY
- ▁THINKING
- ▁PICTURE
- ▁PI
- SHIP
- IBLE
- ▁HEAVY
- ▁ATTENTION
- ▁DOG
- ABLY
- ▁STANDING
- ▁NATURAL
- ▁APPEAR
- OV
- ▁CAUGHT
- VO
- ISM
- ▁SPRING
- ▁EXPERIENCE
- ▁PAT
- OT
- ▁STOPPED
- ▁REGARD
- ▁HARDLY
- ▁SELF
- ▁STRENGTH
- ▁GREW
- ▁KNIGHT
- ▁OPINION
- ▁WIDE
- ▁INSTEAD
- ▁SOUTH
- ▁TRANS
- ▁CORNER
- ▁LEARN
- ▁ISLAND
- ▁MI
- ▁THIRD
- ▁STE
- ▁STRAIGHT
- ▁TEA
- ▁BOUND
- ▁SEEING
- ▁JU
- ▁DINNER
- ▁BEAUTY
- ▁PEACE
- AH
- ▁REP
- ▁SILENT
- ▁CRE
- ALLY
- RIC
- ▁STEP
- ▁VER
- ▁JO
- GER
- ▁SITTING
- ▁THIRTY
- ▁SAVE
- ENED
- ▁GLANCE
- ▁REACH
- ▁ACTION
- ▁SAL
- ▁SAD
- ▁STONE
- ITIES
- ▁FRENCH
- ▁STRUCK
- ▁PAPER
- ▁WHATEVER
- ▁SUB
- ▁DISTANCE
- ▁WRONG
- ▁KNOWLEDGE
- ▁SAFE
- ▁SNOW
- ▁MUSIC
- ▁FIFTY
- RON
- ▁ATTEMPT
- ▁GOVERNMENT
- TU
- ▁CROWD
- ▁BESIDES
- ▁LOVED
- ▁BOX
- ▁DIRECTION
- ▁TRAIN
- ▁NORTH
- ▁THICK
- ▁GETTING
- AV
- ▁FLOOR
- ▁COMPANY
- ▁BLOW
- ▁PLAIN
- TRO
- ▁BESIDE
- ▁ROCK
- ▁IMMEDIATELY
- FI
- ▁SHADOW
- ▁SIT
- ORS
- ILE
- ▁DRINK
- ▁SPOT
- ▁DANGER
- ▁AL
- ▁SAINT
- ▁SLOWLY
- ▁PALACE
- IER
- ▁RESULT
- ▁PETER
- ▁FOREST
- ▁BELONG
- ▁SU
- ▁PAR
- RIS
- ▁TEARS
- ▁APPEARANCE
- ▁GATE
- BU
- ITION
- ▁QUICKLY
- ▁QUIET
- ▁LONDON
- ▁START
- ▁BROWN
- TRA
- KIN
- ▁CONSIDER
- ▁BATTLE
- ▁ANNE
- ▁PIECE
- ▁DIED
- ▁SUCCESS
- ▁LIPS
- ▁FILLED
- ▁FORGET
- ▁POST
- IFIED
- ▁MARGARET
- ▁FOOD
- HAM
- ▁PLEASANT
- ▁FE
- ▁EXPRESSION
- ▁POCKET
- ▁FRESH
- ▁WEAR
- TRI
- ▁BROKEN
- ▁LAUGHED
- GING
- ▁FOLLOWING
- WN
- IP
- ▁TOUCH
- ▁YOUTH
- ATIVE
- ▁LEG
- ▁WEEK
- ▁REMAINED
- ▁EASY
- NER
- RK
- ▁ENTER
- ▁FIGHT
- ▁PLACED
- ▁TRAVEL
- ▁SIMPLE
- ▁GIRLS
- ▁WAITING
- ▁STOP
- ▁WAVE
- AU
- ▁WISE
- ▁CAMP
- TURE
- UB
- ▁VE
- ▁OFFICE
- ▁GRAND
- ▁FIT
- ▁JUDGE
- UP
- MENTS
- ▁QUICK
- HI
- ▁FLO
- RIES
- VAL
- ▁COMFORT
- ▁PARTICULAR
- ▁STARTED
- ▁SUIT
- ▁NI
- ▁PALE
- ▁IMPOSSIBLE
- ▁HOT
- ▁CONVERSATION
- ▁SCENE
- ▁BOYS
- ▁WIN
- ▁BRE
- ▁SOCIETY
- ▁OUTSIDE
- ▁WRITE
- ▁EFFORT
- ▁TALKING
- ▁FORTUNE
- ▁NINE
- ▁WA
- ▁SINGLE
- ▁RULE
- ▁PORT
- ▁WINTER
- ▁CAST
- ▁CRA
- ▁HAPPEN
- ▁CRO
- ▁SHUT
- NING
- ▁GUN
- ▁NOBLE
- ▁BEGIN
- ▁PATH
- ▁SKY
- ▁WONDERFUL
- ▁SUDDEN
- ▁ARMY
- ▁CHE
- ▁WORTH
- ▁MOUNTAIN
- ▁MIN
- AG
- ▁FLU
- ▁GRACE
- ▁CHAPTER
- ▁BELOW
- ▁RING
- ▁TURNING
- ▁IRON
- ▁TOP
- ▁AFTERNOON
- ORY
- ▁EVIL
- ▁TRUST
- ▁BOW
- ▁TRI
- ▁SAIL
- ▁CONTENT
- ▁HORSES
- ITE
- ▁SILVER
- AP
- ▁LAD
- ▁RUNNING
- ▁HILL
- ▁BEGINNING
- ▁MAD
- ▁HABIT
- GRA
- ▁CLOTHES
- ▁MORROW
- ▁CRY
- ▁FASHION
- ▁PRESENCE
- ▁Z
- FE
- ▁ARRIVED
- ▁QUARTER
- ▁PERFECT
- ▁WO
- ▁TRA
- ▁USUAL
- ▁NECK
- ▁MARRIED
- ▁SEAT
- ▁WI
- ▁GAR
- ▁SAND
- ▁SHORE
- ▁GIVING
- NY
- ▁PROBABLY
- ▁MINUTE
- ▁EXPECT
- ▁DU
- ▁SHOT
- ▁INSTANT
- ▁DEGREE
- ▁COLOR
- ▁WEST
- RT
- ▁MARCH
- ▁BIRD
- ▁SHOWED
- ▁GREATER
- ▁SERIOUS
- ▁CARRY
- ▁COVERED
- ▁FORMER
- ▁LOUD
- ▁MOVED
- ▁MASS
- ▁SEEK
- ▁CHO
- GEN
- ▁ROMAN
- IB
- ▁MOON
- ▁BOARD
- ▁STREAM
- ▁EASILY
- ▁WISHED
- ▁SEARCH
- ▁COULDN
- ▁MONTHS
- ▁SICK
- LIE
- ▁DUTY
- ▁TWELVE
- ▁FAINT
- ▁STRANGER
- ▁SURPRISE
- ▁KILL
- ▁LEAVING
- ▁JOURNEY
- ▁SCARCELY
- ▁RAISED
- ▁SPEAKING
- ▁TERRIBLE
- ▁TOM
- ▁FIELD
- ▁GAME
- ▁QUA
- ▁PROMISE
- ▁LIE
- ▁CONDITION
- ▁TRO
- ▁PERSONAL
- ▁TALL
- ▁STICK
- ▁THREW
- ▁MARRY
- ▁VAN
- ▁BURN
- ▁ACCORDING
- ▁RISE
- ▁ATTACK
- ▁SWORD
- ▁GUESS
- ▁THOUGHTS
- ▁THIN
- ▁THROW
- ▁CALM
- SIDE
- ▁VILLAGE
- ▁DEN
- ▁ANXIOUS
- ▁MER
- GI
- ▁EXPECTED
- ▁BALL
- ▁ESPECIALLY
- ▁CHARGE
- ▁MEASURE
- ISE
- ▁NICE
- ▁TRYING
- ▁ALLOW
- ▁SHARP
- ▁BREAD
- ▁HONOUR
- ▁HONOR
- ▁ENTIRELY
- ▁BILL
- ▁BRI
- ▁WRITTEN
- ▁AR
- ▁BROKE
- ▁KILLED
- ▁MARK
- ▁VEN
- ▁LADIES
- ▁LEARNED
- ▁FLOWERS
- PLE
- ▁FORTY
- ▁OFFER
- ▁HAPPINESS
- ▁PRAY
- ▁CLASS
- ▁FER
- ▁PRINCIPLE
- GU
- ▁BOOKS
- ▁SHAPE
- ▁SUMMER
- ▁JACK
- ▁DRAW
- ▁GOLDEN
- ▁DECIDED
- ▁LEAD
- ▁UNLESS
- ▁HARM
- ▁LISTEN
- HER
- ▁SHOOK
- ▁INFLUENCE
- ▁PERFECTLY
- ▁MARRIAGE
- ▁BROAD
- ▁ESCAPE
- ▁STATES
- ▁MIDDLE
- ▁PLANT
- ▁MIL
- ▁MOVEMENT
- ▁NOISE
- ▁ENEMY
- ▁HISTORY
- ▁BREAK
- ROUS
- ▁UNDERSTOOD
- ▁LATTER
- FER
- ▁COMES
- ▁MERELY
- ▁SIMPLY
- WI
- ▁IMAGINE
- ▁LOWER
- ▁CONDUCT
- ▁BORN
- WA
- ▁YARD
- ▁KA
- ▁CLOSED
- ▁NOTE
- GA
- ▁STRA
- RAN
- ▁EXIST
- EV
- ▁SPEECH
- ▁BITTER
- JO
- ▁MAKES
- ▁GRASS
- ▁REPLY
- ▁CHANGED
- ▁MON
- ▁LYING
- ▁DANCE
- ▁FINALLY
- ▁AMERICAN
- ▁ENJOY
- ▁CONTAIN
- ▁MEANT
- USE
- ▁OBSERVED
- THER
- ▁LAUGH
- ▁AFTERWARDS
- ▁BEAT
- ▁RACE
- ▁EQUAL
- ▁RAIN
- PS
- ▁STEPS
- ▁BENEATH
- ▁TAIL
- ▁TASTE
- IO
- EY
- ▁CHAR
- ▁GE
- GN
- TIN
- ▁GROW
- ▁TE
- IANS
- ▁MOVE
- ▁REPEATED
- ▁DRIVE
- TUR
- ▁SI
- CLOCK
- ▁BRAVE
- ▁MADAME
- ▁LOT
- ▁CASTLE
- ▁HI
- AND
- ▁FUTURE
- ▁RELATION
- ▁SORRY
- ▁HEALTH
- ▁DICK
- ▁R
- ▁BUILDING
- ▁EDGE
- ▁BLESS
- ▁SPITE
- WE
- ▁MIS
- ▁PRISONER
- ▁ALLOWED
- ▁PH
- ▁CATCH
- MER
- ETH
- ▁COAT
- ▁COMPLETE
- ▁WOULDN
- ▁CREATURE
- ▁YELLOW
- ▁IMPORTANT
- ▁ADD
- ▁PASSING
- ▁DARKNESS
- ▁CARRIAGE
- ▁MILL
- ▁FIFTEEN
- NCY
- ▁HUNG
- ▁OB
- ▁PLEASED
- ▁SPREAD
- ▁CURIOUS
- ▁WORSE
- ▁CIRCUMSTANCES
- ▁GI
- LAR
- ▁CAL
- ▁HY
- ▁MERE
- ▁JANE
- ▁EAST
- BI
- ▁CUP
- ▁BLIND
- ▁PASSION
- ▁DISCOVERED
- ▁NOTICE
- ▁REPORT
- ▁SPACE
- ▁PRESENTLY
- ▁SORROW
- ▁PACK
- ▁DIN
- CY
- ▁DRY
- ▁ANCIENT
- ▁DRESSED
- ▁COVER
- ▁VO
- ▁EXISTENCE
- ▁EXACTLY
- ▁BEAST
- ▁PROPER
- ▁DROPPED
- ▁CLEAN
- ▁COLOUR
- ▁HOST
- ▁CHAMBER
- ▁FAITH
- LET
- ▁DETERMINED
- ▁PRIEST
- ▁STORM
- ▁SKIN
- ▁DARE
- ▁PERSONS
- ▁PICK
- ▁NARROW
- ▁SUPPORT
- ▁PRIVATE
- ▁SMILED
- ▁COUSIN
- ▁DRAWING
- ▁ATTEND
- ▁COOK
- ▁PREVENT
- ▁VARIOUS
- ▁BLA
- ▁FIXED
- ▁WEAK
- THE
- ▁HOLE
- ▁BOTTOM
- ▁NOBODY
- ADE
- ▁LEGS
- ITCH
- ▁INDIVIDUAL
- ▁EARS
- LIKE
- ▁ADVANTAGE
- ▁FRANCE
- ▁BON
- ▁WINE
- ▁LIVES
- OD
- ▁WALLS
- ▁TIRED
- ▁SHOP
- ▁ANIMAL
- ▁CRU
- ▁WROTE
- ▁ROYAL
- ▁CONSIDERED
- ▁MORAL
- ▁COMPANION
- ▁LOSE
- ▁ISN
- ▁BAG
- ▁LAKE
- ▁INTER
- ▁COM
- ▁LETTERS
- ▁LUCK
- ▁EAR
- ▁GERMAN
- ▁PET
- ▁SAKE
- ▁DROP
- ▁PAID
- ▁BREAKFAST
- ▁LABOR
- ▁DESERT
- ▁DECLARED
- ▁HUM
- ▁STUDY
- ▁INSTANCE
- ONE
- ▁SOMEWHAT
- ▁CLOTH
- ▁SPECIAL
- ▁COLONEL
- ▁SONG
- ▁MAIN
- ▁VALUE
- ▁PROUD
- ▁EXPRESS
- ▁NATION
- ▁HANDSOME
- ▁CONFESS
- ▁PU
- ▁PASSAGE
- ▁PERIOD
- ▁CUSTOM
- ▁HURT
- ▁SHOULDER
- ▁CHRIST
- ZA
- ▁RECEIVE
- ▁DIFFICULT
- ▁DEPEND
- ▁MEETING
- ▁CHI
- ▁GEN
- LIGHT
- ▁BELIEVED
- ▁SOCIAL
- ▁DIFFICULTY
- ▁GREATEST
- ▁DRAWN
- ▁GRANT
- ▁BIRDS
- ▁ANGRY
- ▁HEAT
- UFF
- ▁DUE
- ▁PLACES
- ▁SIN
- ▁COURAGE
- ▁EVIDENTLY
- ▁GENTLE
- ▁CRUEL
- ▁GEORGE
- ▁GRI
- ▁SERVANT
- ▁U
- ▁PURE
- OOK
- ▁KNOWS
- ▁KNOWING
- LF
- ▁WRITING
- ▁REMEMBERED
- ▁CU
- ▁HOLDING
- ▁TENDER
- ▁QUI
- ▁BURST
- ▁SURELY
- IGN
- ▁VALLEY
- ▁FU
- ▁BUTTER
- ▁SPOKEN
- ▁STORE
- ▁DISC
- ▁CHRISTIAN
- ▁PARIS
- ▁HENRY
- ▁FINISHED
- ▁PROVE
- ▁FOOL
- ▁SOLDIERS
- ▁LANGUAGE
- ▁INSIDE
- ▁BAN
- ▁FALLEN
- ROW
- ▁MAL
- ▁BABY
- ▁SITUATION
- ▁WATCHED
- ANS
- ▁RUIN
- ▁GENTLEMEN
- ▁FRO
- ▁FANCY
- ▁ACCEPT
- ▁SEASON
- ▁OURSELVES
- ▁SAN
- ▁SPEED
- IZED
- ▁COOL
- ▁SERVE
- ▁VESSEL
- ▁WILLIAM
- ▁OBLIGED
- ▁GROUP
- FORM
- ▁GOES
- UOUS
- ▁LEAVES
- ▁PECULIAR
- ▁NEWS
- ▁VAIN
- ▁EVERYBODY
- ▁PIN
- UG
- ▁FORGOTTEN
- ▁FRA
- GAN
- ▁CAREFULLY
- ▁FLASH
- UCH
- ▁FUR
- ▁MURDER
- ▁DELIGHT
- ▁WAITED
- ▁RENDER
- ▁PROPERTY
- ▁NOTICED
- ▁ROLL
- ▁KNOCK
- ▁EARNEST
- KI
- ▁HONEST
- ▁PROMISED
- ▁BAL
- AW
- ▁WALKING
- ANG
- ▁SQUARE
- ▁QUIETLY
- ▁CLOUD
- WOOD
- ▁FORMED
- ▁HIGHER
- ▁BUILT
- ▁FATE
- ▁TEACH
- MY
- ▁FALSE
- ▁YORK
- ▁DUST
- ▁CLIMB
- ▁FOND
- ▁GROWN
- ▁DESCEND
- ▁RAG
- ▁FRUIT
- ▁GENERALLY
- ▁OFFERED
- ▁ER
- ▁NURSE
- POSE
- ▁SPENT
- ▁JOIN
- ▁STATION
- ▁MEANING
- ▁SMOKE
- HOOD
- ▁ROUGH
- JU
- ▁LIKELY
- ▁SURFACE
- ▁KE
- ▁MONTH
- ▁POSSESSION
- ▁TONGUE
- ▁DUKE
- ▁NOSE
- ▁LAUGHING
- ▁WEATHER
- ▁WHISPERED
- ▁SYSTEM
- ▁LAWS
- DDLE
- ▁TOUCHED
- ▁TRADE
- LD
- ▁SURPRISED
- RIN
- ▁ARCH
- ▁WEALTH
- FOR
- ▁TEMPER
- ▁FRANK
- ▁GAL
- ▁BARE
- ▁OPPORTUNITY
- ▁CLAIM
- ▁ANIMALS
- ▁REV
- ▁COST
- ▁WASH
- ZE
- ▁CORN
- ▁OPPOSITE
- ▁POLICE
- ▁IDEAS
- LON
- ▁KEY
- ▁READING
- ▁COLLECT
- CHED
- ▁H
- ▁CROWN
- ▁TAR
- ▁SWIFT
- ▁SHOULDERS
- ▁ICE
- ▁GRAY
- ▁SHARE
- ▁PREPARED
- ▁GRO
- ▁UND
- ▁TER
- ▁EMPTY
- CING
- ▁SMILING
- ▁AVOID
- ▁DIFFERENCE
- ▁EXPLAIN
- ▁POUR
- ▁ATTRACT
- ▁OPENING
- ▁WHEEL
- ▁MATERIAL
- ▁BREAST
- ▁SUFFERING
- ▁DISTINCT
- ▁BOOT
- ▁ROW
- ▁FINGERS
- HAN
- ▁ALTOGETHER
- ▁FAT
- ▁PAPA
- ▁BRAIN
- ▁ASLEEP
- ▁GREY
- ▁SUM
- ▁GAS
- ▁WINDOWS
- ▁ALIVE
- ▁PROCEED
- ▁FLOWER
- ▁LEAP
- ▁PUR
- ▁PIECES
- ▁ALTER
- ▁MEMORY
- IENT
- ▁FILL
- ▁CLO
- ▁THROWN
- ▁KINGDOM
- ▁RODE
- IUS
- ▁MAID
- ▁DIM
- ▁BAND
- ▁VIRTUE
- ▁DISH
- ▁GUEST
- ▁LOSS
- ▁CAUSED
- ▁MOTION
- ▁POT
- ▁MILLION
- ▁FAULT
- ▁LOVELY
- ▁HERO
- PPING
- ▁UNITED
- ▁SPI
- SOME
- BRA
- ▁MOUNTAINS
- ▁NU
- ▁SATISFIED
- ▁DOLLARS
- ▁LOVER
- ▁CONCEAL
- ▁VAST
- ▁PULL
- ▁HATH
- ▁RUSH
- ▁J
- ▁DESPAIR
- EX
- ▁HEIGHT
- ▁CE
- ▁BENT
- ▁PITY
- ▁RISING
- ATH
- ▁PRIDE
- ▁HURRY
- KA
- ▁SETTLED
- ▁JUSTICE
- ▁LIFTED
- PEN
- ▁SOLDIER
- ▁FINDING
- ▁REMARK
- ▁REGULAR
- ▁STRUGGLE
- ▁MACHINE
- ▁SING
- ▁HURRIED
- ▁SUFFICIENT
- ▁REPRESENT
- ▁DOUBLE
- ▁ALARM
- ▁SUPPER
- ▁DREADFUL
- ▁FORE
- ATOR
- ▁STOCK
- ▁TIN
- ▁EXAMPLE
- ▁ROOF
- ▁FLOW
- ▁SUPPOSED
- ▁PRESERV
- ▁L
- ▁LISTENED
- OC
- ▁STO
- ▁SECURE
- ▁FRIGHTENED
- ▁DISTURB
- ▁EMOTION
- ▁SERVANTS
- ▁YO
- ▁BUY
- ▁FORCED
- ▁KITCHEN
- ▁TERROR
- ▁STAIRS
- ▁SIXTY
- KER
- ▁ORDINARY
- ▁DIRECTLY
- ▁HEADS
- ▁METHOD
- ▁FORGIVE
- ▁AWFUL
- ▁REFLECT
- ▁GREATLY
- ▁TALKED
- ▁RIDE
- STONE
- ▁FAVOUR
- ▁WELCOME
- ▁SEIZED
- OU
- ▁CONTROL
- ▁ORDERED
- ▁ANGEL
- ▁USUALLY
- ▁POET
- ▁BOLD
- LINE
- ▁ADVENTURE
- ▁WATCHING
- ▁FOLK
- ▁MISTRESS
- IZE
- ▁GROWING
- ▁CAVE
- ▁EVIDENCE
- ▁FINGER
- ▁SEVENTEEN
- ▁MOVING
- EOUS
- ▁DOESN
- ▁COW
- ▁TYPE
- ▁BOIL
- ▁TALE
- ▁DELIVER
- ▁FARM
- ▁MONSIEUR
- ▁GATHERED
- ▁FEELINGS
- ▁RATE
- ▁REMARKED
- ▁PUTTING
- ▁MAT
- ▁CONTRARY
- ▁CRIME
- ▁PLA
- ▁COL
- ▁NEARER
- TES
- ▁CIVIL
- ▁SHAME
- ▁LOOSE
- ▁DISCOVER
- ▁FLAT
- ▁TWICE
- ▁FAIL
- VIS
- ▁UNC
- EA
- ▁EUROPE
- ▁PATIENT
- ▁UNTO
- ▁SUFFER
- ▁PAIR
- ▁TREASURE
- OSE
- ▁EAGER
- ▁FLY
- ▁N
- ▁VAL
- ▁DAN
- ▁SALT
- ▁BORE
- BBE
- ▁ARTHUR
- ▁AFFAIRS
- ▁SLOW
- ▁CONSIST
- ▁DEVIL
- LAN
- ▁AFFECTION
- ▁ENGAGED
- ▁KISS
- ▁YA
- ▁OFFICER
- IFICATION
- ▁LAMP
- ▁PARTS
- HEN
- ▁MILK
- ▁PROCESS
- ▁GIFT
- ▁PULLED
- ▁HID
- ▁RAY
- ▁EXCELLENT
- ▁IMPRESSION
- ▁AUTHORITY
- ▁PROVED
- ▁TELLING
- TTE
- ▁TOWER
- ▁CONSEQUENCE
- ▁FAVOR
- ▁FLEW
- ▁CHARLES
- ISTS
- ▁ADDRESS
- ▁FAMILIAR
- ▁LIMIT
- ▁CONFIDENCE
- ▁RARE
- ▁WEEKS
- ▁WOODS
- ▁INTENTION
- ▁DIRECT
- ▁PERFORM
- ▁SOLEMN
- ▁DISTANT
- ▁IMAGE
- ▁PRESIDENT
- ▁FIRM
- ▁INDIAN
- ▁RANK
- ▁LIKED
- ▁AGREE
- ▁HOUSES
- ▁WIL
- ▁MATTERS
- ▁PRISON
- ▁MODE
- ▁MAJOR
- ▁WORKING
- ▁SLIP
- ▁WEIGHT
- ▁AWARE
- ▁BUSY
- ▁LOOKS
- ▁WOUND
- ▁THOR
- ▁BATH
- ▁EXERCISE
- ▁SIMILAR
- ▁WORE
- ▁AMOUNT
- ▁QUESTIONS
- ▁VIOLENT
- ▁EXCUSE
- ▁ASIDE
- ▁TUR
- ▁DULL
- OF
- ▁EMPEROR
- ▁NEVERTHELESS
- ▁SHOUT
- ▁EXPLAINED
- ▁SIZE
- ▁ACCOMPLISH
- FORD
- CAN
- ▁MISTAKE
- ▁INSTANTLY
- ▁SMOOTH
- ▁STRIKE
- ▁BOB
- ISED
- ▁HORROR
- ▁SCIENCE
- ▁PROTEST
- ▁MANAGE
- ▁OBEY
- ▁NECESSITY
- ▁SPLENDID
- ▁PRESS
- ▁INTERESTING
- ▁RELIGION
- ▁UNKNOWN
- ▁FIERCE
- ▁DISAPPEARED
- ▁HOLY
- ▁HATE
- ▁PLAYED
- ▁LIN
- ▁NATURALLY
- ▁DROVE
- ▁LOUIS
- TIES
- ▁BRAND
- INESS
- RIE
- ▁SHOOT
- ▁CONSENT
- ▁SEATED
- ▁LINES
- GUE
- ▁AGREED
- ▁CIRCLE
- ▁STIR
- ▁STREETS
- ▁TASK
- ▁RID
- ▁PRODUCED
- ▁ACCIDENT
- ▁WITNESS
- ▁LIBERTY
- ▁DETAIL
- ▁MINISTER
- ▁POWERFUL
- ▁SAVAGE
- ▁SIXTEEN
- ▁PRETEND
- ▁COAST
- ▁SQU
- ▁UTTER
- ▁NAMED
- ▁CLEVER
- ▁ADMIT
- ▁COUPLE
- ▁WICKED
- ▁MESSAGE
- ▁TEMPLE
- ▁STONES
- ▁YESTERDAY
- ▁HILLS
- DAY
- ▁SLIGHT
- ▁DIAMOND
- ▁POSSIBLY
- ▁AFFAIR
- ▁ORIGINAL
- ▁HEARING
- ▁WORTHY
- ▁SELL
- NEY
- ICK
- ▁COTTAGE
- ▁SACRIFICE
- ▁PROGRESS
- ▁SHOCK
- ▁DESIGN
- ▁SOUGHT
- ▁PIT
- ▁SUNDAY
- ▁OTHERWISE
- ▁CABIN
- ▁PRAYER
- ▁DWELL
- ▁GAIN
- ▁BRIDGE
- ▁PARTICULARLY
- ▁YIELD
- ▁TREAT
- RIGHT
- ▁OAK
- ▁ROPE
- WIN
- ▁ORDERS
- ▁SUSPECT
- ▁EDWARD
- AB
- ▁ELEVEN
- ▁TEETH
- ▁OCCURRED
- DDING
- ▁AMERICA
- ▁FALLING
- ▁LION
- ▁DEPART
- ▁KEEPING
- ▁DEMAND
- ▁PAUSED
- ▁CEASED
- INA
- ▁FUN
- ▁CHEER
- ▁PARDON
- ▁NATIVE
- LUS
- LOW
- ▁DOGS
- ▁REQUIRED
- ILITY
- ▁ELECT
- ▁ENTERTAIN
- ITUDE
- ▁HUGE
- ▁CARRYING
- ▁BLU
- ▁INSIST
- ▁SATISFACTION
- ▁HUNT
- ▁COUNTENANCE
- ▁UPPER
- ▁MAIDEN
- ▁FAILED
- ▁JAMES
- ▁FOREIGN
- ▁GATHER
- ▁TEST
- BOARD
- ▁TERMS
- ▁SILK
- ▁BEG
- ▁BROTHERS
- ▁PAGE
- ▁KNEES
- ▁SHOWN
- ▁PROFESSOR
- ▁MIGHTY
- ▁DEFI
- ▁CHARM
- ▁REQUIRE
- ▁LOG
- MORE
- ▁PROOF
- ▁POSSESSED
- ▁SOFTLY
- ▁UNFORTUNATE
- ▁PRICE
- ▁SEVERE
- ▁SINGING
- ▁STAGE
- ▁FREEDOM
- ▁SHOUTED
- ▁FARTHER
- ▁MAJESTY
- ▁PREVIOUS
- ▁GUIDE
- ▁MATCH
- ▁CHEST
- ▁INTENDED
- ▁BI
- ▁EXCITEMENT
- ▁OFFICERS
- ▁SUR
- ▁SHAKE
- ▁SENTIMENT
- ▁GENTLY
- ▁SUCCEEDED
- ▁MENTION
- ▁LOCK
- ▁ACQUAINTANCE
- ▁IMAGINATION
- ▁PHYSICAL
- ▁LEADING
- ▁SLAVE
- ▁CART
- ▁POINTED
- ▁STEAM
- ▁SHADE
- ▁PIPE
- ▁BASE
- ▁INVENT
- ▁ALAS
- ▁WORKED
- ▁REGRET
- ▁BUR
- ▁FAITHFUL
- ▁MENTIONED
- ▁RECORD
- ▁COMPLAIN
- ▁SUPERIOR
- ▁BAY
- ▁PAL
- EMENT
- UE
- ▁SEVENTY
- ▁HOTEL
- ▁SHEEP
- ▁MEAL
- ▁ADVICE
- ▁HIDDEN
- ▁DEMANDED
- ▁CONSCIOUS
- ▁BROW
- ▁POSSESS
- ▁FOURTH
- ▁EVENTS
- ▁FRI
- ▁PRAISE
- ▁ADVANCED
- ▁RESOLVED
- ▁STUFF
- ▁CHEERFUL
- ▁BIRTH
- ▁GRIEF
- ▁AFFORD
- ▁FAIRY
- ▁WAKE
- ▁SIDES
- ▁SUBSTANCE
- ▁ARTICLE
- ▁LEVEL
- ▁MIST
- ▁JOINED
- ▁PRACTICAL
- ▁CLEARLY
- ▁TRACE
- ▁AWAKE
- ▁OBSERVE
- ▁BASKET
- ▁LACK
- VILLE
- ▁SPIRITS
- ▁EXCITED
- ▁ABANDON
- ▁SHINING
- ▁FULLY
- ▁CALLING
- ▁CONSIDERABLE
- ▁SPRANG
- ▁MILE
- ▁DOZEN
- ▁PEA
- ▁DANGEROUS
- ▁WIT
- ▁JEW
- ▁POUNDS
- ▁FOX
- ▁INFORMATION
- ▁LIES
- ▁DECK
- NNY
- ▁PAUL
- ▁STARS
- ▁ANGER
- ▁SETTLE
- ▁WILLING
- ▁ADAM
- ▁FACES
- ▁SMITH
- ▁IMPORTANCE
- ▁STRAIN
- WAR
- ▁SAM
- ▁FEATHER
- ▁SERVED
- ▁AUTHOR
- ▁PERCEIVED
- ▁FLAME
- ▁DIVINE
- ▁TRAIL
- ▁ANYBODY
- ▁SIGH
- ▁DELICATE
- KY
- ▁FOLD
- ▁HAVEN
- ▁DESIRED
- ▁CURIOSITY
- ▁PRACTICE
- ▁CONSIDERATION
- ▁ABSOLUTELY
- ▁CITIZEN
- ▁BOTTLE
- ▁INTERESTED
- ▁MEAT
- ▁OCCUPIED
- ▁CHOOSE
- ▁THROAT
- ETTE
- ▁CANDLE
- ▁DAWN
- ▁PROTECT
- ▁SENTENCE
- IED
- ▁ROCKS
- ▁PORTION
- ▁APPARENTLY
- ▁PRESENTED
- ▁TIGHT
- ▁ACTUALLY
- ▁DYING
- ▁HAM
- ▁DAILY
- ▁SUFFERED
- ▁POLITICAL
- ▁BODIES
- ▁MODERN
- ▁COMPLETELY
- ▁SOONER
- TAN
- ▁PROP
- ▁ADVANCE
- ▁REFUSED
- ▁FARMER
- ▁POLITE
- ▁THUNDER
- ▁BRIEF
- ▁ELSIE
- ▁SAILOR
- ▁SUGGESTED
- ▁PLATE
- ▁AID
- ▁FLESH
- ▁WEEP
- ▁BUCK
- ▁ANTI
- ▁OCEAN
- ▁SPEND
- WELL
- ▁ODD
- ▁GOVERNOR
- ▁ENTRANCE
- ▁SUSPICION
- ▁STEPPED
- ▁RAPIDLY
- ▁CHECK
- ▁HIDE
- ▁FLIGHT
- ▁CLUB
- ▁ENTIRE
- ▁INDIANS
- ASH
- ▁CAPITAL
- ▁MAMMA
- HAR
- ▁CORRECT
- ▁CRACK
- ▁SENSATION
- ▁WORST
- ▁PACE
- ▁MIDST
- ▁AUGUST
- ▁PROPORTION
- ▁INNOCENT
- LINESS
- ▁REGARDED
- ▁DRIVEN
- ORD
- ▁HASTE
- ▁EDUCATION
- ▁EMPLOY
- ▁TRULY
- ▁INSTRUMENT
- ▁MAG
- ▁FRAME
- ▁FOOLISH
- ▁TAUGHT
- ▁HANG
- ▁ARGUMENT
- ▁NINETEEN
- ▁ELDER
- ▁NAY
- ▁NEEDED
- ▁NEIGHBOR
- ▁INSTRUCT
- ▁PAPERS
- ▁REWARD
- ▁EQUALLY
- ▁FIELDS
- ▁DIG
- HIN
- ▁CONDITIONS
- JA
- ▁SPAR
- ▁REQUEST
- ▁WORN
- ▁REMARKABLE
- ▁LOAD
- ▁WORSHIP
- ▁PARK
- ▁KI
- ▁INTERRUPTED
- ▁SKILL
- ▁TERM
- LAC
- ▁CRITIC
- ▁DISTRESS
- ▁BELIEF
- ▁STERN
- IGHT
- ▁TRACK
- ▁HUNTING
- ▁JEWEL
- ▁GRADUALLY
- ▁GLOW
- ▁RUSHED
- ▁MENTAL
- ▁VISITOR
- ▁PICKED
- ▁BEHOLD
- ▁EXPRESSED
- ▁RUB
- ▁SKI
- ARTAGNAN
- ▁MOREOVER
- ▁OPERATION
- ▁CAREFUL
- ▁KEEN
- ▁ASSERT
- ▁WANDER
- ▁ENEMIES
- ▁MYSTERIOUS
- ▁DEPTH
- ▁PREFER
- ▁CROSSED
- ▁CHARMING
- ▁DREAD
- ▁FLOUR
- ▁ROBIN
- ▁TRE
- ▁RELIEF
- ▁INQUIRED
- ▁APPLE
- ▁HENCE
- ▁WINGS
- ▁CHOICE
- ▁JUD
- OO
- ▁SPECIES
- ▁DELIGHTED
- IUM
- ▁RAPID
- ▁APPEAL
- ▁FAMOUS
- ▁USEFUL
- ▁HELEN
- ▁NEWSPAPER
- ▁PLENTY
- ▁BEARING
- ▁NERVOUS
- ▁PARA
- ▁URGE
- ▁ROAR
- ▁WOUNDED
- ▁CHAIN
- ▁PRODUCE
- ▁REFLECTION
- ▁MERCHANT
- ▁QUARREL
- ▁GLORY
- ▁BEGUN
- ▁BARON
- CUS
- ▁QUEER
- ▁MIX
- ▁GAZE
- ▁WHISPER
- ▁BURIED
- ▁DIV
- ▁CARD
- ▁FREQUENTLY
- ▁TIP
- ▁KNEE
- ▁REGION
- ▁ROOT
- ▁LEST
- ▁JEALOUS
- CTOR
- ▁SAVED
- ▁ASKING
- ▁TRIP
- QUA
- ▁UNION
- HY
- ▁COMPANIONS
- ▁SHIPS
- ▁HALE
- ▁APPROACHED
- ▁HARRY
- ▁DRUNK
- ▁ARRIVAL
- ▁SLEPT
- ▁FURNISH
- HEAD
- ▁PIG
- ▁ABSENCE
- ▁PHIL
- ▁HEAP
- ▁SHOES
- ▁CONSCIOUSNESS
- ▁KINDLY
- ▁EVIDENT
- ▁SCAR
- ▁DETERMIN
- ▁GRASP
- ▁STEAL
- ▁OWE
- ▁KNIFE
- ▁PRECIOUS
- ▁ELEMENT
- ▁PROCEEDED
- ▁FEVER
- ▁LEADER
- ▁RISK
- ▁EASE
- ▁GRIM
- ▁MOUNT
- ▁MEANWHILE
- ▁CENTURY
- OON
- ▁JUDGMENT
- ▁AROSE
- ▁VISION
- ▁SPARE
- ▁EXTREME
- ▁CONSTANT
- ▁OBSERVATION
- ▁THRUST
- ▁DELAY
- ▁CENT
- ▁INCLUD
- ▁LIFT
- ▁ADMIRE
- ▁ISSUE
- ▁FRIENDSHIP
- ▁LESSON
- ▁PRINCIPAL
- ▁MOURN
- ▁ACCEPTED
- ▁BURNING
- ▁CAPABLE
- ▁EXTRAORDINARY
- ▁SANG
- ▁REMOVED
- ▁HOPED
- ▁HORN
- ▁ALICE
- ▁MUD
- ▁APARTMENT
- ▁FIGHTING
- ▁BLAME
- ▁TREMBLING
- ▁SOMEBODY
- ▁ANYONE
- ▁BRIDE
- ▁READER
- ▁ROB
- ▁EVERYWHERE
- ▁LABOUR
- ▁RECALL
- ▁BULL
- ▁HIT
- ▁COUNCIL
- ▁POPULAR
- ▁CHAP
- ▁TRIAL
- ▁DUN
- ▁WISHES
- ▁BRILLIANT
- ▁ASSURED
- ▁FORGOT
- ▁CONTINUE
- ▁ACKNOWLEDG
- ▁RETREAT
- ▁INCREASED
- ▁CONTEMPT
- ▁GRANDFATHER
- ▁SYMPATHY
- ▁GHOST
- ▁STRETCHED
- ▁CREATURES
- ▁CAB
- ▁HIND
- ▁PLAYING
- ▁MISERABLE
- ▁MEMBERS
- ▁KINDNESS
- ▁HIGHEST
- ▁PRIM
- ▁KISSED
- ▁DESERVE
- ▁HUT
- ▁BEGGED
- ▁EIGHTY
- ▁CLOSELY
- ▁WONDERED
- ▁MILITARY
- ▁REMIND
- ▁ACCORDINGLY
- ▁LARGER
- ▁MAINTAIN
- ▁ENGINE
- ▁MOTIVE
- ▁DESTROY
- ▁STRIP
- ▁HANS
- ▁AHEAD
- ▁INFINITE
- ▁PROMPT
- ▁INFORMED
- TTLE
- ▁PEER
- ▁PRESSED
- ▁TRAP
- ▁SOMEWHERE
- ▁BOUGHT
- ▁VISIBLE
- ▁ASHAMED
- ▁TEAR
- ▁NEIGHBOUR
- ▁CONSTITUTION
- ▁INTELLIGENCE
- ▁PROFESSION
- ▁HUNGRY
- RIDGE
- ▁SMELL
- ▁STORIES
- ▁LISTENING
- ▁APPROACH
- ▁STRING
- ▁EXPLANATION
- ▁IMMENSE
- ▁RELIGIOUS
- ▁THROUGHOUT
- ▁HOLLOW
- ▁AWAIT
- ▁FLYING
- ▁SCREAM
- ▁ACTIVE
- ▁RUM
- ▁PRODUCT
- ▁UNHAPPY
- ▁VAGUE
- ARIES
- ▁ELIZABETH
- ▁STUPID
- ▁DIGNITY
- ▁ISABEL
- GAR
- ▁BRO
- ▁PITCH
- ▁COMRADE
- ▁STIFF
- ▁RECKON
- ▁SOLD
- ▁SPARK
- ▁STRO
- ▁CRYING
- ▁MAGIC
- ▁REPEAT
- PORT
- ▁MARKED
- ▁COMFORTABLE
- ▁PROJECT
- ▁BECOMING
- ▁PARENTS
- ▁SHELTER
- ▁STOLE
- ▁HINT
- ▁NEST
- ▁TRICK
- ▁THOROUGHLY
- ▁HOSPITAL
- ▁WEAPON
- ▁ROME
- ▁STYLE
- ▁ADMITTED
- ▁SAFETY
- FIELD
- ▁UNDERSTANDING
- ▁TREMBLE
- ▁PRINT
- ▁SLAVES
- ▁WEARY
- ▁ARTIST
- ▁CREDIT
- BURG
- ▁CONCLUSION
- ▁SELDOM
- ▁UNUSUAL
- ▁CLOUDS
- ▁UNABLE
- ▁GAY
- ▁HANGING
- ▁SCR
- ▁BOWED
- ▁DAVID
- ▁VOL
- ▁PUSHED
- ▁ESCAPED
- MOND
- ▁WARN
- ▁BETRAY
- ▁EGGS
- ▁PLAINLY
- ▁EXHIBIT
- ▁DISPLAY
- ▁MEMBER
- ▁GRIN
- ▁PROSPECT
- ▁BRUSH
- ▁BID
- ▁SUCCESSFUL
- ▁EXTENT
- ▁PERSUADE
- ▁MID
- ▁MOOD
- ▁ARRANGED
- ▁UNIVERSAL
- ▁JIM
- ▁SIGNAL
- ▁WHILST
- ▁PHILIP
- ▁WOLF
- RATE
- ▁EAGERLY
- ▁BILLY
- ▁RETURNING
- ▁CONSCIENCE
- ▁FORTUNATE
- ▁FEMALE
- ▁GLEAM
- ▁HASTILY
- ▁PROVIDED
- ▁OBTAIN
- ▁INSTINCT
- ▁CONCERNED
- ▁CONCERNING
- ▁SOMEHOW
- ▁PINK
- ▁RAGE
- ▁ACCUSTOMED
- ▁UNCONSCIOUS
- ▁ADVISE
- ▁BRANCHES
- ▁TINY
- ▁REFUSE
- ▁BISHOP
- ▁SUPPLY
- ▁PEASANT
- ▁LAWYER
- ▁WASTE
- ▁CONNECTION
- ▁DEVELOP
- ▁CORRESPOND
- ▁PLUM
- ▁NODDED
- ▁SLIPPED
- ▁EU
- ▁CONSTANTLY
- CUM
- MMED
- ▁FAIRLY
- HOUSE
- ▁KIT
- ▁RANG
- ▁FEATURES
- ▁PAUSE
- ▁PAINFUL
- ▁JOE
- ▁WHENCE
- ▁LAUGHTER
- ▁COACH
- ▁CHRISTMAS
- ▁EATING
- ▁WHOLLY
- ▁APART
- ▁SUPER
- ▁REVOLUTION
- ▁LONELY
- ▁CHEEKS
- ▁THRONE
- ▁CREW
- ▁ATTAIN
- ▁ESTABLISHED
- TIME
- ▁DASH
- ▁FRIENDLY
- ▁OPERA
- ▁EARL
- ▁EXHAUST
- ▁CLIFF
- ▁REVEAL
- ▁ADOPT
- ▁CENTRE
- ▁MERRY
- ▁SYLVIA
- ▁IDEAL
- ▁MISFORTUNE
- ▁FEAST
- ▁ARAB
- ▁NUT
- ▁FETCH
- ▁FOUGHT
- ▁PILE
- ▁SETTING
- ▁SOURCE
- ▁PERSIST
- ▁MERCY
- ▁BARK
- ▁LUC
- ▁DEEPLY
- ▁COMPARE
- ▁ATTITUDE
- ▁ENDURE
- ▁DELIGHTFUL
- ▁BEARD
- ▁PATIENCE
- ▁LOCAL
- ▁UTTERED
- ▁VICTORY
- ▁TREATED
- ▁SEPARATE
- ▁WAG
- ▁DRAGG
- ▁TITLE
- ▁TROOPS
- ▁TRIUMPH
- ▁REAR
- ▁GAINED
- ▁SINK
- ▁DEFEND
- ▁TIED
- ▁FLED
- ▁DARED
- ▁INCREASE
- ▁POND
- ▁CONQUER
- ▁FOREHEAD
- ▁FAN
- ▁ANXIETY
- ▁ENCOUNTER
- ▁SEX
- ▁HALT
- ▁SANK
- ▁CHEEK
- ▁HUMBLE
- ▁WRITER
- ▁EMPLOYED
- ▁DISTINGUISHED
- ▁RAISE
- ▁WHIP
- ▁GIANT
- ▁RANGE
- ▁OBTAINED
- ▁FLAG
- ▁MAC
- ▁JUMPED
- ▁DISCOVERY
- ▁NATIONAL
- ▁COMMISSION
- ▁POSITIVE
- ▁LOVING
- ▁EXACT
- ▁MURMURED
- ▁GAZED
- ▁REFER
- ▁COLLEGE
- ▁ENCOURAGE
- ▁NOVEL
- ▁CLOCK
- ▁MORTAL
- ▁ROLLED
- ▁RAT
- IZING
- ▁GUILTY
- ▁VICTOR
- WORTH
- ▁PRA
- ▁APPROACHING
- ▁RELATIVE
- ▁ESTATE
- ▁UGLY
- ▁METAL
- ▁ROBERT
- ▁TENT
- ▁ADMIRATION
- ▁FOURTEEN
- ▁BARBAR
- ▁WITCH
- ELLA
- ▁CAKE
- ▁SHONE
- ▁MANAGED
- ▁VOLUME
- ▁GREEK
- ▁DANCING
- ▁WRETCHED
- ▁CONDEMN
- ▁MAGNIFICENT
- ▁CONSULT
- J
- ▁ORGAN
- ▁FLEET
- ▁ARRANGEMENT
- ▁INCIDENT
- ▁MISERY
- ▁ARROW
- ▁STROKE
- ▁ASSIST
- ▁BUILD
- ▁SUCCEED
- ▁DESPERATE
- ▁WIDOW
- UDE
- ▁MARKET
- ▁WISDOM
- ▁PRECISE
- ▁CURRENT
- ▁SPOIL
- ▁BADE
- ▁WOODEN
- ▁RESIST
- ▁OBVIOUS
- ▁SENSIBLE
- FALL
- ▁ADDRESSED
- ▁GIL
- ▁COUNSEL
- ▁PURCHASE
- ▁SELECT
- ▁USELESS
- ▁STARED
- ▁ARREST
- ▁POISON
- ▁FIN
- ▁SWALLOW
- ▁BLOCK
- ▁SLID
- ▁NINETY
- ▁SPORT
- ▁PROVIDE
- ▁ANNA
- ▁LAMB
- ▁INTERVAL
- ▁JUMP
- ▁DESCRIBED
- ▁STRIKING
- ▁PROVISION
- ▁PROPOSED
- ▁MELANCHOLY
- ▁WARRIOR
- ▁SUGGEST
- ▁DEPARTURE
- ▁BURDEN
- ▁LIMB
- ▁TROUBLED
- ▁MEADOW
- ▁SACRED
- ▁SOLID
- ▁TRU
- ▁LUCY
- ▁RECOVER
- ▁ENERGY
- ▁POWDER
- ▁RESUMED
- ▁INTENSE
- ▁BRITISH
- ▁STRAW
- ▁AGREEABLE
- ▁EVERYONE
- ▁CONCERN
- ▁VOYAGE
- ▁SOUTHERN
- ▁BOSOM
- ▁UTTERLY
- ▁FEED
- ▁ESSENTIAL
- ▁CONFINE
- ▁HOUSEHOLD
- ▁EXTREMELY
- ▁WONDERING
- ▁LIST
- ▁PINE
- PHA
- ▁EXPERIMENT
- ▁JOSEPH
- ▁MYSTERY
- ▁RESTORE
- ▁BLUSH
- FOLD
- ▁CHOSEN
- ▁INTELLECT
- ▁CURTAIN
- OLOGY
- ▁MOUNTED
- ▁LAP
- ▁EPI
- ▁PUNISH
- ▁WEDDING
- ▁RECOGNIZED
- ▁DRIFT
- ▁PREPARATION
- ▁RESOLUTION
- ▁OPPRESS
- ▁FIX
- ▁VICTIM
- OGRAPH
- ▁SUMMON
- ▁JULIA
- ▁FLOOD
- ▁WAL
- ULATION
- ▁SLIGHTLY
- ▁LODGE
- ▁WIRE
- ▁CONFUSION
- ▁UNEXPECTED
- ▁CONCEIVE
- ▁PRIZE
- ▁JESUS
- ▁ADDITION
- ▁RUDE
- ▁FATAL
- ▁CARELESS
- ▁PATCH
- ▁KO
- ▁CATHERINE
- ▁PARLIAMENT
- ▁PROFOUND
- ▁ALOUD
- ▁RELIEVE
- ▁PUSH
- ABILITY
- ▁ACCOMPANIED
- ▁SOVEREIGN
- ▁SINGULAR
- ▁ECHO
- ▁COMPOSED
- ▁SHAKING
- ATORY
- ▁ASSISTANCE
- ▁TEACHER
- ▁HORRIBLE
- ▁STRICT
- ▁VERSE
- ▁PUNISHMENT
- ▁GOWN
- ▁MISTAKEN
- ▁VARI
- ▁SWEPT
- ▁GESTURE
- ▁BUSH
- ▁STEEL
- ▁AFFECTED
- ▁DIRECTED
- ▁SURROUNDED
- ▁ABSURD
- ▁SUGAR
- ▁SCRAP
- ▁IMMEDIATE
- ▁SADDLE
- ▁TY
- ▁ARISE
- ▁SIGHED
- ▁EXCHANGE
- ▁IMPATIENT
- ▁SNAP
- ▁EMBRACE
- ▁DISEASE
- ▁PROFIT
- ▁RIDING
- ▁RECOVERED
- ▁GOVERN
- ▁STRETCH
- ▁CONVINCED
- ▁LEANING
- ▁DOMESTIC
- ▁COMPLEX
- ▁MANIFEST
- ▁INDULGE
- ▁GENIUS
- ▁AGENT
- ▁VEIL
- ▁DESCRIPTION
- ▁INCLINED
- ▁DECEIVE
- ▁DARLING
- ▁REIGN
- HU
- ▁ENORMOUS
- ▁RESTRAIN
- ▁DUTIES
- BURY
- TTERED
- ▁POLE
- ▁ENABLE
- ▁EXCEPTION
- ▁INTIMATE
- ▁COUNTESS
- ▁TRIBE
- ▁HANDKERCHIEF
- ▁MIDNIGHT
- ▁PROBLEM
- ▁TRAMP
- ▁OIL
- CAST
- ▁CRUSH
- ▁DISCUSS
- ▁RAM
- ▁TROT
- ▁UNRE
- ▁WHIRL
- ▁LOCKED
- ▁HORIZON
- ▁OFFICIAL
- ▁SCHEME
- ▁DROWN
- ▁PIERRE
- ▁PERMITTED
- ▁CONNECTED
- ▁ASSURE
- ▁COCK
- ▁UTMOST
- ▁DEVOTED
- ▁RELI
- ▁SUFFICIENTLY
- ▁INTELLECTUAL
- ▁CARPET
- ▁OBJECTION
- ▁AFTERWARD
- ▁REALITY
- ▁NEGRO
- ▁RETAIN
- ▁ASCEND
- ▁CEASE
- ▁KATE
- ▁MARVEL
- KO
- ▁BOND
- MOST
- ▁COAL
- GATE
- ▁IGNORANT
- ▁BREAKING
- ▁TWIN
- ▁ASTONISHMENT
- ▁COFFEE
- ▁JAR
- ▁CITIES
- ▁ORIGIN
- ▁EXECUT
- ▁FINAL
- ▁INHABITANTS
- ▁STABLE
- ▁CHIN
- ▁PARTIES
- ▁PLUNGE
- ▁GENEROUS
- ▁DESCRIBE
- ▁ANNOUNCED
- ▁MERIT
- ▁REVERE
- ▁ERE
- ACIOUS
- ZI
- ▁DISAPPOINT
- ▁SUGGESTION
- ▁DOUBTLESS
- ▁TRUNK
- ▁STAMP
- ▁JOB
- ▁APPOINTED
- ▁DIVIDED
- ▁ACQUAINTED
- CHI
- ▁ABSOLUTE
- ▁FEARFUL
- ▁PRIVILEGE
- ▁CRAFT
- ▁STEEP
- ▁HUNTER
- ▁FORBID
- ▁MODEST
- ▁ENDEAVOUR
- ▁SWEEP
- ▁BEHELD
- ▁ABSORB
- ▁CONSTRUCT
- ▁EMPIRE
- ▁EXPEDITION
- ▁ERECT
- ▁OFFEND
- ▁INTEND
- ▁PERMIT
- ▁DESTROYED
- ▁CONTRACT
- ▁THIRST
- ▁WAGON
- ▁EVA
- ▁GLOOM
- ▁ATMOSPHERE
- ▁RESERVE
- ▁VOTE
- ▁GER
- ▁NONSENSE
- ▁PREVAIL
- ▁QUALITY
- ▁CLASP
- ▁CONCLUDED
- ▁RAP
- ▁KATY
- ▁ETERNAL
- ▁MUTTERED
- ▁NEGLECT
- ▁SQUIRE
- ▁CREEP
- LOCK
- ▁ELECTRIC
- ▁HAY
- ▁EXPENSE
- ▁SCORN
- ▁RETIRED
- ▁STOUT
- ▁MURMUR
- ▁SHARPLY
- ▁DISTRICT
- ▁LEAF
- ▁FAILURE
- WICK
- ▁JEAN
- ▁NUMEROUS
- ▁INFANT
- ▁REALIZED
- ▁TRAVELLER
- ▁HUNGER
- ▁JUNE
- ▁MUN
- ▁RECOMMEND
- ▁CREP
- ZZLE
- ▁RICHARD
- WORK
- ▁MONTE
- ▁PREACH
- ▁PALM
- AVI
- ▁ANYWHERE
- ▁DISPOSITION
- ▁MIRROR
- ▁VENTURE
- ▁POUND
- ▁CIGAR
- ▁INVITED
- ▁BENCH
- ▁PROTECTION
- ▁BENEFIT
- ▁THOMAS
- ▁CLERK
- ▁REPROACH
- ▁UNIFORM
- ▁GENERATION
- ▁SEAL
- ▁COMPASS
- ▁WARNING
- ▁EXTENDED
- ▁DIFFICULTIES
- ▁MAYBE
- ▁GROAN
- ▁AFFECT
- ▁COMB
- ▁EARN
- ▁WESTERN
- ▁IDLE
- ▁SCORE
- ▁TAP
- ▁ASTONISHED
- ▁INTRODUCED
- ▁LEISURE
- ▁LIEUTENANT
- ▁VIOLENCE
- ▁FIRMLY
- ▁MONSTER
- ▁UR
- ▁PROPERLY
- ▁TWIST
- ▁PIRATE
- ▁ROBBER
- ▁BATTER
- ▁WEPT
- ▁LEANED
- ▁FOG
- ▁ORNAMENT
- ▁ANDREW
- ▁BUSHES
- ▁REPUBLIC
- ▁CONFIDENT
- ▁LEAN
- ▁DART
- ▁STOOP
- ▁CURL
- ▁COUNTER
- ▁NORTHERN
- ▁PEARL
- ▁NEAREST
- ▁FRANCIS
- ▁WANDERING
- ▁FREQUENT
- ▁STARTLED
- ▁STATEMENT
- ▁OCCUR
- ▁BLOOM
- ▁NERVE
- ▁INSPECT
- ▁INDUCE
- ▁FLATTER
- ▁DATE
- ▁AMBITION
- ▁SLOPE
- ▁MALE
- ▁MADAM
- ▁MONK
- ▁RENT
- ▁CONFIRM
- ▁INVESTIGAT
- ▁RABBIT
- ▁REGIMENT
- ▁SUBMIT
- ▁SPELL
- ▁FURIOUS
- ▁RAIL
- ▁BESTOW
- ▁RALPH
- ▁SCATTERED
- ▁COMPELLED
- ▁THREAD
- ▁CHILL
- ▁DENY
- ▁PRONOUNC
- ▁MANKIND
- ▁CATTLE
- ▁EXECUTION
- ▁REBEL
- ▁SUPREME
- ▁VALUABLE
- ▁LIKEWISE
- ▁CONVEY
- ▁TIDE
- ▁GLOOMY
- ▁COIN
- ▁ACTUAL
- ▁TAX
- ▁PROVINCE
- ▁GRATEFUL
- ▁SPIRITUAL
- ▁VANISHED
- ▁DIANA
- ▁HAUNT
- ▁DRAGON
- ▁CRAWL
- ▁CHINA
- ▁GRATITUDE
- ▁NEAT
- ▁FINISH
- ▁INTENT
- ▁FRIGHT
- ▁EMBARRASS
- ▁THIRTEEN
- ▁RUTH
- ▁SLIGHTEST
- ▁DEVELOPMENT
- ▁INTERVIEW
- ▁SPECTACLE
- ▁BROOK
- VIE
- ▁WEAKNESS
- ▁AUDIENCE
- ▁CONSEQUENTLY
- ▁ABROAD
- ▁ASPECT
- ▁PAINTED
- ▁RELEASE
- ▁INSULT
- ▁SOOTH
- ▁DISAPPOINTMENT
- ▁EMERG
- ▁BRIG
- ▁ESTEEM
- ▁INVITATION
- ▁PASSENGER
- ▁PUBLISH
- ▁PIANO
- ▁IRISH
- ▁DESK
- ▁BEATEN
- ▁FIFTH
- ▁IMPULSE
- ▁SWEAR
- ▁EATEN
- ▁PURPLE
- ▁COMMITTED
- ▁COUNTRIES
- ▁PERCEIVE
- ISON
- ▁CELEBRAT
- ▁GRANDMOTHER
- ▁SHUDDER
- ▁SUNSHINE
- ▁SPANISH
- ▁HITHERTO
- ▁MARILLA
- ▁SNAKE
- ▁MOCK
- ▁INTERFERE
- ▁WALTER
- ▁AMID
- ▁MARBLE
- ▁MISSION
- TERIOR
- ▁DRIVING
- ▁FURNITURE
- ▁STEADY
- ▁CIRCUMSTANCE
- ▁INTERPRET
- ▁ENCHANT
- ▁ERROR
- ▁CONVICTION
- ▁HELPLESS
- ▁MEDICINE
- ▁QUALITIES
- ▁ITALIAN
- ▁HASTENED
- ▁OCCASIONALLY
- ▁PURSUED
- ▁HESITATED
- ▁INDEPENDENT
- ▁OLIVER
- ▁LINGER
- UX
- ▁EXAMINED
- ▁REPENT
- ▁PHYSICIAN
- ▁CHASE
- ▁BELOVED
- ▁ATTACHED
- ▁FLORENCE
- ▁HONEY
- ▁MOUSE
- ▁CRIES
- ▁BAKE
- ▁POEM
- ▁DESTRUCTION
- ▁FULFIL
- ▁MESSENGER
- ▁TRISTRAM
- ▁FANCIED
- ▁EXCESS
- ▁CURSE
- ▁CHU
- ▁QUANTITY
- ▁THORNTON
- ▁CREATED
- ▁CONTINUALLY
- ▁LIGHTNING
- ▁BORNE
- ▁TOTAL
- ▁DISPOSED
- ▁RIFLE
- ▁POLLY
- ▁GOAT
- ▁BACKWARD
- ▁VIRGINIA
- ▁KICK
- ▁PERIL
- ▁QUO
- ▁GLORIOUS
- ▁MULTITUDE
- ▁LEATHER
- ▁ABSENT
- ▁DEMON
- ▁DEBT
- ▁TORTURE
- ▁ACCORD
- ▁MATE
- ▁CATHOLIC
- ▁PILL
- ▁LIBRARY
- ▁PURSUIT
- ▁SHIRT
- ▁DEAREST
- ▁COLLAR
- ▁BEACH
- ▁ROBE
- ▁DECLARE
- ▁BRANCH
- ▁TEMPT
- ▁STEADILY
- ▁DISGUST
- ▁SILLY
- ▁ARRIVE
- ▁DRANK
- ▁LEVI
- ▁COMMUNICAT
- ▁RACHEL
- ▁WASHINGTON
- ▁RESIGN
- ▁MEANTIME
- ▁LACE
- ▁ENGAGEMENT
- ▁QUIVER
- ▁SEPARATED
- ▁DISCUSSION
- ▁VENTURED
- ▁SURROUNDING
- ▁POLISH
- ▁NAIL
- ▁SWELL
- ▁JOKE
- ▁LINCOLN
- ▁STUDENT
- ▁GLITTER
- ▁RUSSIAN
- ▁READILY
- ▁CHRIS
- ▁POVERTY
- ▁DISGRACE
- ▁CHEESE
- ▁HEAVILY
- ▁SCALE
- ▁STAFF
- ▁ENTREAT
- ▁FAREWELL
- ▁LUNCH
- ▁PEEP
- ▁MULE
- ▁SOMEONE
- ▁DISAPPEAR
- ▁DECISION
- ▁PISTOL
- ▁PUN
- ▁SPUR
- ▁ASSUMED
- ▁EXTEND
- ▁ENTHUSIASM
- ▁DEFINITE
- ▁UNDERTAKE
- ▁COMMITTEE
- ▁SIMON
- ▁FENCE
- ▁APPLIED
- ▁RELATED
- ▁VICE
- ▁UNPLEASANT
- ▁PROBABLE
- ▁PROCURE
- ▁FROWN
- ▁CLOAK
- ▁HUMANITY
- ▁FAMILIES
- ▁PHILOSOPHER
- ▁DWARF
- ▁OVERCOME
- ▁DEFEAT
- ▁FASTENED
- ▁MARSH
- ▁CLASSES
- ▁TOMB
- ▁GRACIOUS
- ▁REMOTE
- ▁CELL
- ▁SHRIEK
- ▁RESCUE
- ▁POOL
- ▁ORGANIZ
- ▁CHOSE
- ▁CUTTING
- ▁COWARD
- ▁BORDER
- ▁DIRTY
- ▁MONKEY
- ▁HOOK
- ▁CHUCK
- ▁EMILY
- ▁JEST
- ▁PLAC
- ▁WEIGH
- ▁ASSOCIATE
- ▁GLIMPSE
- ▁STUCK
- ▁BOLT
- ▁MURDERER
- ▁PONY
- ▁DISTINGUISH
- ▁INSTITUTION
- ▁CUNNING
- ▁COMPLIMENT
- ▁APPETITE
- ▁REPUTATION
- ▁FEEBLE
- ▁KIN
- ▁SERIES
- ▁GRACEFUL
- ▁PLATFORM
- ▁BREEZE
- ▁PHRASE
- ▁CLAY
- MONT
- ▁RATTL
- ▁OPPOSITION
- ▁LANE
- ▁BOAST
- ▁GROWTH
- ▁INCLINATION
- ▁BEHAVE
- ▁SUSAN
- ▁DISTINCTION
- ▁DISLIKE
- ▁NICHOLAS
- ▁SATISFY
- ▁DRAMA
- ▁ELBOW
- ▁GAZING
- ▁CONSUM
- ▁SPIN
- ▁OATH
- ▁CHANNEL
- ▁CHARACTERISTIC
- ▁SPEAR
- ▁SLAIN
- ▁SAUCE
- ▁FROG
- ▁CONCEPTION
- ▁TIMID
- ▁ZEAL
- ▁APPARENT
- SHIRE
- ▁CENTER
- ▁VARIETY
- ▁DUSK
- ▁APT
- ▁COLUMN
- ▁REVENGE
- ▁RIVAL
- ▁IMITAT
- ▁PASSIONATE
- ▁SELFISH
- ▁NORMAN
- ▁REPAIR
- ▁THRILL
- ▁TREATMENT
- ▁ROSA
- ▁MARTIN
- ▁INDIFFERENT
- ▁THITHER
- ▁GALLANT
- ▁PEPPER
- ▁RECOLLECT
- ▁VINE
- ▁SCARCE
- ▁SHIELD
- ▁MINGLED
- CLOSE
- ▁HARSH
- ▁BRICK
- ▁HUMOR
- ▁MISCHIEF
- ▁TREMENDOUS
- ▁FUNCTION
- ▁SMART
- ▁SULTAN
- ▁DISMISS
- ▁THREATENED
- ▁CHEAP
- ▁FLOCK
- ▁ENDEAVOR
- ▁WHISK
- ▁ITALY
- ▁WAIST
- ▁FLUTTER
- ▁SMOKING
- ▁MONARCH
- ▁AFRICA
- ▁ACCUSE
- ▁HERBERT
- ▁REFRESH
- ▁REJOICE
- ▁PILLOW
- ▁EXPECTATION
- ▁POETRY
- ▁HOPELESS
- ▁PERISH
- ▁PHILOSOPHY
- ▁WHISTLE
- ▁BERNARD
- ▁LAMENT
- ▁IMPROVE
- ▁SUP
- ▁PERPLEX
- ▁FOUNTAIN
- ▁LEAGUE
- ▁DESPISE
- ▁IGNORANCE
- ▁REFERENCE
- ▁DUCK
- ▁GROVE
- ▁PURSE
- ▁PARTNER
- ▁PROPHET
- ▁SHIVER
- ▁NEIGHBOURHOOD
- ▁REPRESENTATIVE
- SAIL
- ▁WIP
- ▁ACQUIRED
- ▁CHIMNEY
- ▁DOCTRINE
- ▁MAXIM
- ▁ANGLE
- ▁MAJORITY
- ▁AUTUMN
- ▁CONFUSED
- ▁CRISTO
- ▁ACHIEVE
- ▁DISGUISE
- ▁REDUCED
- ▁EARLIER
- ▁THEATRE
- ▁DECIDE
- MINATED
- OLOGICAL
- ▁OCCUPATION
- ▁VIGOROUS
- ▁CONTINENT
- ▁DECLINE
- ▁COMMUNITY
- ▁MOTIONLESS
- ▁HATRED
- ▁COMMUNICATION
- ▁BOWL
- ▁COMMENT
- ▁APPROVE
- ▁CEREMONY
- ▁CRIMINAL
- ▁SCIENTIFIC
- ▁DUCHESS
- ▁VIVID
- ▁SHIFT
- ▁AVAIL
- ▁DAMP
- ▁JOHNSON
- ▁SLENDER
- ▁CONTRAST
- ▁AMUSEMENT
- ▁PLOT
- ▁LYN
- ▁ASSOCIATION
- ▁SNATCH
- ▁UNCERTAIN
- ▁PRESSURE
- ▁PERCH
- ▁APPLY
- ▁PLANET
- ▁NOTWITHSTANDING
- ▁SWUNG
- ▁STIRRED
- ▁ATTENDANT
- ▁ENJOYMENT
- ▁WORRY
- ▁ALBERT
- ▁NAKED
- ▁TALENT
- ▁MARIAN
- ▁REFORM
- ▁DELIBERATE
- ▁INTELLIGENT
- ▁SENSITIVE
- ▁YONDER
- ▁PUPIL
- ▁FRIGHTFUL
- ▁DOUBTFUL
- ▁STANDARD
- ▁MAGISTRATE
- ▁SHEPHERD
- ▁STOMACH
- ▁DEPOSIT
- ▁RENEW
- ▁HEDGE
- ▁FRANCS
- ▁POSSIBILITY
- ▁RESEMBLE
- ▁FATIGUE
- ▁PORTRAIT
- ▁FAVORITE
- ▁CREAM
- ▁BURG
- ▁SECRETARY
- ▁DIVERS
- ▁ACTIVITY
- ▁SPECULAT
- ▁HUMOUR
- ▁FITTED
- ▁EXTERNAL
- ▁CETERA
- ▁WRAPPED
- ▁WHIT
- ▁FRED
- ▁EXAMINATION
- ▁LODGING
- ▁OWING
- ▁JAW
- ▁CROW
- ▁BALANCE
- ▁PUFF
- ▁TENDERNESS
- ▁PORTHOS
- ▁ANCHOR
- ▁INTERRUPT
- ▁NECESSARILY
- ▁PERPETUAL
- ▁AGONY
- ▁POPE
- ▁SCHOLAR
- ▁SCOTLAND
- ▁SUPPRESS
- ▁WRATH
- ▁WRECK
- ▁EXCEED
- ▁PERFECTION
- ▁INDIA
- ▁TRADITION
- ▁SECTION
- ▁EASTERN
- ▁DOORWAY
- ▁WIVES
- ▁CONVENTION
- ▁ANNOUNC
- ▁EGYPT
- ▁CONTRADICT
- ▁SCRATCH
- ▁CENTRAL
- ▁GLOVE
- ▁WAX
- ▁PREPARE
- ▁ACCOMPANY
- ▁INCREASING
- ▁LIBERAL
- ▁RAISING
- ▁ORANGE
- ▁SHOE
- ▁ATTRIBUTE
- ▁LITERATURE
- ▁PUZZLED
- ▁WITHDRAW
- ▁WHITHER
- ▁HAWK
- ▁MOONLIGHT
- ▁EXAMINE
- ▁HAPPILY
- ▁PRECEDE
- ▁DETECTIVE
- ▁INCHES
- ▁SOLITARY
- ▁DUTCH
- ▁NAPOLEON
- ▁UNEASY
- ▁CARDINAL
- ▁BLEW
- ▁FOWL
- ▁DECORAT
- ▁CHILDHOOD
- ▁TORMENT
- ▁LOSING
- ▁PERMISSION
- ▁BLANK
- ▁UPSTAIRS
- ▁CAPACITY
- ▁TRIFLE
- ▁FOLLY
- ▁RECOGNIZE
- ▁REMOVE
- ▁VENGEANCE
- ▁ENTERPRISE
- ▁BEDROOM
- ▁ANYHOW
- ▁INQUIRY
- ▁ASHES
- ▁DRAG
- ▁HUSH
- ▁AWKWARD
- ▁SATURDAY
- ▁GENUINE
- ▁SURVIV
- ▁SKIRT
- ▁AFFECTIONATE
- ▁TANG
- ▁MUTUAL
- ▁DISPUTE
- ▁EAGLE
- ▁INCOME
- ▁BIND
- ▁FAME
- ▁IMPROVEMENT
- ROVING
- ▁DIFFER
- ▁AWOKE
- ▁SLEEVE
- ▁SOLITUDE
- ▁FAVOURITE
- JI
- ▁DETECT
- ▁COMPREHEND
- ▁PREPARING
- ▁SERPENT
- ▁SUMMIT
- ▁KNOT
- ▁KNIT
- ▁COPY
- ▁STOPPING
- ▁FADED
- ▁HIDEOUS
- ▁JULIE
- STEAD
- ▁SHINE
- ▁CONFLICT
- ▁PROPOSITION
- ▁REFUGE
- ▁GALLERY
- ▁BUNDLE
- ▁AXE
- ▁SLAVERY
- ▁MASK
- ▁ALYOSHA
- ▁LADDER
- ▁DEPARTMENT
- ▁DISCHARGE
- ▁DEPRESS
- ▁GALLOP
- ▁SCARLET
- ▁KITTY
- ▁RECEIVING
- ▁SURRENDER
- ▁SUSTAIN
- ▁TWILIGHT
- ▁CONGRESS
- ▁IRELAND
- ▁FUNNY
- ▁LEND
- ▁CONSTITUTE
- ▁FUNERAL
- ▁CRYSTAL
- ▁SPAIN
- ▁EXCEEDINGLY
- ▁DAMN
- ▁COMMUN
- ▁CIVILIZATION
- ▁PREJUDICE
- ▁PORCH
- ▁ASSISTANT
- ▁INDUSTRY
- ▁TUMBLE
- ▁DEFENCE
- ▁HITHER
- ▁SMOT
- ▁COLONI
- ▁AMAZEMENT
- ▁MARGUERITE
- ▁MIRACLE
- ▁INHERIT
- ▁BEGGAR
- ▁ENVELOPE
- ▁INDIGNATION
- ▁NATASHA
- ▁PROPOSAL
- ▁FRAGMENT
- ▁ROUSED
- ▁ROAST
- ENCIES
- ▁COMMENCED
- ▁RESOURCE
- ▁POPULATION
- ▁QUOTH
- ▁PURSUE
- ▁EDUCAT
- ▁AFFLICT
- ▁CONTACT
- ▁CRIMSON
- ▁DIVISION
- ▁DISORDER
- ▁COPPER
- ▁SOLICIT
- ▁MODERATE
- ▁DRUM
- ▁SWIM
- ▁SALUTE
- ▁ASSUME
- ▁MUSCLE
- ▁OVERWHELM
- ▁SHAKESPEARE
- ▁STRUGGLING
- ▁TRANQUIL
- ▁CHICKEN
- ▁TREAD
- ▁CLAW
- ▁BIBLE
- ▁RIDGE
- ▁THREAT
- ▁VELVET
- ▁EXPOSED
- ▁IDIOT
- ▁BARREL
- ▁PENNY
- ▁TEMPTATION
- ▁DANGLARS
- ▁CENTURIES
- ▁DISTRIBUT
- ▁REJECT
- ▁RETORTED
- ▁CONCENTRAT
- ▁CORDIAL
- ▁MOTOR
- ▁CANNON
- KEEP
- ▁WRETCH
- ▁ASSURANCE
- ▁THIEF
- ▁SURVEY
- ▁VITAL
- ▁RAILWAY
- ▁JACKSON
- ▁CRASH
- ▁GROWL
- ▁COMBAT
- ▁RECOLLECTION
- ▁SECURITY
- ▁JACOB
- ▁CLUTCH
- ▁BLANKET
- ▁NANCY
- ▁CELLAR
- ▁CONVENIENT
- ▁INDIGNANT
- ▁COARSE
- ▁WORM
- ▁SCREEN
- ▁TRANSPORT
- ▁BULLET
- ▁APPRECIATE
- ▁DEVOTION
- ▁INVISIBLE
- ▁DRIED
- ▁MIXTURE
- ▁CANDID
- ▁PERFORMANCE
- ▁RIPE
- ▁EXQUISITE
- ▁BARGAIN
- ▁TOBACCO
- ▁LOYAL
- ▁MOULD
- ▁ATTENTIVE
- ▁DOROTHY
- ▁BRUTE
- ▁ESTABLISHMENT
- ▁ABILITY
- ▁INHABIT
- ▁OBSCURE
- ▁BORROW
- ▁ESSENCE
- ▁DISMAY
- ▁FLEE
- ▁BLADE
- ▁PLUCK
- ▁COFFIN
- ▁SUNSET
- ▁STEPHEN
- ▁ECONOMIC
- ▁HOLIDAY
- ▁MECHANICAL
- ▁COTTON
- ▁AWAKENED
- ▁SEIZE
- ▁RIDICULOUS
- ▁SANCHO
- ▁HESITATION
- ▁CORPSE
- ▁SAVING
- HOLD
- FOOT
- ▁ELDEST
- ▁DESPITE
- ▁EDITH
- ▁CHERISH
- ▁RESISTANCE
- ▁WILSON
- ▁ARGUE
- ▁INQUIRE
- ▁APPREHENSION
- ▁AVENUE
- ▁DRAKE
- ▁PROPOSE
- HURST
- ▁INFERIOR
- ▁STAIRCASE
- ▁WHEREFORE
- ▁CARLYLE
- ▁COUCH
- ▁ROUTE
- ▁POLITICS
- ▁TOMORROW
- ▁THRONG
- ▁NAUGHT
- ▁SUNLIGHT
- ▁INDIFFERENCE
- ▁OBEDIENCE
- ▁RECEPTION
- ▁VEGETABLE
- ▁IMPERFECT
- ▁RESIDENCE
- ▁TURKEY
- ▁VIOLET
- ▁SARAH
- ▁ALTAR
- ▁GRIEVE
- ▁JERK
- ▁ENSU
- ▁MAGICIAN
- ▁BLOSSOM
- ▁LANTERN
- ▁RESOLUTE
- ▁THOUGHTFULLY
- ▁FORTNIGHT
- ▁TRUMPET
- ▁VALJEAN
- ▁UNWILLING
- ▁LECTURE
- ▁WHEREUPON
- ▁HOLLAND
- ▁CHANGING
- ▁CREEK
- ▁SLICE
- ▁NORMAL
- ▁ANNIE
- ▁ACCENT
- ▁FREDERICK
- ▁DISAGREEABLE
- ▁RUBBED
- ▁DUMB
- ▁ESTABLISH
- ▁IMPORT
- ▁AFFIRM
- ▁MATTHEW
- ▁BRISK
- ▁CONVERT
- ▁BENDING
- ▁IVAN
- ▁MADEMOISELLE
- ▁MICHAEL
- ▁EASIER
- ▁JONES
- ▁FACING
- ▁EXCELLENCY
- ▁LITERARY
- ▁GOSSIP
- ▁DEVOUR
- ▁STAGGER
- ▁PENCIL
- ▁AVERAGE
- ▁HAMMER
- ▁TRIUMPHANT
- ▁PREFERRED
- ▁APPLICATION
- ▁OCCUPY
- ▁AUTHORITIES
- BURN
- ▁ASCERTAIN
- ▁CORRIDOR
- ▁DELICIOUS
- ▁PRACTISE
- ▁UNIVERSE
- ▁SHILLING
- ▁CONTEST
- ▁ASHORE
- ▁COMMIT
- ▁ADMINISTRATION
- ▁STUDIED
- ▁RIGID
- ▁ADORN
- ▁ELSEWHERE
- ▁INNOCENCE
- ▁JOURNAL
- ▁LANDSCAPE
- ▁TELEGRAPH
- ▁ANGRILY
- ▁CAMPAIGN
- ▁UNJUST
- ▁CHALLENGE
- ▁TORRENT
- ▁RELATE
- ▁ASSEMBLED
- ▁IMPRESSED
- ▁CANOE
- ▁CONCLUD
- ▁QUIXOTE
- ▁SATISFACTORY
- ▁NIECE
- ▁DEAF
- ▁RAFT
- ▁JIMMY
- ▁GLID
- ▁REGULAT
- ▁CHATTER
- ▁GLACIER
- ▁ENVY
- ▁STATUE
- ▁BOSTON
- ▁RICHMOND
- ▁DENIED
- ▁FANNY
- ▁SOLOMON
- ▁VULGAR
- ▁STALK
- ▁REPLACE
- ▁SPOON
- ▁BASIN
- ▁FEATURE
- ▁CONVICT
- ▁ARCHITECT
- ▁ADMIRAL
- ▁RIBBON
- ▁PERMANENT
- ▁APRIL
- ▁JOLLY
- ▁NEIGHBORHOOD
- ▁IMPART
- BOROUGH
- CAMP
- ▁HORRID
- ▁IMMORTAL
- ▁PRUDENCE
- ▁SPANIARD
- ▁SUPPOSING
- ▁TELEPHONE
- ▁TEMPERATURE
- ▁PENETRATE
- ▁OYSTER
- ▁APPOINTMENT
- ▁EGYPTIAN
- ▁DWELT
- ▁NEPHEW
- ▁RAILROAD
- ▁SEPTEMBER
- ▁DEVICE
- ▁WHEAT
- ▁GILBERT
- ▁ELEGANT
- ▁ADVERTISE
- ▁RATIONAL
- ▁TURTLE
- ▁BROOD
- ▁ASSEMBLY
- ▁CULTIVATE
- ▁EDITOR
- ▁SPECIMEN
- ▁UNDOUBTEDLY
- ▁WHALE
- ▁DROPPING
- ▁BALLOON
- ▁MEDICAL
- COMB
- ▁COMPOSITION
- ▁FOOTSTEPS
- ▁LAUNCELOT
- ▁DISCOURSE
- ▁ERRAND
- ▁CONVERSE
- ▁ADVANCING
- ▁DOWNSTAIRS
- ▁TUMULT
- ▁CORRUPT
- ▁SUFFICE
- ▁ANGUISH
- ▁SHAGGY
- ▁RETIRE
- ▁TIMBER
- ▁BLAZE
- ▁ABSTRACT
- ▁EMBROIDER
- ▁PHOTOGRAPH
- ▁PROSPERITY
- ▁TERRIBLY
- ▁TERRITORY
- ▁THRESHOLD
- ▁PAVEMENT
- ▁INJURED
- ▁LIMP
- ▁AGITATION
- ▁RASCAL
- ▁PRESUME
- ▁OBSERVING
- ▁OBSTACLE
- ▁SIMPLICITY
- ▁SLUMBER
- ▁SUPPLIED
- ▁COMBINATION
- ▁DRAIN
- ▁WILDERNESS
- ▁BELIEVING
- ▁VILLAIN
- ▁RECKLESS
- ▁INJURY
- ▁CLAPP
- ▁FRIDAY
- ▁HERCULES
- ▁KENNEDY
- ▁SYMPTOM
- ▁SLEDGE
- ▁CEILING
- ▁LEMON
- ▁PLAGUE
- ▁MONDAY
- ▁CANVAS
- ▁IMPATIENCE
- ▁UNCOMFORTABLE
- ▁ACCESS
- ▁FROZEN
- ▁SENATOR
- ▁FRANZ
- ▁SWIMMING
- ▁BARRIER
- ▁ADJUST
- ▁COMPARISON
- ▁PROCLAIM
- ▁WRINKL
- ▁OVERLOOK
- ▁MITYA
- ▁GUILT
- ▁PERCEPTION
- ▁PRECAUTION
- ▁SPECTATOR
- ▁SURPRISING
- ▁DISTRACT
- ▁DISDAIN
- ▁BONNET
- ▁MAGNET
- ▁PROFESS
- ▁CONFOUND
- ▁NARRATIVE
- ▁STRUCTURE
- ▁SKETCH
- ▁ULTIMATE
- ▁GLOBE
- ▁INSECT
- FICIENCY
- ▁ORCHARD
- ▁AMIABLE
- ▁DESCENT
- ▁INDEPENDENCE
- ▁MANUFACTURE
- ▁SPRINKLE
- ▁NIGHTINGALE
- ▁CUSHION
- ▁EMINENT
- ▁SCOTT
- ▁ARRAY
- ▁COSETTE
- ▁WAVING
- ▁EXTRACT
- ▁IRREGULAR
- ▁PERSECUT
- ▁DERIVED
- ▁WITHDREW
- ▁CAUTION
- ▁SUSPICIOUS
- ▁MEMORIES
- ▁NOWHERE
- ▁SUBTLE
- ▁THOROUGH
- Q
- ▁APPROPRIATE
- ▁SLAUGHTER
- ▁YOURSELVES
- ▁THUMB
- ▁TWAS
- ▁ABODE
- ▁BIDDING
- ▁CONSPICUOUS
- ▁REBECCA
- ▁SERGEANT
- ▁APRON
- ▁ANTICIPATE
- ▁DISCIPLINE
- ▁GLANCING
- ▁PILGRIM
- ▁SULLEN
- ▁CONTRIBUTE
- ▁PRAIRIE
- ▁CARVED
- ▁COMMERCE
- ▁EXCLAMATION
- ▁MUSCULAR
- ▁NOVEMBER
- ▁PHENOMENA
- ▁SYMBOL
- ▁UMBRELLA
- ▁DIMINISH
- ▁PARLOUR
- ▁THREATENING
- ▁STUMP
- ▁EXTENSIVE
- ▁PLEASING
- ▁REMEMBRANCE
- ▁COMBINED
- ▁SHERIFF
- ▁SHAFT
- ▁LAURA
- ▁INTERCOURSE
- ▁STRICKEN
- ▁SUPPLIES
- ▁LANDLORD
- ▁SHRINK
- ▁PRICK
- ▁CAESAR
- ▁DRUG
- ▁BEWILDERED
- ▁NAUTILUS
- ▁BRUTAL
- ▁COMMERCIAL
- ▁MAGGIE
- ▁SPHERE
- ▁VIRGIN
- ▁BRETHREN
- ▁DESTINY
- ▁POLICY
- ▁TERRIFIED
- ▁HOUSEKEEPER
- ▁CRAZY
- ▁ARDENT
- ▁DISCERN
- ▁WRAP
- ▁MARQUIS
- ▁RUSSIA
- MOUTH
- ▁BRITAIN
- ▁HARBOUR
- ▁CONCERT
- ▁DONKEY
- ▁DAMAGE
- ▁SLIM
- ABOUT
- ▁LUXURY
- ▁MONSTROUS
- ▁TENDENCY
- ▁PARADISE
- ▁CULTURE
- ▁JULIUS
- ▁RAOUL
- ▁REMEDY
- ▁DECAY
- ▁SCOLD
- ▁SPLIT
- ▁ASSAULT
- ▁DECEMBER
- ▁MOSCOW
- ▁EXPLORE
- ▁TROUSERS
- ▁WRIST
- PIECE
- ▁MUSKET
- ▁VALENTINE
- ▁TYRANT
- ▁ABRAHAM
- ▁MEDIUM
- ▁ARTIFICIAL
- ▁FACULTY
- ▁OBLIGATION
- ▁RESEMBLANCE
- ▁INQUIRIES
- ▁DETAIN
- ▁SWARM
- ▁PLEDGE
- ▁ADMIRABLE
- ▁DEFECT
- ▁SUPERINTEND
- ▁PATRIOT
- ▁CLUNG
- ▁DISMAL
- ▁RECIT
- ▁IGNOR
- ▁AMELIA
- ▁JUSTIFY
- ▁ELEPHANT
- ▁ESTIMATE
- ▁KNELT
- ▁SERVING
- ▁WHIM
- ▁SHRILL
- ▁STUDIO
- ▁TEXT
- ▁ALEXANDER
- ▁WROUGHT
- ▁ABUNDANT
- ▁SITUATED
- ▁REGAIN
- ▁FIERY
- ▁SNEER
- ▁SWEAT
- ▁GLARE
- ▁NIGH
- ▁ESCORT
- ▁INEVITABLE
- ▁PSMITH
- ▁RELUCTANT
- ▁PRECEDING
- ▁RESORT
- ▁OUTRAGE
- ▁AMBASSADOR
- ▁CONSOLATION
- ▁RECOGNITION
- ▁REMORSE
- ▁BEHALF
- ▁FORMIDABLE
- ▁GRAVITY
- ▁DIVIDE
- ▁CONFRONT
- ▁GIGANTIC
- ▁OCTOBER
- ▁FLANK
- ▁SLEW
- ▁CLARA
- ▁FILM
- ▁BULK
- ▁POMP
- ▁ELEANOR
- ▁EMPHASIS
- ▁JAPANESE
- ▁CAVALRY
- ▁EXCLUSIVE
- ▁PERFUME
- ▁BRONZE
- ▁FEDERAL
- ▁LIQUID
- ▁RUBBING
- ▁OVEN
- DOLPH
- ▁CONVULS
- ▁DEPRIVED
- ▁RESPONSIBILITY
- ▁SIGNIFICANT
- ▁WAISTCOAT
- ▁CLUSTER
- ▁MARTHA
- ▁REVERSE
- ▁ATTORNEY
- ▁DROOP
- ▁SKILFUL
- ▁HABITUAL
- ▁PUMP
- ▁INTERVEN
- ▁OWL
- ▁CONJECTURE
- ▁FANTASTIC
- ▁RESPONSIBLE
- ▁DESTINED
- ▁DOCUMENT
- ▁THEREUPON
- ▁GODDESS
- ▁PACIFIC
- ▁WARRANT
- ▁COSTUME
- ▁BRIDLE
- ▁CALIFORNIA
- ▁DEMOCRATIC
- ▁EUSTACE
- ▁SQUIRREL
- ▁UNCOMMON
- ▁MARVELLOUS
- ▁PLOUGH
- ▁TRAGEDY
- ▁VAULT
- ▁HESITATE
- ▁REFRAIN
- ▁ADMIRING
- ▁CORPORAL
- ▁ENTITLED
- ▁SHREWD
- ▁SQUEEZ
- ▁ACCURATE
- ▁TEMPEST
- ▁MONUMENT
- ▁SIEGE
- ▁CHINESE
- ▁RAVEN
- ▁LOUNG
- ▁ASSASSIN
- ▁INFLICT
- ▁AGITATED
- ▁DESIRABLE
- ▁EARLIEST
- ▁LAUNCH
- ▁PILOT
- ▁PULSE
- ▁MUTE
- LEIGH
- ▁LIQUOR
- ▁SCARECROW
- ▁SKULL
- ▁DESOLATE
- ▁SUBLIME
- ▁SERENE
- ▁RECESS
- ▁WAKING
- ▁CHARLOTTE
- ▁CIRCULAR
- ▁INJUSTICE
- ▁PINOCCHIO
- ▁PRISCILLA
- ▁THYSELF
- ▁OCCURRENCE
- ▁CASUAL
- ▁FRANTIC
- ▁LEGEND
- ▁FERTIL
- ▁BACKGROUND
- ▁DELICACY
- ▁ESTRALLA
- ▁MANUSCRIPT
- ▁RESPONSE
- ▁UNIVERSITY
- ▁WOLVES
- ▁SCANDAL
- ▁STUMBLE
- ▁HOARSE
- ▁BODILY
- ▁CONVENT
- ▁EXAMINING
- ▁INCAPABLE
- ▁PERCEIVING
- ▁PHILADELPHIA
- ▁SUBSEQUENT
- ▁THIEVES
- ▁ACCUMULAT
- ▁DAMSEL
- ▁SCOTCH
- ▁UNDERNEATH
- ▁NOBILITY
- ▁SMASH
- ▁REVOLT
- ▁ENGAGE
- ▁CATHEDRAL
- ▁CHAMPION
- ▁DESPATCH
- ▁ETERNITY
- ▁JANUARY
- ▁PLEADED
- ▁PROBABILITY
- ▁JIMMIE
- ▁PARALLEL
- ▁FISHERMAN
- ▁JERRY
- ▁SWORE
- ▁DRAUGHT
- ▁OPPONENT
- ▁PRIMITIVE
- ▁SIGNIFICANCE
- ▁SUBSTANTIAL
- ▁AMAZED
- ▁DUNBAR
- ▁COMMEND
- ▁CONTEMPLATE
- ▁TESTIMONY
- ▁IMPERIAL
- ▁ADAPT
- ▁JUICE
- ▁CALAMIT
- CULAR
- ▁CHATEAU
- ▁PHOENIX
- ▁PRUDENT
- ▁SOLUTION
- ▁VILLEFORT
- ▁REACTION
- ▁RELAX
- ▁YU
- ▁PROHIBIT
- ▁DISTRUST
- ▁PLUNDER
- ▁WELFARE
- ▁NAVIGAT
- ▁PARLOR
- ▁LAZY
- ▁DETACH
- OMETER
- ▁PRIV
- ▁DISCOURAGE
- ▁OBSTINATE
- ▁REJOICING
- ▁SERMON
- ▁VEHICLE
- ▁FANCIES
- ▁ENLIGHTEN
- ▁ACUTE
- ▁ILLUSION
- ▁ANTHEA
- ▁MARTIAN
- ▁EXCITE
- ▁GENEROSITY
- OLOGIST
- ▁AMAZING
- ▁UNWORTHY
- ▁INTERNAL
- ▁INCENSE
- ▁VIBRAT
- ▁ADHERE
- ROACH
- ▁FEBRUARY
- ▁MEXICAN
- ▁POTATOES
- ▁INCESSANT
- ▁INTERPOSED
- ▁PARCEL
- ▁VEXED
- ▁PROMOTE
- MIDST
- ▁ARISTOCRAT
- ▁CYRIL
- ▁EMBARK
- ▁ABUNDANCE
- ▁LITERALLY
- ▁SURGEON
- ▁TERRACE
- ▁ATLANTIC
- ▁MARTYR
- ▁SPECK
- ▁SENATE
- ▁LOAF
- ▁ADMINISTER
- ▁APPREHEND
- ▁SUBDUED
- ▁TEMPORARY
- ▁DOMINION
- ▁ELABORATE
- ▁DIGNIFIED
- ▁ELIZA
- ▁SPLASH
- ▁CONSEIL
- ▁DEXTER
- ▁UNSEEN
- ▁TRAGIC
- VOCATION
- ▁GRATIFY
- ▁BACHELOR
- ▁DEFENSE
- ▁EXCURSION
- ▁FACULTIES
- ▁PROPRIETOR
- ▁SYMPATHETIC
- ▁UNNECESSARY
- ▁RADIANT
- ▁VACANT
- ▁OUNCE
- ▁SCREW
- ▁PHENOMENON
- ▁PROMINENT
- ▁WORRIED
- ▁STUDIES
- ▁CLIMATE
- ▁KEITH
- ▁ARAMIS
- ▁BLISS
- ▁CONTINUAL
- ▁SURPASS
- ▁HEBREW
- ▁IDENTITY
- ▁PROVOKE
- ▁TEMPERAMENT
- ▁CHARIOT
- ▁HARBOR
- ▁NINTH
- ▁PRIOR
- ▁DESIROUS
- ▁JERUSALEM
- ▁UNDERTAKING
- ▁EDISON
- ▁MIRTH
- ▁SCOUT
- ▁APPARATUS
- ▁ILLUSTRATION
- ▁INTELLIGIBLE
- ▁INVARIABLY
- ▁PIERCED
- ▁REVIEW
- ▁FLICKER
- ▁HAZARD
- ▁REVELATION
- ▁DIXON
- ▁EXCITING
- ▁GOSPEL
- ▁CONSTANCE
- ▁OVERTAKE
- ▁GUINEA
- ▁ALADDIN
- ▁CHICAGO
- ▁TULLIVER
- ▁HAMILTON
- ▁GARRISON
- ▁DISCIPLE
- ▁INTENSITY
- ▁TRAITOR
- ▁CHANCELLOR
- ▁PROVERB
- ▁DAGGER
- ▁FORESEE
- ▁CONFIDE
- ▁GLIMMER
- ▁CHAUVELIN
- ▁ILLUSTRATE
- ▁VOLUNTEER
- ▁JUNGLE
- ▁STREAK
- ▁SUNRISE
- ▁DISSOLV
- ▁QUEST
- ▁AWHILE
- ▁FELICITY
- ▁LEGISLATURE
- ▁LEONORA
- ▁MAGAZINE
- ▁PITIFUL
- ▁COLONY
- ▁SHAWL
- ▁ARRIVING
- ▁FUNDAMENTAL
- ▁CARPENTER
- ▁OVERFLOW
- ▁EXPAND
- ▁HARVEST
- ▁FEMININE
- ▁INNUMERABLE
- ▁SCRAMBLE
- ▁TWENTIETH
- ▁TRIFLING
- ▁GHASTL
- ▁CONQUEST
- ▁DANIEL
- ▁FACILIT
- ▁FORSAKE
- ▁BEHAVIOUR
- ▁GORGEOUS
- ▁PRODUCING
- ▁HAPPIER
- ▁PROMISING
- ▁RAINBOW
- ▁INSTINCTIVELY
- ▁DECREE
- ▁EYEBROWS
- ▁IRRESISTIBLE
- ▁PHARAOH
- ▁SCROOGE
- ▁UNNATURAL
- ▁CRUMBS
- ▁REFINED
- ▁DREARY
- ▁TRENCH
- ▁CONVINCE
- ▁FRINGE
- ▁EXTREMITY
- ▁INTIMACY
- ▁SCOUNDREL
- ▁SUFFRAGE
- ▁UNEASINESS
- ▁BARRICADE
- ▁CIRCULAT
- ▁SAMUEL
- ▁BRUCE
- ▁DARCY
- <sos/eos>
init: xavier_uniform
input_size: 83
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: false
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: null
frontend_conf: {}
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_fbank_pitch_en_bpe5000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: contextual_block_transformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.0
input_layer: conv2d
normalize_before: true
block_size: 40
hop_size: 16
look_ahead: 16
init_average: true
ctx_pos_enc: true
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.0
src_attention_dropout_rate: 0.0
required:
- output_dir
- token_list
version: 0.9.7
distributed: true
```
</details>
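For reference, a minimal inference sketch with this checkpoint (hedged: assumes `espnet` and `espnet_model_zoo` are installed; `Speech2Text.from_pretrained` and the block-wise streaming variant may differ slightly across ESPnet versions):

```python
# Hedged sketch: offline decoding with the released checkpoint. For true
# block-wise streaming, espnet2.bin.asr_inference_streaming exposes a
# Speech2TextStreaming class with a similar interface (assumption).
import soundfile
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "eml914/streaming_transformer_asr_librispeech"
)
speech, rate = soundfile.read("sample.wav")  # 16 kHz mono audio (assumption)
text, tokens, token_ids, hyp = speech2text(speech)[0]
print(text)
```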
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
eml914/streaming_transformer_asr_librispeech
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'eml914/streaming\_transformer\_asr\_librispeech'
This model was trained by Emiru Tsunoo using librispeech recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Wed Nov 17 18:18:46 JST 2021'
* python version: '3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.5a1'
* pytorch version: 'pytorch 1.4.0'
* Git hash: '12eb132418a1f69548f7998e53273cd05d989ed9'
+ Commit date: 'Tue Nov 16 10:12:21 2021 +0900'
asr\_train\_asr\_streaming\_fbank\_pitch\_en\_bpe5000\_sp
---------------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'eml914/streaming\\_transformer\\_asr\\_librispeech'\n\n\nThis model was trained by Emiru Tsunoo using librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Wed Nov 17 18:18:46 JST 2021'\n* python version: '3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.5a1'\n* pytorch version: 'pytorch 1.4.0'\n* Git hash: '12eb132418a1f69548f7998e53273cd05d989ed9'\n\t+ Commit date: 'Tue Nov 16 10:12:21 2021 +0900'\n\n\nasr\\_train\\_asr\\_streaming\\_fbank\\_pitch\\_en\\_bpe5000\\_sp\n---------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'eml914/streaming\\_transformer\\_asr\\_librispeech'\n\n\nThis model was trained by Emiru Tsunoo using librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Wed Nov 17 18:18:46 JST 2021'\n* python version: '3.8.11 (default, Aug 3 2021, 15:09:35) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.5a1'\n* pytorch version: 'pytorch 1.4.0'\n* Git hash: '12eb132418a1f69548f7998e53273cd05d989ed9'\n\t+ Commit date: 'Tue Nov 16 10:12:21 2021 +0900'\n\n\nasr\\_train\\_asr\\_streaming\\_fbank\\_pitch\\_en\\_bpe5000\\_sp\n---------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
summarization
|
transformers
|
# arxiv27k-t5-abst-title-gen/
This model is a fine-tuned version of mt5-small on the arxiv-abstract-title dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6002
- Rouge1: 32.8
- Rouge2: 21.9
- Rougel: 34.8
## Model description
The model was trained in a Colab Pro notebook in about 4 hours.
## Intended uses & limitations
The model can be used to generate journal article titles from given abstracts.
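A minimal inference sketch (hedged: assumes `simpletransformers` is installed and that inputs use a `summarize:` prefix, which is a common mT5 fine-tuning convention rather than something stated in this card):

```python
from simpletransformers.t5 import T5Model

# Load the fine-tuned checkpoint from the Hub (model id taken from this card).
model = T5Model("mt5", "emre/arxiv27k-t5-abst-title-gen", use_cuda=False)

abstract = "We propose a transformer-based method for generating paper titles ..."
titles = model.predict(["summarize: " + abstract])
print(titles[0])
```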
### Training args
```python
from simpletransformers.t5 import T5Args  # args container from simpletransformers

model_args = T5Args()
model_args.max_seq_length = 256
model_args.train_batch_size = 8
model_args.eval_batch_size = 8
model_args.num_train_epochs = 6
model_args.evaluate_during_training = False
model_args.use_multiprocessing = False
model_args.fp16 = False
model_args.save_steps = 40000
model_args.save_eval_checkpoints = False
model_args.save_model_every_epoch = True
model_args.output_dir = OUTPUT_DIR  # set to your output directory
model_args.no_cache = True
model_args.reprocess_input_data = True
model_args.overwrite_output_dir = True
model_args.num_return_sequences = 1
```
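With these args, the training call looks roughly as follows (a hedged sketch: `train_df` is assumed to be a DataFrame with the `prefix`, `input_text`, and `target_text` columns that simpletransformers expects; it is not shown in this card):

```python
from simpletransformers.t5 import T5Model

# Fine-tune mt5-small on the abstract-title pairs (dataset loading omitted).
model = T5Model("mt5", "google/mt5-small", args=model_args)
model.train_model(train_df)
```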
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
### Contact
detasar@gmail.com
Davut Emre Taşar
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "summarization"], "metrics": ["rouge"], "model-index": [{"name": "arxiv27k-t5-abst-title-gen/", "results": []}]}
|
emre/arxiv27k-t5-abst-title-gen
| null |
[
"transformers",
"pytorch",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"summarization",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #mt5 #text2text-generation #generated_from_trainer #summarization #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# arxiv27k-t5-abst-title-gen/
This model is a fine-tuned version of mt5-small on the arxiv-abstract-title dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6002
- Rouge1: 32.8
- Rouge2: 21.9
- Rougel: 34.8
-
## Model description
Model has been trained with a colab-pro notebook in 4 hours.
## Intended uses & limitations
Can be used for generating journal titles from given abstracts
### Training args
model_args = T5Args()
model_args.max_seq_length = 256
model_args.train_batch_size = 8
model_args.eval_batch_size = 8
model_args.num_train_epochs = 6
model_args.evaluate_during_training = False
model_args.use_multiprocessing = False
model_args.fp16 = False
model_args.save_steps = 40000
model_args.save_eval_checkpoints = False
model_args.save_model_every_epoch = True
model_args.output_dir = OUTPUT_DIR
model_args.no_cache = True
model_args.reprocess_input_data = True
model_args.overwrite_output_dir = True
model_args.num_return_sequences = 1
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
### Contact
detasar@URL
Davut Emre Taşar
|
[
"# arxiv27k-t5-abst-title-gen/\n\nThis model is a fine-tuned version of mt5-small on the arxiv-abstract-title dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.6002\n- Rouge1: 32.8\n- Rouge2: 21.9\n- Rougel: 34.8\n-",
"## Model description\n\nModel has been trained with a colab-pro notebook in 4 hours.",
"## Intended uses & limitations\n\nCan be used for generating journal titles from given abstracts",
"### Training args\nmodel_args = T5Args()\nmodel_args.max_seq_length = 256\nmodel_args.train_batch_size = 8\nmodel_args.eval_batch_size = 8\nmodel_args.num_train_epochs = 6\nmodel_args.evaluate_during_training = False\nmodel_args.use_multiprocessing = False\nmodel_args.fp16 = False\nmodel_args.save_steps = 40000\nmodel_args.save_eval_checkpoints = False\nmodel_args.save_model_every_epoch = True\nmodel_args.output_dir = OUTPUT_DIR\nmodel_args.no_cache = True\nmodel_args.reprocess_input_data = True\nmodel_args.overwrite_output_dir = True\nmodel_args.num_return_sequences = 1",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.15.1\n- Tokenizers 0.10.3",
"### Contact\ndetasar@URL\nDavut Emre Taşar"
] |
[
"TAGS\n#transformers #pytorch #safetensors #mt5 #text2text-generation #generated_from_trainer #summarization #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# arxiv27k-t5-abst-title-gen/\n\nThis model is a fine-tuned version of mt5-small on the arxiv-abstract-title dataset.\nIt achieves the following results on the evaluation set:\n- Loss: 1.6002\n- Rouge1: 32.8\n- Rouge2: 21.9\n- Rougel: 34.8\n-",
"## Model description\n\nModel has been trained with a colab-pro notebook in 4 hours.",
"## Intended uses & limitations\n\nCan be used for generating journal titles from given abstracts",
"### Training args\nmodel_args = T5Args()\nmodel_args.max_seq_length = 256\nmodel_args.train_batch_size = 8\nmodel_args.eval_batch_size = 8\nmodel_args.num_train_epochs = 6\nmodel_args.evaluate_during_training = False\nmodel_args.use_multiprocessing = False\nmodel_args.fp16 = False\nmodel_args.save_steps = 40000\nmodel_args.save_eval_checkpoints = False\nmodel_args.save_model_every_epoch = True\nmodel_args.output_dir = OUTPUT_DIR\nmodel_args.no_cache = True\nmodel_args.reprocess_input_data = True\nmodel_args.overwrite_output_dir = True\nmodel_args.num_return_sequences = 1",
"### Framework versions\n\n- Transformers 4.12.5\n- Pytorch 1.10.0+cu111\n- Datasets 1.15.1\n- Tokenizers 0.10.3",
"### Contact\ndetasar@URL\nDavut Emre Taşar"
] |
question-answering
|
transformers
|
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1620
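As a quick sanity check, the checkpoint can be queried with the question-answering pipeline (a hedged sketch; the model id is inferred from this repository's name):

```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="emre/distilbert-base-uncased-finetuned-squad",
)
result = qa(
    question="What was the model fine-tuned on?",
    context="This DistilBERT checkpoint was fine-tuned on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```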
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
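In `transformers`, these settings correspond roughly to the following `TrainingArguments` (a hedged reconstruction; `output_dir` is a placeholder, and the Adam betas/epsilon listed above are the library defaults):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-squad",
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```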
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2256 | 1.0 | 5533 | 1.1620 |
| 0.9551 | 2.0 | 11066 | 1.1237 |
| 0.7726 | 3.0 | 16599 | 1.1620 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "base_model": "distilbert-base-uncased", "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
emre/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"base_model:distilbert-base-uncased",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1620
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.16.2
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #base_model-distilbert-base-uncased #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.16.2\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.11.0"
] |
question-answering
|
transformers
|
# Turkish SQuAD Model: Question Answering
Fine-tuned Loodos Turkish BERT model for question answering on the TQuAD dataset
* Loodos-BERT-base: https://huggingface.co/loodos/bert-base-turkish-uncased
* TQuAD dataset: https://github.com/TQuad/turkish-nlp-qa-dataset
# Training Code
```
!python3 Turkish-QA.py \
--model_type bert \
--model_name_or_path loodos/bert-base-turkish-uncased \
--do_train \
--do_eval \
--train_file trainQ.json \
--predict_file dev1.json \
--per_gpu_train_batch_size 8 \
--learning_rate 5e-5 \
--num_train_epochs 10 \
--max_seq_length 384 \
--output_dir "./model"
```
# Example Usage
> Load Model
```
from transformers import AutoTokenizer, AutoModelForQuestionAnswering, pipeline

tokenizer = AutoTokenizer.from_pretrained("emre/distilbert-tr-q-a")
model = AutoModelForQuestionAnswering.from_pretrained("emre/distilbert-tr-q-a")
nlp = pipeline('question-answering', model=model, tokenizer=tokenizer)
```
> Apply the model
```
def ask(question, context):
    temp = nlp(question=question, context=context)
    start_idx = temp["start"]
    end_idx = temp["end"]
    return context[start_idx:end_idx]

# Translation: "İzmir is a city in Turkey's Aegean Region and one of the country's
# 81 provinces. It is the third most populous city in the country and one of the
# leading cities economically, historically and socio-culturally. Its population
# is 4,425,789 as of 2021. By area it is the twenty-third largest province."
izmir="İzmir, Türkiye'de Ege Bölgesi'nde yer alan şehir ve ülkenin 81 ilinden biridir. Ülkenin nüfus bakımından en kalabalık üçüncü şehridir. Ekonomik, tarihi ve sosyo-kültürel açıdan önde gelen şehirlerden biridir. Nüfusu 2021 itibarıyla 4.425.789 kişidir. Yüzölçümü olarak ülkenin yirmi üçüncü büyük ilidir."

soru1 = "İzmir'in nüfusu kaçtır?"  # "What is the population of İzmir?"
print(ask(soru1, izmir))

soru2 = "İzmir hangi bölgede bulunur?"  # "In which region is İzmir located?"
print(ask(soru2, izmir))
```
|
{"language": "tr", "tags": ["question-answering", "loodos-bert-base", "TQuAD", "tr"], "datasets": ["TQuAD"]}
|
emre/distilbert-tr-q-a
| null |
[
"transformers",
"pytorch",
"bert",
"question-answering",
"loodos-bert-base",
"TQuAD",
"tr",
"dataset:TQuAD",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #bert #question-answering #loodos-bert-base #TQuAD #tr #dataset-TQuAD #endpoints_compatible #has_space #region-us
|
# Turkish SQuAD Model : Question Answering
Fine-tuned Loodos-Turkish-Bert-Model for Question-Answering problem with TQuAD dataset
* Loodos-BERT-base: URL
* TQuAD dataset: URL
# Training Code
# Example Usage
> Load Model
> Apply the model
|
[
"# Turkish SQuAD Model : Question Answering\n\nFine-tuned Loodos-Turkish-Bert-Model for Question-Answering problem with TQuAD dataset\n* Loodos-BERT-base: URL\n* TQuAD dataset: URL",
"# Training Code",
"# Example Usage\n\n> Load Model\n\n\n> Apply the model"
] |
[
"TAGS\n#transformers #pytorch #bert #question-answering #loodos-bert-base #TQuAD #tr #dataset-TQuAD #endpoints_compatible #has_space #region-us \n",
"# Turkish SQuAD Model : Question Answering\n\nFine-tuned Loodos-Turkish-Bert-Model for Question-Answering problem with TQuAD dataset\n* Loodos-BERT-base: URL\n* TQuAD dataset: URL",
"# Training Code",
"# Example Usage\n\n> Load Model\n\n\n> Apply the model"
] |
null |
transformers
|
# jurisprudence-textgen-gpt-2
Pretrained model on the Turkish language using a causal language modeling (CLM) objective.
## Model description of Original GPT-2
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token i only uses the inputs from 1 to i but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
## Model description of jurisprudence-textgen-gpt-2
jurisprudence-textgen-gpt-2 is a TensorFlow Transformers model pretrained for 5 epochs on 18,950 Turkish court jurisprudence texts obtained from the [Bilirkisi GitHub repo train data](https://github.com/Bilirkisi/Bilirkisi/tree/main/train).
Model training results (4986 steps per epoch, ~552 ms/step):

| Epoch | Loss   | Accuracy |
|:-----:|:------:|:--------:|
| 1/5   | 4.0122 | 0.4544   |
| 2/5   | 2.7074 | 0.5843   |
| 3/5   | 2.3411 | 0.6214   |
| 4/5   | 2.1241 | 0.6431   |
| 5/5   | 1.9647 | 0.6597   |
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it on a downstream task involving Turkish legal text. See the
[model hub](https://huggingface.co/models?filter=gpt2) to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation.
Here is how to use this model to generate text from a given prompt in TensorFlow:
```python
>>> from transformers import GPT2Tokenizer, TFGPT2LMHeadModel
>>> tokenizer = GPT2Tokenizer.from_pretrained('emre/jurisprudence-textgen-gpt-2')
>>> model = TFGPT2LMHeadModel.from_pretrained('emre/jurisprudence-textgen-gpt-2')
>>> text = "Tarafların karşılıklı iddia ve savunmalarına," #Translation: "Mutual claims and defenses of the parties,"
>>> # encoding the input text
>>> input_ids = tokenizer.encode(text, return_tensors='tf')
>>> # getting out output
>>> beam_output = model.generate(
>>> input_ids,
>>> max_length = 250,
>>> num_beams = 5,
>>> temperature = 0.7,
>>> no_repeat_ngram_size=2,
>>> num_return_sequences=5
>>> )
>>> for i in range(5):
>>> print(tokenizer.decode(beam_output[i]))
[{'generated_text': "Tarafların karşılıklı iddia ve savunmalarına, dayandıkları belgelere, temyiz olunan kararda yazılı gerekçelere göre yerinde bulunmayan temyiz sebeplerinin reddiyle usul ve kanuna uygun mahkeme kararının İİK. 366. ve HUMK. 438. maddeleri uyarınca (ONANMASINA), 13.10 YTL onama harcı temyiz edenden alındığından başkaca harç alınmasına mahal olmadığına, 25.12.2007 gününde oybirliğiyle karar verildi."},
{'generated_text': "Tarafların karşılıklı iddia ve savunmalarına, dayandıkları belgelere, temyiz olunan kararda yazılı gerekçelere göre yerinde bulunmayan temyiz itirazlarının reddiyle usul ve kanuna uygun mahkeme kararının İİK. 366. ve HUMK. 438. maddeleri uyarınca (ONANMASINA), 15,60 TL onama harcı temyiz edenden alındığından başkaca harç alınmasına mahal olmadığına, 30/12/2009 gününde oybirliğiyle karar verildi."},
{'generated_text': "Tarafların karşılıklı iddia ve savunmalarına, dayandıkları belgelere, temyiz olunan kararda yazılı gerekçelere göre yerinde bulunmayan temyiz sebeplerinin reddiyle usul ve kanuna uygun mahkeme kararının İİK. 366. ve HUMK. 438. maddeleri uyarınca (ONANMASINA), 15,60 TL onama harcı temyiz edenden alındığından başkaca harç alınmasına mahal olmadığına, 30/12/2009 gününde oybirliğiyle karar verildi."},
{'generated_text': "Tarafların karşılıklı iddia ve savunmalarına, dayandıkları belgelere, temyiz olunan kararda yazılı gerekçelere göre yerinde bulunmayan temyiz sebeplerinin reddiyle usul ve kanuna uygun mahkeme kararının İİK. 366. ve HUMK. 438. maddeleri uyarınca (ONANMASINA), 13.10 YTL onama harcı temyiz edenden alındığından başkaca harç alınmasına mahal olmadığına, 25/12/2007 gününde oybirliğiyle karar verildi."},
{'generated_text': "Tarafların karşılıklı iddia ve savunmalarına, dayandıkları belgelere, temyiz olunan kararda yazılı gerekçelere göre yerinde bulunmayan temyiz sebeplerinin reddiyle usul ve kanuna uygun mahkeme kararının İİK. 366. ve HUMK. 438. maddeleri uyarınca (ONANMASINA), 13.10 YTL onama harcı temyiz edenden alındığından başkaca harç alınmasına mahal olmadığına, 27/12/2007 gününde oybirliğiyle karar verildi."}]
```
### BibTeX entry and citation info
Will be added soon.
|
{"language": "tr", "license": "mit"}
|
emre/jurisprudence-textgen-gpt-2
| null |
[
"transformers",
"tf",
"gpt2",
"tr",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #tf #gpt2 #tr #license-mit #endpoints_compatible #text-generation-inference #region-us
|
# jurisprudence-textgen-gpt-2
Pretrained model on Turkish language using a causal language modeling (CLM) objective.
## Model description of Original GPT-2
GPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.
More precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token i only uses the inputs from 1 to i but not the future tokens.
This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.
## Model description of jurisprudence-textgen-gpt-2
Jurisprudence-textgen-gpt-2 is a transformers model for tensorflow pretrained with 18950 Turkish court Jurisprudence text data which has been obtained from [Bilirkisi GITHUB REPO TRAIN DATA] (URL with 5 epochs.
Model Training results are:
Epoch 1/5
4986/4986 - 2770s 552ms/step - loss: 4.0122 - output_1_loss: 4.0122 - output_1_accuracy: 0.4544
Epoch 2/5
4986/4986 - 2753s 552ms/step - loss: 2.7074 - output_1_loss: 2.7074 - output_1_accuracy: 0.5843
Epoch 3/5
4986/4986 - 2754s 552ms/step - loss: 2.3411 - output_1_loss: 2.3411 - output_1_accuracy: 0.6214
Epoch 4/5
4986/4986 - 2754s 552ms/step - loss: 2.1241 - output_1_loss: 2.1241 - output_1_accuracy: 0.6431
Epoch 5/5
4986/4986 - 2754s 552ms/step - loss: 1.9647 - output_1_loss: 1.9647 - output_1_accuracy: 0.6597
## Intended uses & limitations
You can use the raw model for text generation or fine-tune it to a turkish law included downstream task. See the
model hub to look for fine-tuned versions on a task that interests you.
### How to use
You can use this model directly with a pipeline for text generation.
Here is how to use this model to get the features of a given text in Tensorflow:
### BibTeX entry and citation info
soon will be defined..
|
[
"# jurisprudence-textgen-gpt-2\n\nPretrained model on Turkish language using a causal language modeling (CLM) objective.",
"## Model description of Original GPT-2\nGPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token i only uses the inputs from 1 to i but not the future tokens.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.",
"## Model description of jurisprudence-textgen-gpt-2\nJurisprudence-textgen-gpt-2 is a transformers model for tensorflow pretrained with 18950 Turkish court Jurisprudence text data which has been obtained from [Bilirkisi GITHUB REPO TRAIN DATA] (URL with 5 epochs. \nModel Training results are:\n\nEpoch 1/5\n4986/4986 - 2770s 552ms/step - loss: 4.0122 - output_1_loss: 4.0122 - output_1_accuracy: 0.4544 \n\nEpoch 2/5\n4986/4986 - 2753s 552ms/step - loss: 2.7074 - output_1_loss: 2.7074 - output_1_accuracy: 0.5843 \n\nEpoch 3/5\n4986/4986 - 2754s 552ms/step - loss: 2.3411 - output_1_loss: 2.3411 - output_1_accuracy: 0.6214 \n\nEpoch 4/5\n4986/4986 - 2754s 552ms/step - loss: 2.1241 - output_1_loss: 2.1241 - output_1_accuracy: 0.6431 \n\nEpoch 5/5\n4986/4986 - 2754s 552ms/step - loss: 1.9647 - output_1_loss: 1.9647 - output_1_accuracy: 0.6597",
"## Intended uses & limitations\nYou can use the raw model for text generation or fine-tune it to a turkish law included downstream task. See the\nmodel hub to look for fine-tuned versions on a task that interests you.",
"### How to use\nYou can use this model directly with a pipeline for text generation.\nHere is how to use this model to get the features of a given text in Tensorflow:",
"### BibTeX entry and citation info\nsoon will be defined.."
] |
[
"TAGS\n#transformers #tf #gpt2 #tr #license-mit #endpoints_compatible #text-generation-inference #region-us \n",
"# jurisprudence-textgen-gpt-2\n\nPretrained model on Turkish language using a causal language modeling (CLM) objective.",
"## Model description of Original GPT-2\nGPT-2 is a transformers model pretrained on a very large corpus of English data in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it was trained to guess the next word in sentences.\n\nMore precisely, inputs are sequences of continuous text of a certain length and the targets are the same sequence, shifted one token (word or piece of word) to the right. The model uses internally a mask-mechanism to make sure the predictions for the token i only uses the inputs from 1 to i but not the future tokens.\n\nThis way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks. The model is best at what it was pretrained for however, which is generating texts from a prompt.",
"## Model description of jurisprudence-textgen-gpt-2\nJurisprudence-textgen-gpt-2 is a transformers model for tensorflow pretrained with 18950 Turkish court Jurisprudence text data which has been obtained from [Bilirkisi GITHUB REPO TRAIN DATA] (URL with 5 epochs. \nModel Training results are:\n\nEpoch 1/5\n4986/4986 - 2770s 552ms/step - loss: 4.0122 - output_1_loss: 4.0122 - output_1_accuracy: 0.4544 \n\nEpoch 2/5\n4986/4986 - 2753s 552ms/step - loss: 2.7074 - output_1_loss: 2.7074 - output_1_accuracy: 0.5843 \n\nEpoch 3/5\n4986/4986 - 2754s 552ms/step - loss: 2.3411 - output_1_loss: 2.3411 - output_1_accuracy: 0.6214 \n\nEpoch 4/5\n4986/4986 - 2754s 552ms/step - loss: 2.1241 - output_1_loss: 2.1241 - output_1_accuracy: 0.6431 \n\nEpoch 5/5\n4986/4986 - 2754s 552ms/step - loss: 1.9647 - output_1_loss: 1.9647 - output_1_accuracy: 0.6597",
"## Intended uses & limitations\nYou can use the raw model for text generation or fine-tune it to a turkish law included downstream task. See the\nmodel hub to look for fine-tuned versions on a task that interests you.",
"### How to use\nYou can use this model directly with a pipeline for text generation.\nHere is how to use this model to get the features of a given text in Tensorflow:",
"### BibTeX entry and citation info\nsoon will be defined.."
] |
automatic-speech-recognition
|
transformers
|
# wav2vec-tr-lite-AG
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("emre/wav2vec-tr-lite-AG")
model = Wav2Vec2ForCTC.from_pretrained("emre/wav2vec-tr-lite-AG")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
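
# The lines above only load the model. A hedged continuation for decoding the
# Common Voice clips follows; it mirrors the usual wav2vec2 model-card pattern
# and is an assumption, not taken verbatim from this card.
def speech_file_to_array_fn(batch):
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits

predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```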
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.00005
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- gradient_accumulation_steps: 8
- total_train_batch_size: 32
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.4388 | 3.7 | 400 | 1.366 | 0.9701 |
| 0.3766 | 7.4 | 800 | 0.4914 | 0.5374 |
| 0.2295 | 11.11 | 1200 | 0.3934 | 0.4125 |
| 0.1121 | 14.81 | 1600 | 0.3264 | 0.2904 |
| 0.1473 | 18.51 | 2000 | 0.3103 | 0.2671 |
| 0.1013 | 22.22 | 2400 | 0.2589 | 0.2324 |
| 0.0704 | 25.92 | 2800 | 0.2826 | 0.2339 |
| 0.0537 | 29.63 | 3200 | 0.2704 | 0.2309 |
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.8.1
- Datasets 1.14.1.dev0
- Tokenizers 0.10.3
|
{"language": "tr", "license": "apache-2.0", "tags": ["audio", "automatic-speech-recognition", "speech"], "datasets": ["common_voice"], "metrics": ["wer"]}
|
emre/wav2vec-tr-lite-AG
| null |
[
"transformers",
"pytorch",
"jax",
"wav2vec2",
"automatic-speech-recognition",
"audio",
"speech",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #tr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec-tr-lite-AG
==================
Usage
-----
The model can be used directly (without a language model) as follows:
'''python
import torch
import torchaudio
from datasets import load\_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test\_dataset = load\_dataset("common\_voice", "tr", split="test[:2%]")
processor = Wav2Vec2Processor.from\_pretrained("emre/wav2vec-tr-lite-AG")
model = Wav2Vec2ForCTC.from\_pretrained("emre/wav2vec-tr-lite-AG")
resampler = torchaudio.transforms.Resample(48\_000, 16\_000)
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.00005
* train\_batch\_size: 2
* eval\_batch\_size: 8
* seed: 42
* distributed\_type: multi-GPU
* num\_devices: 2
* gradient\_accumulation\_steps: 8
* total\_train\_batch\_size: 32
* total\_eval\_batch\_size: 16
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.12.0.dev0
* Pytorch 1.8.1
* Datasets 1.14.1.dev0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00005\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.8.1\n* Datasets 1.14.1.dev0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #jax #wav2vec2 #automatic-speech-recognition #audio #speech #tr #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.00005\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 8\n* seed: 42\n* distributed\\_type: multi-GPU\n* num\\_devices: 2\n* gradient\\_accumulation\\_steps: 8\n* total\\_train\\_batch\\_size: 32\n* total\\_eval\\_batch\\_size: 16\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.0.dev0\n* Pytorch 1.8.1\n* Datasets 1.14.1.dev0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-large-xls-r-300m-tr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_8_0 - TR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2224
- Wer: 0.2869
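A minimal transcription sketch (hedged: the model id is inferred from this repository's name, and the pipeline needs ffmpeg available to decode audio files):

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="emre/wav2vec2-large-xls-r-300m-tr",
)
# Path to a 16 kHz Turkish recording (placeholder file name).
print(asr("speech_tr.wav")["text"])
```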
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.8222 | 0.64 | 500 | 3.5026 | 1.0 |
| 3.2136 | 1.28 | 1000 | 3.0593 | 1.0000 |
| 2.8882 | 1.91 | 1500 | 2.4670 | 0.9939 |
| 2.3743 | 2.55 | 2000 | 1.1844 | 0.8657 |
| 1.9456 | 3.19 | 2500 | 0.8228 | 0.7397 |
| 1.7781 | 3.83 | 3000 | 0.6826 | 0.6753 |
| 1.6848 | 4.46 | 3500 | 0.5885 | 0.6140 |
| 1.6228 | 5.1 | 4000 | 0.5274 | 0.5789 |
| 1.5768 | 5.74 | 4500 | 0.4900 | 0.5519 |
| 1.5431 | 6.38 | 5000 | 0.4508 | 0.5238 |
| 1.5019 | 7.02 | 5500 | 0.4248 | 0.5021 |
| 1.4684 | 7.65 | 6000 | 0.4009 | 0.4827 |
| 1.4635 | 8.29 | 6500 | 0.3830 | 0.4700 |
| 1.4291 | 8.93 | 7000 | 0.3707 | 0.4595 |
| 1.4271 | 9.57 | 7500 | 0.3570 | 0.4514 |
| 1.3938 | 10.2 | 8000 | 0.3479 | 0.4378 |
| 1.3914 | 10.84 | 8500 | 0.3396 | 0.4368 |
| 1.3767 | 11.48 | 9000 | 0.3253 | 0.4262 |
| 1.3641 | 12.12 | 9500 | 0.3251 | 0.4178 |
| 1.355 | 12.76 | 10000 | 0.3138 | 0.4136 |
| 1.336 | 13.39 | 10500 | 0.3121 | 0.4069 |
| 1.3292 | 14.03 | 11000 | 0.3041 | 0.4014 |
| 1.3249 | 14.67 | 11500 | 0.3014 | 0.3931 |
| 1.3156 | 15.31 | 12000 | 0.3014 | 0.3929 |
| 1.313 | 15.94 | 12500 | 0.2969 | 0.3968 |
| 1.3068 | 16.58 | 13000 | 0.2965 | 0.3966 |
| 1.2785 | 17.22 | 13500 | 0.2943 | 0.3850 |
| 1.2867 | 17.86 | 14000 | 0.2912 | 0.3782 |
| 1.2714 | 18.49 | 14500 | 0.2819 | 0.3747 |
| 1.2844 | 19.13 | 15000 | 0.2840 | 0.3740 |
| 1.2684 | 19.77 | 15500 | 0.2913 | 0.3828 |
| 1.26 | 20.41 | 16000 | 0.2739 | 0.3674 |
| 1.2543 | 21.05 | 16500 | 0.2740 | 0.3691 |
| 1.2532 | 21.68 | 17000 | 0.2709 | 0.3756 |
| 1.2409 | 22.32 | 17500 | 0.2669 | 0.3593 |
| 1.2404 | 22.96 | 18000 | 0.2673 | 0.3576 |
| 1.2347 | 23.6 | 18500 | 0.2678 | 0.3643 |
| 1.2351 | 24.23 | 19000 | 0.2715 | 0.3650 |
| 1.2409 | 24.87 | 19500 | 0.2637 | 0.3571 |
| 1.2152 | 25.51 | 20000 | 0.2785 | 0.3609 |
| 1.2046 | 26.15 | 20500 | 0.2610 | 0.3508 |
| 1.2082 | 26.79 | 21000 | 0.2619 | 0.3461 |
| 1.2109 | 27.42 | 21500 | 0.2597 | 0.3502 |
| 1.2014 | 28.06 | 22000 | 0.2608 | 0.3468 |
| 1.1948 | 28.7 | 22500 | 0.2573 | 0.3457 |
| 1.205 | 29.34 | 23000 | 0.2619 | 0.3464 |
| 1.2019 | 29.97 | 23500 | 0.2559 | 0.3474 |
| 1.1917 | 30.61 | 24000 | 0.2601 | 0.3462 |
| 1.1939 | 31.25 | 24500 | 0.2575 | 0.3387 |
| 1.1882 | 31.89 | 25000 | 0.2535 | 0.3368 |
| 1.191 | 32.53 | 25500 | 0.2489 | 0.3365 |
| 1.1767 | 33.16 | 26000 | 0.2501 | 0.3347 |
| 1.167 | 33.8 | 26500 | 0.2504 | 0.3347 |
| 1.1678 | 34.44 | 27000 | 0.2480 | 0.3378 |
| 1.1803 | 35.08 | 27500 | 0.2487 | 0.3345 |
| 1.167 | 35.71 | 28000 | 0.2442 | 0.3319 |
| 1.1661 | 36.35 | 28500 | 0.2495 | 0.3334 |
| 1.164 | 36.99 | 29000 | 0.2472 | 0.3292 |
| 1.1578 | 37.63 | 29500 | 0.2442 | 0.3242 |
| 1.1584 | 38.27 | 30000 | 0.2431 | 0.3314 |
| 1.1526 | 38.9 | 30500 | 0.2441 | 0.3347 |
| 1.1542 | 39.54 | 31000 | 0.2437 | 0.3330 |
| 1.1508 | 40.18 | 31500 | 0.2433 | 0.3294 |
| 1.1406 | 40.82 | 32000 | 0.2434 | 0.3271 |
| 1.1514 | 41.45 | 32500 | 0.2426 | 0.3255 |
| 1.1418 | 42.09 | 33000 | 0.2432 | 0.3233 |
| 1.1365 | 42.73 | 33500 | 0.2436 | 0.3240 |
| 1.1348 | 43.37 | 34000 | 0.2483 | 0.3257 |
| 1.1301 | 44.01 | 34500 | 0.2420 | 0.3271 |
| 1.1268 | 44.64 | 35000 | 0.2472 | 0.3225 |
| 1.1224 | 45.28 | 35500 | 0.2382 | 0.3205 |
| 1.1224 | 45.92 | 36000 | 0.2388 | 0.3184 |
| 1.1198 | 46.56 | 36500 | 0.2382 | 0.3202 |
| 1.1274 | 47.19 | 37000 | 0.2404 | 0.3172 |
| 1.1147 | 47.83 | 37500 | 0.2394 | 0.3164 |
| 1.121 | 48.47 | 38000 | 0.2406 | 0.3202 |
| 1.1109 | 49.11 | 38500 | 0.2384 | 0.3154 |
| 1.1164 | 49.74 | 39000 | 0.2375 | 0.3169 |
| 1.1105 | 50.38 | 39500 | 0.2387 | 0.3173 |
| 1.1054 | 51.02 | 40000 | 0.2362 | 0.3120 |
| 1.0893 | 51.66 | 40500 | 0.2399 | 0.3130 |
| 1.0913 | 52.3 | 41000 | 0.2357 | 0.3088 |
| 1.1017 | 52.93 | 41500 | 0.2345 | 0.3084 |
| 1.0937 | 53.57 | 42000 | 0.2330 | 0.3140 |
| 1.0945 | 54.21 | 42500 | 0.2399 | 0.3107 |
| 1.0933 | 54.85 | 43000 | 0.2383 | 0.3134 |
| 1.0912 | 55.48 | 43500 | 0.2372 | 0.3077 |
| 1.0898 | 56.12 | 44000 | 0.2339 | 0.3083 |
| 1.0903 | 56.76 | 44500 | 0.2367 | 0.3065 |
| 1.0947 | 57.4 | 45000 | 0.2352 | 0.3104 |
| 1.0751 | 58.04 | 45500 | 0.2334 | 0.3084 |
| 1.09 | 58.67 | 46000 | 0.2328 | 0.3100 |
| 1.0876 | 59.31 | 46500 | 0.2276 | 0.3050 |
| 1.076 | 59.95 | 47000 | 0.2309 | 0.3047 |
| 1.086 | 60.59 | 47500 | 0.2293 | 0.3047 |
| 1.082 | 61.22 | 48000 | 0.2328 | 0.3027 |
| 1.0714 | 61.86 | 48500 | 0.2290 | 0.3020 |
| 1.0746 | 62.5 | 49000 | 0.2313 | 0.3059 |
| 1.076 | 63.14 | 49500 | 0.2342 | 0.3050 |
| 1.0648 | 63.78 | 50000 | 0.2286 | 0.3025 |
| 1.0586 | 64.41 | 50500 | 0.2338 | 0.3044 |
| 1.0753 | 65.05 | 51000 | 0.2308 | 0.3045 |
| 1.0664 | 65.69 | 51500 | 0.2273 | 0.3009 |
| 1.0739 | 66.33 | 52000 | 0.2298 | 0.3027 |
| 1.0695 | 66.96 | 52500 | 0.2247 | 0.2996 |
| 1.06 | 67.6 | 53000 | 0.2276 | 0.3015 |
| 1.0742 | 68.24 | 53500 | 0.2280 | 0.2974 |
| 1.0618 | 68.88 | 54000 | 0.2291 | 0.2989 |
| 1.062 | 69.52 | 54500 | 0.2302 | 0.2971 |
| 1.0572 | 70.15 | 55000 | 0.2280 | 0.2990 |
| 1.055 | 70.79 | 55500 | 0.2278 | 0.2983 |
| 1.0553 | 71.43 | 56000 | 0.2282 | 0.2991 |
| 1.0509 | 72.07 | 56500 | 0.2261 | 0.2959 |
| 1.0469 | 72.7 | 57000 | 0.2216 | 0.2919 |
| 1.0476 | 73.34 | 57500 | 0.2267 | 0.2989 |
| 1.0494 | 73.98 | 58000 | 0.2260 | 0.2960 |
| 1.0517 | 74.62 | 58500 | 0.2297 | 0.2989 |
| 1.0458 | 75.26 | 59000 | 0.2246 | 0.2923 |
| 1.0382 | 75.89 | 59500 | 0.2255 | 0.2922 |
| 1.0462 | 76.53 | 60000 | 0.2258 | 0.2954 |
| 1.0375 | 77.17 | 60500 | 0.2251 | 0.2929 |
| 1.0332 | 77.81 | 61000 | 0.2277 | 0.2940 |
| 1.0423 | 78.44 | 61500 | 0.2243 | 0.2896 |
| 1.0379 | 79.08 | 62000 | 0.2274 | 0.2928 |
| 1.0398 | 79.72 | 62500 | 0.2237 | 0.2928 |
| 1.0395 | 80.36 | 63000 | 0.2265 | 0.2956 |
| 1.0397 | 80.99 | 63500 | 0.2240 | 0.2920 |
| 1.0262 | 81.63 | 64000 | 0.2244 | 0.2934 |
| 1.0335 | 82.27 | 64500 | 0.2265 | 0.2936 |
| 1.0385 | 82.91 | 65000 | 0.2238 | 0.2928 |
| 1.0289 | 83.55 | 65500 | 0.2219 | 0.2912 |
| 1.0372 | 84.18 | 66000 | 0.2236 | 0.2898 |
| 1.0279 | 84.82 | 66500 | 0.2219 | 0.2902 |
| 1.0325 | 85.46 | 67000 | 0.2240 | 0.2908 |
| 1.0202 | 86.1 | 67500 | 0.2206 | 0.2886 |
| 1.0166 | 86.73 | 68000 | 0.2219 | 0.2886 |
| 1.0259 | 87.37 | 68500 | 0.2235 | 0.2897 |
| 1.0337 | 88.01 | 69000 | 0.2210 | 0.2873 |
| 1.0264 | 88.65 | 69500 | 0.2216 | 0.2882 |
| 1.0231 | 89.29 | 70000 | 0.2223 | 0.2899 |
| 1.0281 | 89.92 | 70500 | 0.2214 | 0.2872 |
| 1.0135 | 90.56 | 71000 | 0.2218 | 0.2868 |
| 1.0291 | 91.2 | 71500 | 0.2209 | 0.2863 |
| 1.0321 | 91.84 | 72000 | 0.2199 | 0.2876 |
| 1.028 | 92.47 | 72500 | 0.2214 | 0.2858 |
| 1.0213 | 93.11 | 73000 | 0.2219 | 0.2875 |
| 1.0261 | 93.75 | 73500 | 0.2232 | 0.2869 |
| 1.0197 | 94.39 | 74000 | 0.2227 | 0.2866 |
| 1.0298 | 95.03 | 74500 | 0.2228 | 0.2868 |
| 1.0192 | 95.66 | 75000 | 0.2230 | 0.2865 |
| 1.0156 | 96.3 | 75500 | 0.2220 | 0.2869 |
| 1.0075 | 96.94 | 76000 | 0.2223 | 0.2866 |
| 1.0201 | 97.58 | 76500 | 0.2219 | 0.2866 |
| 1.0159 | 98.21 | 77000 | 0.2219 | 0.2876 |
| 1.0087 | 98.85 | 77500 | 0.2219 | 0.2873 |
| 1.0159 | 99.49 | 78000 | 0.2223 | 0.2867 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2.dev0
- Tokenizers 0.11.0
|
{"language": "tr", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_8_0", "robust-speech-event"], "datasets": ["mozilla-foundation/common_voice_8_0"], "base_model": "facebook/wav2vec2-xls-r-300m", "model-index": [{"name": "wav2vec2-large-xls-r-300m-tr", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice_8_0", "args": "tr"}, "metrics": [{"type": "wer", "value": 28.69, "name": "Test WER"}]}]}]}
|
emre/wav2vec2-large-xls-r-300m-tr
| null |
[
"transformers",
"pytorch",
"safetensors",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"mozilla-foundation/common_voice_8_0",
"robust-speech-event",
"tr",
"dataset:mozilla-foundation/common_voice_8_0",
"base_model:facebook/wav2vec2-xls-r-300m",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #tr #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xls-r-300m-tr
============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the MOZILLA-FOUNDATION/COMMON\_VOICE\_8\_0 - TR dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2224
* Wer: 0.2869
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 100.0
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.17.0.dev0
* Pytorch 1.10.2+cu102
* Datasets 1.18.2.dev0
* Tokenizers 0.11.0
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
[
"TAGS\n#transformers #pytorch #safetensors #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #mozilla-foundation/common_voice_8_0 #robust-speech-event #tr #dataset-mozilla-foundation/common_voice_8_0 #base_model-facebook/wav2vec2-xls-r-300m #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 100.0\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.17.0.dev0\n* Pytorch 1.10.2+cu102\n* Datasets 1.18.2.dev0\n* Tokenizers 0.11.0"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-W2V2-TATAR-SMALL
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4714
- Wer: 0.5316
## Model description
More information needed
## Intended uses & limitations
More information needed
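
A minimal transcription sketch for this checkpoint, assuming 16 kHz target audio and the standard `Wav2Vec2Processor`/`Wav2Vec2ForCTC` API; the audio path is a placeholder:

```python
import torch
import torchaudio
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

model_id = "emre/wav2vec2-large-xlsr-53-W2V2-TATAR-SMALL"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load an audio file (placeholder path) and resample to the 16 kHz the model expects.
speech, sample_rate = torchaudio.load("sample.wav")
speech = torchaudio.transforms.Resample(sample_rate, 16_000)(speech).squeeze().numpy()

inputs = processor(speech, sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

# Greedy CTC decoding.
predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```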
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 6.2446 | 1.17 | 400 | 3.2621 | 1.0 |
| 1.739 | 2.35 | 800 | 0.5832 | 0.7688 |
| 0.4718 | 3.52 | 1200 | 0.4785 | 0.6824 |
| 0.3574 | 4.69 | 1600 | 0.4814 | 0.6792 |
| 0.2946 | 5.86 | 2000 | 0.4484 | 0.6506 |
| 0.2674 | 7.04 | 2400 | 0.4612 | 0.6225 |
| 0.2349 | 8.21 | 2800 | 0.4600 | 0.6050 |
| 0.2206 | 9.38 | 3200 | 0.4772 | 0.6048 |
| 0.2072 | 10.56 | 3600 | 0.4676 | 0.6106 |
| 0.1984 | 11.73 | 4000 | 0.4816 | 0.6079 |
| 0.1793 | 12.9 | 4400 | 0.4616 | 0.5836 |
| 0.172 | 14.08 | 4800 | 0.4808 | 0.5860 |
| 0.1624 | 15.25 | 5200 | 0.4854 | 0.5820 |
| 0.156 | 16.42 | 5600 | 0.4609 | 0.5656 |
| 0.1448 | 17.59 | 6000 | 0.4926 | 0.5817 |
| 0.1406 | 18.77 | 6400 | 0.4638 | 0.5654 |
| 0.1337 | 19.94 | 6800 | 0.4731 | 0.5652 |
| 0.1317 | 21.11 | 7200 | 0.4861 | 0.5639 |
| 0.1179 | 22.29 | 7600 | 0.4766 | 0.5521 |
| 0.1197 | 23.46 | 8000 | 0.4824 | 0.5584 |
| 0.1096 | 24.63 | 8400 | 0.5006 | 0.5559 |
| 0.1038 | 25.81 | 8800 | 0.4994 | 0.5440 |
| 0.0992 | 26.98 | 9200 | 0.4867 | 0.5405 |
| 0.0984 | 28.15 | 9600 | 0.4798 | 0.5361 |
| 0.0943 | 29.33 | 10000 | 0.4714 | 0.5316 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"language": "tt", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event", "tt"], "datasets": ["common_voice"], "base_model": "facebook/wav2vec2-large-xlsr-53", "model-index": [{"name": "wav2vec2-large-xlsr-53-W2V2-TATAR-SMALL", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "tt"}, "metrics": [{"type": "wer", "value": 53.16, "name": "Test WER"}]}]}]}
|
emre/wav2vec2-large-xlsr-53-W2V2-TATAR-SMALL
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"tt",
"dataset:common_voice",
"base_model:facebook/wav2vec2-large-xlsr-53",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tt"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #tt #dataset-common_voice #base_model-facebook/wav2vec2-large-xlsr-53 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xlsr-53-W2V2-TATAR-SMALL
=======================================
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4714
* Wer: 0.5316
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #tt #dataset-common_voice #base_model-facebook/wav2vec2-large-xlsr-53 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-W2V2-TR-MED
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4467
- Wer: 0.4598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (gradient accumulation is sketched after this list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
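
The gradient accumulation named above works as sketched below: gradients from 2 micro-batches of 16 are accumulated before each optimizer step, giving the effective batch size of 32 (the model and data here are stand-ins):

```python
import torch

model = torch.nn.Linear(16, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
accumulation_steps = 2  # as listed above

for step in range(4):
    x = torch.randn(16, 16)                              # micro-batch of 16
    loss = model(x).pow(2).mean() / accumulation_steps   # scale so gradients average
    loss.backward()                                      # gradients accumulate in .grad
    if (step + 1) % accumulation_steps == 0:
        optimizer.step()                                 # one update per 32 effective samples
        optimizer.zero_grad()
```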
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1343 | 4.21 | 400 | 2.3674 | 1.0372 |
| 0.8075 | 8.42 | 800 | 0.4583 | 0.6308 |
| 0.3209 | 12.63 | 1200 | 0.4291 | 0.5531 |
| 0.2273 | 16.84 | 1600 | 0.4348 | 0.5378 |
| 0.1764 | 21.05 | 2000 | 0.4550 | 0.5326 |
| 0.148 | 25.26 | 2400 | 0.4839 | 0.5319 |
| 0.1268 | 29.47 | 2800 | 0.4515 | 0.5070 |
| 0.1113 | 33.68 | 3200 | 0.4590 | 0.4930 |
| 0.1025 | 37.89 | 3600 | 0.4546 | 0.4888 |
| 0.0922 | 42.11 | 4000 | 0.4782 | 0.4852 |
| 0.082 | 46.32 | 4400 | 0.4605 | 0.4752 |
| 0.0751 | 50.53 | 4800 | 0.4358 | 0.4689 |
| 0.0699 | 54.74 | 5200 | 0.4359 | 0.4629 |
| 0.0633 | 58.95 | 5600 | 0.4467 | 0.4598 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-W2V2-TR-MED", "results": []}]}
|
emre/wav2vec2-large-xlsr-53-W2V2-TR-MED
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xlsr-53-W2V2-TR-MED
==================================
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4467
* Wer: 0.4598
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 60
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3966
- Wer: 0.4834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1516 | 4.21 | 400 | 2.7673 | 1.0 |
| 0.9134 | 8.42 | 800 | 0.4618 | 0.6418 |
| 0.3273 | 12.63 | 1200 | 0.4188 | 0.5535 |
| 0.2252 | 16.84 | 1600 | 0.4144 | 0.5232 |
| 0.1692 | 21.05 | 2000 | 0.3995 | 0.5030 |
| 0.1355 | 25.26 | 2400 | 0.4073 | 0.4920 |
| 0.1172 | 29.47 | 2800 | 0.3966 | 0.4834 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-demo-colab", "results": []}]}
|
emre/wav2vec2-large-xlsr-53-demo-colab
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-large-xlsr-53-demo-colab
=================================
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3966
* Wer: 0.4834
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-sah-CV8
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5089
- Wer: 0.5606
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
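
A minimal sketch of loading the Sakha split of Common Voice with `datasets`; the split string is an assumption, since the card does not document the exact data preparation:

```python
from datasets import load_dataset, Audio

# Load the Sakha ("sah") configuration of Common Voice.
common_voice = load_dataset("common_voice", "sah", split="train+validation")
# Decode audio at the 16 kHz sampling rate wav2vec 2.0 expects.
common_voice = common_voice.cast_column("audio", Audio(sampling_rate=16_000))
print(common_voice[0]["sentence"])
```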
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6849 | 16.67 | 500 | 1.1135 | 0.9344 |
| 0.8223 | 33.33 | 1000 | 0.5148 | 0.5686 |
| 0.5477 | 50.0 | 1500 | 0.5089 | 0.5606 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
{"language": "sah", "license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-large-xlsr-53-sah-CV8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice sah", "type": "common_voice", "args": "sah"}, "metrics": [{"type": "wer", "value": 56.06, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "sah"}, "metrics": [{"type": "wer", "value": 43.75, "name": "Test WER"}]}]}]}
|
emre/wav2vec2-large-xlsr-53-sah-CV8
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"sah",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sah"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #sah #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-large-xlsr-53-sah-CV8
==============================
This model is a fine-tuned version of facebook/wav2vec2-large-xlsr-53 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5089
* Wer: 0.5606
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 300
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #sah #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-xls-r-300m-Br-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0573
- Wer: 0.6675
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the Native AMP pattern is sketched after this list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
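
"Native AMP" refers to PyTorch's built-in automatic mixed precision; a minimal sketch of the pattern, with a stand-in model and batch and a CUDA device assumed:

```python
import torch

model = torch.nn.Linear(16, 16).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4)
scaler = torch.cuda.amp.GradScaler()

batch = torch.randn(16, 16, device="cuda")
with torch.cuda.amp.autocast():           # forward pass runs in mixed precision
    loss = model(batch).pow(2).mean()
scaler.scale(loss).backward()             # scale the loss to avoid fp16 underflow
scaler.step(optimizer)                    # unscale gradients, then update
scaler.update()
optimizer.zero_grad()
```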
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.7464 | 2.79 | 400 | 1.7474 | 1.1018 |
| 1.1117 | 5.59 | 800 | 0.9434 | 0.8697 |
| 0.6481 | 8.39 | 1200 | 0.9251 | 0.7910 |
| 0.4754 | 11.19 | 1600 | 0.9208 | 0.7412 |
| 0.3602 | 13.98 | 2000 | 0.9284 | 0.7232 |
| 0.2873 | 16.78 | 2400 | 0.9299 | 0.6940 |
| 0.2386 | 19.58 | 2800 | 1.0182 | 0.6927 |
| 0.1971 | 22.38 | 3200 | 1.0456 | 0.6898 |
| 0.1749 | 25.17 | 3600 | 1.0208 | 0.6769 |
| 0.1487 | 27.97 | 4000 | 1.0573 | 0.6675 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"language": "br", "license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-Br-small", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice br", "type": "common_voice", "args": "br"}, "metrics": [{"type": "wer", "value": 66.75, "name": "Test WER"}]}]}]}
|
emre/wav2vec2-xls-r-300m-Br-small
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"br",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"br"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #br #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-Br-small
============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 1.0573
* Wer: 0.6675
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #br #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Russian-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set (the WER computation is sketched after this list):
- Loss: 0.3514
- Wer: 0.4838
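
WER is the word-level edit distance divided by the reference word count; a minimal sketch with the `datasets` metric, using hypothetical strings rather than outputs from this run:

```python
from datasets import load_metric

wer_metric = load_metric("wer")
predictions = ["привет мир"]          # hypothetical model output (2 words)
references = ["привет весь мир"]      # hypothetical reference (3 words)
# One deleted word over three reference words -> WER = 1/3.
print(wer_metric.compute(predictions=predictions, references=references))
```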
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.512 | 1.32 | 400 | 3.2207 | 1.0 |
| 3.1562 | 2.65 | 800 | 3.0166 | 1.0 |
| 1.5211 | 3.97 | 1200 | 0.7134 | 0.8275 |
| 0.6724 | 5.3 | 1600 | 0.4713 | 0.6402 |
| 0.4693 | 6.62 | 2000 | 0.3904 | 0.5668 |
| 0.3693 | 7.95 | 2400 | 0.3609 | 0.5121 |
| 0.3004 | 9.27 | 2800 | 0.3514 | 0.4838 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"language": ["ru"], "license": "apache-2.0", "tags": ["generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-Russian-small", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice ru", "type": "common_voice", "args": "ru"}, "metrics": [{"type": "wer", "value": 48.38, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "ru"}, "metrics": [{"type": "wer", "value": 58.25, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "ru"}, "metrics": [{"type": "wer", "value": 56.83, "name": "Test WER"}]}]}]}
|
emre/wav2vec2-xls-r-300m-Russian-small
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"ru",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ru"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #ru #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-Russian-small
=================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.3514
* Wer: 0.4838
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 10
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #ru #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 10\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8
This model is a fine-tuned version of [emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8](https://huggingface.co/emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2708
- Wer: 0.5010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.0402 | 0.67 | 500 | 0.3354 | 0.5681 |
| 0.7265 | 1.33 | 1000 | 0.3181 | 0.5444 |
| 0.6858 | 2.0 | 1500 | 0.3044 | 0.5322 |
| 0.6537 | 2.66 | 2000 | 0.2911 | 0.5217 |
| 0.6337 | 3.33 | 2500 | 0.2874 | 0.5164 |
| 0.6111 | 3.99 | 3000 | 0.2758 | 0.5059 |
| 0.5815 | 4.66 | 3500 | 0.2708 | 0.5010 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8", "results": []}]}
|
emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8
===========================================================
This model is a fine-tuned version of emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8 on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2708
* Wer: 0.5010
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 300
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Tr-med-CommonVoice8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2556
- Wer: 0.4914
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (the optimizer and schedule are sketched after this list):
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 20
- mixed_precision_training: Native AMP
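
The optimizer and linear warmup schedule named above are sketched below; the model is a stand-in, and 15000 training steps matches the final step count in the results table that follows:

```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 10)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(
    optimizer, num_warmup_steps=300, num_training_steps=15_000
)
optimizer.step()   # one dummy update so the scheduler has a step to follow
scheduler.step()   # learning rate ramps linearly over the first 300 steps
print(scheduler.get_last_lr())
```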
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4876 | 6.66 | 5000 | 0.3252 | 0.5784 |
| 0.6919 | 13.32 | 10000 | 0.2720 | 0.5172 |
| 0.5919 | 19.97 | 15000 | 0.2556 | 0.4914 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
{"language": "tr", "license": "apache-2.0", "tags": ["generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-Tr-med-CommonVoice8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice tr", "type": "common_voice", "args": "tr"}, "metrics": [{"type": "wer", "value": 49.14, "name": "Test WER"}]}]}]}
|
emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"tr",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-Tr-med-CommonVoice8
=======================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2556
* Wer: 0.4914
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 300
* num\_epochs: 20
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #tr #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 20\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-med
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4727
- Wer: 0.4677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8093 | 4.21 | 400 | 2.7831 | 1.0 |
| 0.9881 | 8.42 | 800 | 0.5088 | 0.6681 |
| 0.3519 | 12.63 | 1200 | 0.4496 | 0.6007 |
| 0.2436 | 16.84 | 1600 | 0.4993 | 0.5654 |
| 0.1874 | 21.05 | 2000 | 0.4793 | 0.5530 |
| 0.1561 | 25.26 | 2400 | 0.5187 | 0.5589 |
| 0.1336 | 29.47 | 2800 | 0.5135 | 0.5311 |
| 0.1163 | 33.68 | 3200 | 0.4960 | 0.5143 |
| 0.1056 | 37.89 | 3600 | 0.4795 | 0.5045 |
| 0.0959 | 42.11 | 4000 | 0.4883 | 0.4987 |
| 0.0819 | 46.32 | 4400 | 0.4799 | 0.4903 |
| 0.0756 | 50.53 | 4800 | 0.4822 | 0.4831 |
| 0.0692 | 54.74 | 5200 | 0.4621 | 0.4762 |
| 0.062 | 58.95 | 5600 | 0.4727 | 0.4677 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-Turkish-Tr-med", "results": []}]}
|
emre/wav2vec2-xls-r-300m-Turkish-Tr-med
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-Turkish-Tr-med
==================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4727
* Wer: 0.4677
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 60
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 60\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4813
- Wer: 0.7207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2 | 0.53 | 400 | 3.1949 | 0.9964 |
| 2.9387 | 1.07 | 800 | 2.5015 | 1.0337 |
| 1.5975 | 1.6 | 1200 | 1.0928 | 0.9945 |
| 1.0688 | 2.13 | 1600 | 0.8388 | 0.9390 |
| 0.8977 | 2.66 | 2000 | 0.7106 | 0.8889 |
| 0.789 | 3.2 | 2400 | 0.6051 | 0.8273 |
| 0.7116 | 3.73 | 2800 | 0.5580 | 0.7855 |
| 0.6576 | 4.26 | 3200 | 0.5033 | 0.7433 |
| 0.6002 | 4.79 | 3600 | 0.4813 | 0.7207 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8", "results": []}]}
|
emre/wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-Turkish-Tr-small-CommonVoice8
=================================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4813
* Wer: 0.7207
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 5
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 5\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4375
- Wer: 0.5050
## Model description
More information needed
## Intended uses & limitations
More information needed
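
No usage example is given, but for quick experiments the high-level `pipeline` API should also work here (a sketch; the audio file is hypothetical, and decoding it from disk requires ffmpeg):

```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition",
               model="emre/wav2vec2-xls-r-300m-Turkish-Tr-small")
print(asr("example_tr.wav"))  # e.g. {'text': '...'}
```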
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8735 | 4.21 | 400 | 2.8173 | 1.0002 |
| 1.0073 | 8.42 | 800 | 0.4981 | 0.6717 |
| 0.3395 | 12.63 | 1200 | 0.4470 | 0.5866 |
| 0.2254 | 16.84 | 1600 | 0.4349 | 0.5491 |
| 0.1648 | 21.05 | 2000 | 0.4454 | 0.5284 |
| 0.1325 | 25.26 | 2400 | 0.4552 | 0.5131 |
| 0.1102 | 29.47 | 2800 | 0.4375 | 0.5050 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-Turkish-Tr-small", "results": []}]}
|
emre/wav2vec2-xls-r-300m-Turkish-Tr-small
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-Turkish-Tr-small
====================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.4375
* Wer: 0.5050
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 30
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 30\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-W2V2-XLSR-300M-YAKUT-SMALL
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9068
- Wer: 0.7900
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
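
The dataset preparation is not documented, but Common Voice fine-tuning scripts of this kind typically normalize the transcripts before building the CTC vocabulary; the sketch below shows that step, with the caveat that the exact character set removed is an assumption rather than the script's documented behaviour:

```python
import re

# Hypothetical punctuation set; the actual training script may differ.
chars_to_ignore = re.compile(r"[\,\?\.\!\-\;\:\"\%\'\“\”\�]")

def normalize(batch):
    # Lowercase and strip punctuation so transcripts match the CTC vocabulary.
    batch["sentence"] = chars_to_ignore.sub("", batch["sentence"]).lower().strip()
    return batch
```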
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.6926 | 19.05 | 400 | 2.7538 | 1.0 |
| 0.7031 | 38.1 | 800 | 0.9068 | 0.7900 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"language": "sah", "license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-W2V2-XLSR-300M-YAKUT-SMALL", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice sah", "type": "common_voice", "args": "sah"}, "metrics": [{"type": "wer", "value": 79.0, "name": "Test WER"}]}]}]}
|
emre/wav2vec2-xls-r-300m-W2V2-XLSR-300M-YAKUT-SMALL
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"sah",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"sah"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #sah #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-W2V2-XLSR-300M-YAKUT-SMALL
==============================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9068
* Wer: 0.7900
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0003
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 500
* num\_epochs: 50
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #sah #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0003\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 500\n* num\\_epochs: 50\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
# wav2vec2-xls-r-300m-ab-CV8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2105
- Wer: 0.5474
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
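
Evaluation details are likewise undocumented, but a reported WER like the one above can be recomputed with the `wer` metric from datasets (a sketch; the prediction/reference pairs are hypothetical, and the metric needs the jiwer package):

```python
from datasets import load_metric

wer_metric = load_metric("wer")  # requires: pip install jiwer

# In practice these come from running the model over the Common Voice test split.
predictions = ["transcription produced by the model"]  # hypothetical
references = ["ground truth transcription"]            # hypothetical
print(wer_metric.compute(predictions=predictions, references=references))
```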
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 4.7729 | 0.63 | 500 | 3.0624 | 1.0021 |
| 2.7348 | 1.26 | 1000 | 1.0460 | 0.9815 |
| 1.2756 | 1.9 | 1500 | 0.4618 | 0.8309 |
| 1.0419 | 2.53 | 2000 | 0.3725 | 0.7449 |
| 0.9491 | 3.16 | 2500 | 0.3368 | 0.7345 |
| 0.9006 | 3.79 | 3000 | 0.3014 | 0.6936 |
| 0.8519 | 4.42 | 3500 | 0.2852 | 0.6767 |
| 0.8243 | 5.06 | 4000 | 0.2701 | 0.6504 |
| 0.7902 | 5.69 | 4500 | 0.2641 | 0.6221 |
| 0.7767 | 6.32 | 5000 | 0.2549 | 0.6192 |
| 0.7516 | 6.95 | 5500 | 0.2515 | 0.6179 |
| 0.737 | 7.59 | 6000 | 0.2408 | 0.5963 |
| 0.7217 | 8.22 | 6500 | 0.2429 | 0.6261 |
| 0.7101 | 8.85 | 7000 | 0.2366 | 0.5687 |
| 0.6922 | 9.48 | 7500 | 0.2277 | 0.5680 |
| 0.6866 | 10.11 | 8000 | 0.2242 | 0.5847 |
| 0.6703 | 10.75 | 8500 | 0.2222 | 0.5803 |
| 0.6649 | 11.38 | 9000 | 0.2247 | 0.5765 |
| 0.6513 | 12.01 | 9500 | 0.2182 | 0.5644 |
| 0.6369 | 12.64 | 10000 | 0.2128 | 0.5508 |
| 0.6425 | 13.27 | 10500 | 0.2132 | 0.5514 |
| 0.6399 | 13.91 | 11000 | 0.2116 | 0.5495 |
| 0.6208 | 14.54 | 11500 | 0.2105 | 0.5474 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
{"language": "ab", "license": "apache-2.0", "tags": ["generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-ab-CV8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "ab"}, "metrics": [{"type": "wer", "value": 44.9, "name": "Test WER"}]}]}]}
|
emre/wav2vec2-xls-r-300m-ab-CV8
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"ab",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"ab"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #ab #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-ab-CV8
==========================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2105
* Wer: 0.5474
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 300
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #ab #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-as-CV8-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"language": "as", "license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-as-CV8-v1", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "as"}, "metrics": [{"type": "wer", "value": 100.0, "name": "Test WER"}]}]}]}
|
emre/wav2vec2-xls-r-300m-as-CV8-v1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"hf-asr-leaderboard",
"as",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"as"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #as #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
# wav2vec2-xls-r-300m-as-CV8-v1
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
[
"# wav2vec2-xls-r-300m-as-CV8-v1\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 300\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #hf-asr-leaderboard #as #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"# wav2vec2-xls-r-300m-as-CV8-v1\n\nThis model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common_voice dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 0.0001\n- train_batch_size: 16\n- eval_batch_size: 8\n- seed: 42\n- gradient_accumulation_steps: 2\n- total_train_batch_size: 32\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- lr_scheduler_warmup_steps: 300\n- num_epochs: 30\n- mixed_precision_training: Native AMP",
"### Training results",
"### Framework versions\n\n- Transformers 4.11.3\n- Pytorch 1.10.0+cu111\n- Datasets 1.18.3\n- Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-bas-CV8-v2
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6121
- Wer: 0.5697
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 90
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.5211 | 16.13 | 500 | 1.2661 | 0.9153 |
| 0.7026 | 32.25 | 1000 | 0.6245 | 0.6516 |
| 0.3752 | 48.38 | 1500 | 0.6039 | 0.6148 |
| 0.2752 | 64.51 | 2000 | 0.6080 | 0.5808 |
| 0.2155 | 80.63 | 2500 | 0.6121 | 0.5697 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"language": "bas", "license": "apache-2.0", "tags": ["automatic-speech-recognition", "common_voice", "generated_from_trainer", "bas", "robust-speech-event", "hf-asr-leaderboard"], "datasets": ["mozilla-foundation/common_voice_8_0"], "model-index": [{"name": "wav2vec2-xls-r-300m-bas-CV8-v2", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8", "type": "mozilla-foundation/common_voice_8_0", "args": "bas"}, "metrics": [{"type": "wer", "value": 56.97, "name": "Test WER"}]}]}]}
|
emre/wav2vec2-xls-r-300m-bas-CV8-v2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"common_voice",
"generated_from_trainer",
"bas",
"robust-speech-event",
"hf-asr-leaderboard",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"bas"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #bas #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-bas-CV8-v2
==============================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.6121
* Wer: 0.5697
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 300
* num\_epochs: 90
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 90\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #common_voice #generated_from_trainer #bas #robust-speech-event #hf-asr-leaderboard #dataset-mozilla-foundation/common_voice_8_0 #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 90\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-gl-CV8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2151
- Wer: 0.2080
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 15
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.9427 | 4.9 | 500 | 2.8801 | 1.0 |
| 2.1594 | 9.8 | 1000 | 0.4092 | 0.4001 |
| 0.7332 | 14.71 | 1500 | 0.2151 | 0.2080 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
{"language": "gl", "license": "apache-2.0", "tags": ["generated_from_trainer", "hf-asr-leaderboard", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-gl-CV8", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Speech Recognition"}, "dataset": {"name": "Common Voice gl", "type": "common_voice", "args": "gl"}, "metrics": [{"type": "wer", "value": 0.208, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Common Voice 8.0", "type": "mozilla-foundation/common_voice_8_0", "args": "gl"}, "metrics": [{"type": "wer", "value": 22.94, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Dev Data", "type": "speech-recognition-community-v2/dev_data", "args": "gl"}, "metrics": [{"type": "wer", "value": 47.82, "name": "Test WER"}]}, {"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "Robust Speech Event - Test Data", "type": "speech-recognition-community-v2/eval_data", "args": "gl"}, "metrics": [{"type": "wer", "value": 50.8, "name": "Test WER"}]}]}]}
|
emre/wav2vec2-xls-r-300m-gl-CV8
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"hf-asr-leaderboard",
"robust-speech-event",
"gl",
"dataset:common_voice",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"gl"
] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #gl #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-gl-CV8
==========================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.2151
* Wer: 0.2080
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 300
* num\_epochs: 15
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #hf-asr-leaderboard #robust-speech-event #gl #dataset-common_voice #license-apache-2.0 #model-index #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 15\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.1\n* Tokenizers 0.10.3"
] |
automatic-speech-recognition
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-hy-AM-CV8-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9145
- Wer: 0.9598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 170
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.7132 | 83.31 | 500 | 1.9274 | 1.0523 |
| 1.017 | 166.62 | 1000 | 0.9145 | 0.9598 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer", "robust-speech-event"], "datasets": ["common_voice"], "model-index": [{"name": "wav2vec2-xls-r-300m-hy-AM-CV8-v1", "results": []}]}
|
emre/wav2vec2-xls-r-300m-hy-AM-CV8-v1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us
|
wav2vec2-xls-r-300m-hy-AM-CV8-v1
================================
This model is a fine-tuned version of facebook/wav2vec2-xls-r-300m on the common\_voice dataset.
It achieves the following results on the evaluation set:
* Loss: 0.9145
* Wer: 0.9598
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 0.0001
* train\_batch\_size: 16
* eval\_batch\_size: 8
* seed: 42
* gradient\_accumulation\_steps: 2
* total\_train\_batch\_size: 32
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* lr\_scheduler\_warmup\_steps: 300
* num\_epochs: 170
* mixed\_precision\_training: Native AMP
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.10.0+cu111
* Datasets 1.18.3
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 170\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #wav2vec2 #automatic-speech-recognition #generated_from_trainer #robust-speech-event #dataset-common_voice #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 0.0001\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 8\n* seed: 42\n* gradient\\_accumulation\\_steps: 2\n* total\\_train\\_batch\\_size: 32\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* lr\\_scheduler\\_warmup\\_steps: 300\n* num\\_epochs: 170\n* mixed\\_precision\\_training: Native AMP",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.10.0+cu111\n* Datasets 1.18.3\n* Tokenizers 0.10.3"
] |
zero-shot-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased_allnli_tr
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the Turkish NLI dataset (nli_tr).
It achieves the following results on the evaluation set:
- Loss: 0.6144
- Accuracy: 0.7662
## Model description
More information needed
## Intended uses & limitations
More information needed
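
Because the model was fine-tuned on NLI data and is tagged for zero-shot classification, it can be used for Turkish zero-shot labeling through the standard pipeline — a sketch reusing the widget example from this card's metadata:

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="emrecan/bert-base-multilingual-cased-allnli_tr")
result = classifier("Dolar yükselmeye devam ediyor.",
                    candidate_labels=["ekonomi", "siyaset", "spor"])
print(result["labels"][0], result["scores"][0])  # highest-scoring label
```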
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8623 | 0.03 | 1000 | 0.9076 | 0.5917 |
| 0.7528 | 0.07 | 2000 | 0.8587 | 0.6119 |
| 0.7074 | 0.1 | 3000 | 0.7867 | 0.6647 |
| 0.6949 | 0.14 | 4000 | 0.7474 | 0.6772 |
| 0.6681 | 0.17 | 5000 | 0.7661 | 0.6814 |
| 0.6597 | 0.2 | 6000 | 0.7264 | 0.6943 |
| 0.6495 | 0.24 | 7000 | 0.7841 | 0.6781 |
| 0.6323 | 0.27 | 8000 | 0.7256 | 0.6952 |
| 0.6308 | 0.31 | 9000 | 0.7319 | 0.6958 |
| 0.6254 | 0.34 | 10000 | 0.7054 | 0.7004 |
| 0.6233 | 0.37 | 11000 | 0.7069 | 0.7085 |
| 0.6165 | 0.41 | 12000 | 0.6880 | 0.7181 |
| 0.6033 | 0.44 | 13000 | 0.6844 | 0.7197 |
| 0.6014 | 0.48 | 14000 | 0.6753 | 0.7129 |
| 0.5947 | 0.51 | 15000 | 0.7000 | 0.7039 |
| 0.5965 | 0.54 | 16000 | 0.6708 | 0.7263 |
| 0.5979 | 0.58 | 17000 | 0.6562 | 0.7285 |
| 0.5787 | 0.61 | 18000 | 0.6554 | 0.7297 |
| 0.58 | 0.65 | 19000 | 0.6544 | 0.7315 |
| 0.574 | 0.68 | 20000 | 0.6549 | 0.7339 |
| 0.5751 | 0.71 | 21000 | 0.6545 | 0.7289 |
| 0.5659 | 0.75 | 22000 | 0.6467 | 0.7371 |
| 0.5732 | 0.78 | 23000 | 0.6448 | 0.7362 |
| 0.5637 | 0.82 | 24000 | 0.6520 | 0.7355 |
| 0.5648 | 0.85 | 25000 | 0.6412 | 0.7345 |
| 0.5622 | 0.88 | 26000 | 0.6350 | 0.7358 |
| 0.5579 | 0.92 | 27000 | 0.6347 | 0.7393 |
| 0.5518 | 0.95 | 28000 | 0.6417 | 0.7392 |
| 0.5547 | 0.99 | 29000 | 0.6321 | 0.7437 |
| 0.524 | 1.02 | 30000 | 0.6430 | 0.7412 |
| 0.4982 | 1.05 | 31000 | 0.6253 | 0.7458 |
| 0.5002 | 1.09 | 32000 | 0.6316 | 0.7418 |
| 0.4993 | 1.12 | 33000 | 0.6197 | 0.7487 |
| 0.4963 | 1.15 | 34000 | 0.6307 | 0.7462 |
| 0.504 | 1.19 | 35000 | 0.6272 | 0.7480 |
| 0.4922 | 1.22 | 36000 | 0.6410 | 0.7433 |
| 0.5016 | 1.26 | 37000 | 0.6295 | 0.7461 |
| 0.4957 | 1.29 | 38000 | 0.6183 | 0.7506 |
| 0.4883 | 1.32 | 39000 | 0.6261 | 0.7502 |
| 0.4985 | 1.36 | 40000 | 0.6315 | 0.7496 |
| 0.4885 | 1.39 | 41000 | 0.6189 | 0.7529 |
| 0.4909 | 1.43 | 42000 | 0.6189 | 0.7473 |
| 0.4894 | 1.46 | 43000 | 0.6314 | 0.7433 |
| 0.4912 | 1.49 | 44000 | 0.6184 | 0.7446 |
| 0.4851 | 1.53 | 45000 | 0.6258 | 0.7461 |
| 0.4879 | 1.56 | 46000 | 0.6286 | 0.7480 |
| 0.4907 | 1.6 | 47000 | 0.6196 | 0.7512 |
| 0.4884 | 1.63 | 48000 | 0.6157 | 0.7526 |
| 0.4755 | 1.66 | 49000 | 0.6056 | 0.7591 |
| 0.4811 | 1.7 | 50000 | 0.5977 | 0.7582 |
| 0.4787 | 1.73 | 51000 | 0.5915 | 0.7621 |
| 0.4779 | 1.77 | 52000 | 0.6014 | 0.7583 |
| 0.4767 | 1.8 | 53000 | 0.6041 | 0.7623 |
| 0.4737 | 1.83 | 54000 | 0.6093 | 0.7563 |
| 0.4836 | 1.87 | 55000 | 0.6001 | 0.7568 |
| 0.4765 | 1.9 | 56000 | 0.6109 | 0.7601 |
| 0.4776 | 1.94 | 57000 | 0.6046 | 0.7599 |
| 0.4769 | 1.97 | 58000 | 0.5970 | 0.7568 |
| 0.4654 | 2.0 | 59000 | 0.6147 | 0.7614 |
| 0.4144 | 2.04 | 60000 | 0.6439 | 0.7566 |
| 0.4101 | 2.07 | 61000 | 0.6373 | 0.7527 |
| 0.4192 | 2.11 | 62000 | 0.6136 | 0.7575 |
| 0.4128 | 2.14 | 63000 | 0.6283 | 0.7560 |
| 0.4204 | 2.17 | 64000 | 0.6187 | 0.7625 |
| 0.4114 | 2.21 | 65000 | 0.6127 | 0.7621 |
| 0.4097 | 2.24 | 66000 | 0.6188 | 0.7626 |
| 0.4129 | 2.28 | 67000 | 0.6156 | 0.7639 |
| 0.4085 | 2.31 | 68000 | 0.6232 | 0.7616 |
| 0.4074 | 2.34 | 69000 | 0.6240 | 0.7605 |
| 0.409 | 2.38 | 70000 | 0.6153 | 0.7591 |
| 0.4046 | 2.41 | 71000 | 0.6375 | 0.7587 |
| 0.4117 | 2.45 | 72000 | 0.6145 | 0.7629 |
| 0.4002 | 2.48 | 73000 | 0.6279 | 0.7610 |
| 0.4042 | 2.51 | 74000 | 0.6176 | 0.7646 |
| 0.4055 | 2.55 | 75000 | 0.6277 | 0.7643 |
| 0.4021 | 2.58 | 76000 | 0.6196 | 0.7642 |
| 0.4081 | 2.62 | 77000 | 0.6127 | 0.7659 |
| 0.408 | 2.65 | 78000 | 0.6237 | 0.7638 |
| 0.3997 | 2.68 | 79000 | 0.6190 | 0.7636 |
| 0.4093 | 2.72 | 80000 | 0.6152 | 0.7648 |
| 0.4095 | 2.75 | 81000 | 0.6155 | 0.7627 |
| 0.4088 | 2.79 | 82000 | 0.6130 | 0.7641 |
| 0.4063 | 2.82 | 83000 | 0.6072 | 0.7646 |
| 0.3978 | 2.85 | 84000 | 0.6128 | 0.7662 |
| 0.4034 | 2.89 | 85000 | 0.6157 | 0.7627 |
| 0.4044 | 2.92 | 86000 | 0.6127 | 0.7661 |
| 0.403 | 2.96 | 87000 | 0.6126 | 0.7664 |
| 0.4033 | 2.99 | 88000 | 0.6144 | 0.7662 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"language": ["tr"], "license": "mit", "tags": ["zero-shot-classification", "nli", "pytorch"], "datasets": ["nli_tr"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "Dolar y\u00fckselmeye devam ediyor.", "candidate_labels": "ekonomi, siyaset, spor"}, {"text": "Senaryo \u00e7ok sa\u00e7mayd\u0131, be\u011fendim diyemem.", "candidate_labels": "olumlu, olumsuz"}]}
|
emrecan/bert-base-multilingual-cased-allnli_tr
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #bert #text-classification #zero-shot-classification #nli #tr #dataset-nli_tr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
bert-base-multilingual-cased\_allnli\_tr
========================================
This model is a fine-tuned version of bert-base-multilingual-cased on the Turkish NLI dataset (nli\_tr).
It achieves the following results on the evaluation set:
* Loss: 0.6144
* Accuracy: 0.7662
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.10.0+cu102
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #zero-shot-classification #nli #tr #dataset-nli_tr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
zero-shot-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-turkish-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the Turkish NLI dataset (nli_tr).
It achieves the following results on the evaluation set:
- Loss: 0.5771
- Accuracy: 0.7978
## Model description
More information needed
## Intended uses & limitations
More information needed
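
Beyond the zero-shot pipeline, the entailment scores can be computed by hand, which makes the mechanism explicit — a sketch; the hypothesis template is made up, and the index of the entailment class should be checked against the model's `config.id2label`:

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_id = "emrecan/bert-base-turkish-cased-allnli_tr"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "Senaryo çok saçmaydı, beğendim diyemem."
for label in ["olumlu", "olumsuz"]:
    hypothesis = f"Bu yorum {label}."  # hypothetical template
    inputs = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        probs = torch.softmax(model(**inputs).logits, dim=-1)[0]
    print(label, probs.tolist())  # pick the entailment index via config.id2label
```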
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.8559 | 0.03 | 1000 | 0.7577 | 0.6798 |
| 0.6612 | 0.07 | 2000 | 0.7263 | 0.6958 |
| 0.6115 | 0.1 | 3000 | 0.6431 | 0.7364 |
| 0.5916 | 0.14 | 4000 | 0.6347 | 0.7407 |
| 0.5719 | 0.17 | 5000 | 0.6317 | 0.7483 |
| 0.5575 | 0.2 | 6000 | 0.6034 | 0.7544 |
| 0.5521 | 0.24 | 7000 | 0.6148 | 0.7568 |
| 0.5393 | 0.27 | 8000 | 0.5931 | 0.7610 |
| 0.5382 | 0.31 | 9000 | 0.5866 | 0.7665 |
| 0.5306 | 0.34 | 10000 | 0.5881 | 0.7594 |
| 0.5295 | 0.37 | 11000 | 0.6120 | 0.7632 |
| 0.5225 | 0.41 | 12000 | 0.5620 | 0.7759 |
| 0.5112 | 0.44 | 13000 | 0.5641 | 0.7769 |
| 0.5133 | 0.48 | 14000 | 0.5571 | 0.7798 |
| 0.5023 | 0.51 | 15000 | 0.5719 | 0.7722 |
| 0.5017 | 0.54 | 16000 | 0.5482 | 0.7844 |
| 0.5111 | 0.58 | 17000 | 0.5503 | 0.7800 |
| 0.4929 | 0.61 | 18000 | 0.5502 | 0.7836 |
| 0.4923 | 0.65 | 19000 | 0.5424 | 0.7843 |
| 0.4894 | 0.68 | 20000 | 0.5417 | 0.7851 |
| 0.4877 | 0.71 | 21000 | 0.5514 | 0.7841 |
| 0.4818 | 0.75 | 22000 | 0.5494 | 0.7848 |
| 0.4898 | 0.78 | 23000 | 0.5450 | 0.7859 |
| 0.4823 | 0.82 | 24000 | 0.5417 | 0.7878 |
| 0.4806 | 0.85 | 25000 | 0.5354 | 0.7875 |
| 0.4779 | 0.88 | 26000 | 0.5338 | 0.7848 |
| 0.4744 | 0.92 | 27000 | 0.5277 | 0.7934 |
| 0.4678 | 0.95 | 28000 | 0.5507 | 0.7871 |
| 0.4727 | 0.99 | 29000 | 0.5603 | 0.7789 |
| 0.4243 | 1.02 | 30000 | 0.5626 | 0.7894 |
| 0.3955 | 1.05 | 31000 | 0.5324 | 0.7939 |
| 0.4022 | 1.09 | 32000 | 0.5322 | 0.7925 |
| 0.3976 | 1.12 | 33000 | 0.5450 | 0.7920 |
| 0.3913 | 1.15 | 34000 | 0.5464 | 0.7948 |
| 0.406 | 1.19 | 35000 | 0.5406 | 0.7958 |
| 0.3875 | 1.22 | 36000 | 0.5489 | 0.7878 |
| 0.4024 | 1.26 | 37000 | 0.5427 | 0.7925 |
| 0.3988 | 1.29 | 38000 | 0.5335 | 0.7904 |
| 0.393 | 1.32 | 39000 | 0.5415 | 0.7923 |
| 0.3988 | 1.36 | 40000 | 0.5385 | 0.7962 |
| 0.3912 | 1.39 | 41000 | 0.5383 | 0.7950 |
| 0.3949 | 1.43 | 42000 | 0.5415 | 0.7931 |
| 0.3902 | 1.46 | 43000 | 0.5438 | 0.7893 |
| 0.3948 | 1.49 | 44000 | 0.5348 | 0.7906 |
| 0.3921 | 1.53 | 45000 | 0.5361 | 0.7890 |
| 0.3944 | 1.56 | 46000 | 0.5419 | 0.7953 |
| 0.3959 | 1.6 | 47000 | 0.5402 | 0.7967 |
| 0.3926 | 1.63 | 48000 | 0.5429 | 0.7925 |
| 0.3854 | 1.66 | 49000 | 0.5346 | 0.7959 |
| 0.3864 | 1.7 | 50000 | 0.5241 | 0.7979 |
| 0.385 | 1.73 | 51000 | 0.5149 | 0.8002 |
| 0.3871 | 1.77 | 52000 | 0.5325 | 0.8002 |
| 0.3819 | 1.8 | 53000 | 0.5332 | 0.8022 |
| 0.384 | 1.83 | 54000 | 0.5419 | 0.7873 |
| 0.3899 | 1.87 | 55000 | 0.5225 | 0.7974 |
| 0.3894 | 1.9 | 56000 | 0.5358 | 0.7977 |
| 0.3838 | 1.94 | 57000 | 0.5264 | 0.7988 |
| 0.3881 | 1.97 | 58000 | 0.5280 | 0.7956 |
| 0.3756 | 2.0 | 59000 | 0.5601 | 0.7969 |
| 0.3156 | 2.04 | 60000 | 0.5936 | 0.7925 |
| 0.3125 | 2.07 | 61000 | 0.5898 | 0.7938 |
| 0.3179 | 2.11 | 62000 | 0.5591 | 0.7981 |
| 0.315 | 2.14 | 63000 | 0.5853 | 0.7970 |
| 0.3122 | 2.17 | 64000 | 0.5802 | 0.7979 |
| 0.3105 | 2.21 | 65000 | 0.5758 | 0.7979 |
| 0.3076 | 2.24 | 66000 | 0.5685 | 0.7980 |
| 0.3117 | 2.28 | 67000 | 0.5799 | 0.7944 |
| 0.3108 | 2.31 | 68000 | 0.5742 | 0.7988 |
| 0.3047 | 2.34 | 69000 | 0.5907 | 0.7921 |
| 0.3114 | 2.38 | 70000 | 0.5723 | 0.7937 |
| 0.3035 | 2.41 | 71000 | 0.5944 | 0.7955 |
| 0.3129 | 2.45 | 72000 | 0.5838 | 0.7928 |
| 0.3071 | 2.48 | 73000 | 0.5929 | 0.7949 |
| 0.3061 | 2.51 | 74000 | 0.5794 | 0.7967 |
| 0.3068 | 2.55 | 75000 | 0.5892 | 0.7954 |
| 0.3053 | 2.58 | 76000 | 0.5796 | 0.7962 |
| 0.3117 | 2.62 | 77000 | 0.5763 | 0.7981 |
| 0.3062 | 2.65 | 78000 | 0.5852 | 0.7964 |
| 0.3004 | 2.68 | 79000 | 0.5793 | 0.7966 |
| 0.3146 | 2.72 | 80000 | 0.5693 | 0.7985 |
| 0.3146 | 2.75 | 81000 | 0.5788 | 0.7982 |
| 0.3079 | 2.79 | 82000 | 0.5726 | 0.7978 |
| 0.3058 | 2.82 | 83000 | 0.5677 | 0.7988 |
| 0.3055 | 2.85 | 84000 | 0.5701 | 0.7982 |
| 0.3049 | 2.89 | 85000 | 0.5809 | 0.7970 |
| 0.3044 | 2.92 | 86000 | 0.5741 | 0.7986 |
| 0.3057 | 2.96 | 87000 | 0.5743 | 0.7980 |
| 0.3081 | 2.99 | 88000 | 0.5771 | 0.7978 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"language": ["tr"], "license": "mit", "tags": ["zero-shot-classification", "nli", "pytorch"], "datasets": ["nli_tr"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "Dolar y\u00fckselmeye devam ediyor.", "candidate_labels": "ekonomi, siyaset, spor"}, {"text": "Senaryo \u00e7ok sa\u00e7mayd\u0131, be\u011fendim diyemem.", "candidate_labels": "olumlu, olumsuz"}]}
|
emrecan/bert-base-turkish-cased-allnli_tr
| null |
[
"transformers",
"pytorch",
"bert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #bert #text-classification #zero-shot-classification #nli #tr #dataset-nli_tr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us
|
bert-base-turkish-cased\_allnli\_tr
===================================
This model is a fine-tuned version of dbmdz/bert-base-turkish-cased on the None dataset.
It achieves the following results on the evaluation set:
* Loss: 0.5771
* Accuracy: 0.7978
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.10.0+cu102
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #bert #text-classification #zero-shot-classification #nli #tr #dataset-nli_tr #license-mit #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
sentence-similarity
|
sentence-transformers
|
# emrecan/bert-base-turkish-cased-mean-nli-stsb-tr
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on Turkish machine translated versions of [NLI](https://huggingface.co/datasets/nli_tr) and [STS-b](https://huggingface.co/datasets/emrecan/stsb-mt-turkish) datasets, using example [training scripts]( https://github.com/UKPLab/sentence-transformers/tree/master/examples/training) from sentence-transformers GitHub repository.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["Bu örnek bir cümle", "Her cümle vektöre çevriliyor"]
model = SentenceTransformer('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ["Bu örnek bir cümle", "Her cümle vektöre çevriliyor"]
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr')
model = AutoModel.from_pretrained('emrecan/bert-base-turkish-cased-mean-nli-stsb-tr')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
Evaluation results on test and development sets are given below:
| Split | Epoch | cosine_pearson | cosine_spearman | euclidean_pearson | euclidean_spearman | manhattan_pearson | manhattan_spearman | dot_pearson | dot_spearman |
|------------|-------|----------------|-----------------|-------------------|--------------------|-------------------|--------------------|-------------|--------------|
| test | - | 0.834 | 0.830 | 0.820 | 0.819 | 0.819 | 0.818 | 0.799 | 0.789 |
| validation | 1 | 0.850 | 0.848 | 0.831 | 0.835 | 0.83 | 0.83 | 0.80 | 0.806 |
| validation | 2 | 0.857 | 0.857 | 0.844 | 0.848 | 0.844 | 0.848 | 0.813 | 0.810 |
| validation | 3 | 0.860 | 0.859 | 0.846 | 0.851 | 0.846 | 0.850 | 0.825 | 0.822 |
| validation | 4 | 0.859 | 0.860 | 0.846 | 0.851 | 0.846 | 0.851 | 0.825 | 0.823 |
## Training
Training scripts [`training_nli_v2.py`](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/nli/training_nli_v2.py) and [`training_stsbenchmark_continue_training.py`](https://github.com/UKPLab/sentence-transformers/blob/master/examples/training/sts/training_stsbenchmark_continue_training.py) were used to train the model.
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 360 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 4,
"evaluation_steps": 200,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 144,
"weight_decay": 0.01
}
```
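Put together, a sketch of the corresponding `fit()` call; `train_dataloader` and `dev_evaluator` are placeholder names for the objects described above:
```python
from sentence_transformers import losses

# CosineSimilarityLoss over the STS-b style pairs, as listed above.
train_loss = losses.CosineSimilarityLoss(model)
model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    evaluator=dev_evaluator,
    epochs=4,
    evaluation_steps=200,
    warmup_steps=144,
    optimizer_params={"lr": 2e-05},
    weight_decay=0.01,
    max_grad_norm=1,
    scheduler="WarmupLinear",
)
```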
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
{"language": ["tr"], "license": "apache-2.0", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "transformers"], "datasets": ["nli_tr", "emrecan/stsb-mt-turkish"], "pipeline_tag": "sentence-similarity", "widget": {"source_sentence": "Bu \u00e7ok mutlu bir ki\u015fi", "sentences": ["Bu mutlu bir k\u00f6pek", "Bu sevincinden havalara u\u00e7an bir insan", "\u00c7ok kar ya\u011f\u0131yor"]}}
|
emrecan/bert-base-turkish-cased-mean-nli-stsb-tr
| null |
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"tr",
"dataset:nli_tr",
"dataset:emrecan/stsb-mt-turkish",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #tr #dataset-nli_tr #dataset-emrecan/stsb-mt-turkish #license-apache-2.0 #endpoints_compatible #has_space #region-us
|
emrecan/bert-base-turkish-cased-mean-nli-stsb-tr
================================================
This is a sentence-transformers model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. The model was trained on Turkish machine translated versions of NLI and STS-b datasets, using example training scripts from sentence-transformers GitHub repository.
Usage (Sentence-Transformers)
-----------------------------
Using this model becomes easy when you have sentence-transformers installed:
Then you can use the model like this:
Usage (HuggingFace Transformers)
--------------------------------
Without sentence-transformers, you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
Evaluation Results
------------------
Evaluation results on test and development sets are given below:
Training
--------
Training scripts 'training\_nli\_v2.py' and 'training\_stsbenchmark\_continue\_training.py' were used to train the model.
The model was trained with the parameters:
DataLoader:
'torch.utils.data.dataloader.DataLoader' of length 360 with parameters:
Loss:
'sentence\_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss'
Parameters of the fit()-Method:
Full Model Architecture
-----------------------
Citing & Authors
----------------
|
[] |
[
"TAGS\n#sentence-transformers #pytorch #bert #feature-extraction #sentence-similarity #transformers #tr #dataset-nli_tr #dataset-emrecan/stsb-mt-turkish #license-apache-2.0 #endpoints_compatible #has_space #region-us \n"
] |
zero-shot-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# convbert-base-turkish-mc4-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/convbert-base-turkish-mc4-cased](https://huggingface.co/dbmdz/convbert-base-turkish-mc4-cased) on the Turkish NLI dataset [nli_tr](https://huggingface.co/datasets/nli_tr).
It achieves the following results on the evaluation set:
- Loss: 0.5541
- Accuracy: 0.8111
## Model description
More information needed
## Intended uses & limitations
More information needed
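Pending that documentation, here is a minimal zero-shot sketch reusing the second widget example from the card metadata; the call itself is an assumption, not code published by the author:
```python
from transformers import pipeline

# Sentiment-style zero-shot labels from the model card's widget example.
classifier = pipeline(
    "zero-shot-classification",
    model="emrecan/convbert-base-turkish-mc4-cased-allnli_tr",
)
print(classifier(
    "Senaryo çok saçmaydı, beğendim diyemem.",
    candidate_labels=["olumlu", "olumsuz"],
))
```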
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7338 | 0.03 | 1000 | 0.6722 | 0.7236 |
| 0.603 | 0.07 | 2000 | 0.6465 | 0.7399 |
| 0.5605 | 0.1 | 3000 | 0.5801 | 0.7728 |
| 0.55 | 0.14 | 4000 | 0.5994 | 0.7626 |
| 0.529 | 0.17 | 5000 | 0.5720 | 0.7697 |
| 0.5196 | 0.2 | 6000 | 0.5692 | 0.7769 |
| 0.5117 | 0.24 | 7000 | 0.5725 | 0.7785 |
| 0.5044 | 0.27 | 8000 | 0.5532 | 0.7787 |
| 0.5016 | 0.31 | 9000 | 0.5546 | 0.7812 |
| 0.5031 | 0.34 | 10000 | 0.5461 | 0.7870 |
| 0.4949 | 0.37 | 11000 | 0.5725 | 0.7826 |
| 0.4894 | 0.41 | 12000 | 0.5419 | 0.7933 |
| 0.4796 | 0.44 | 13000 | 0.5278 | 0.7914 |
| 0.4795 | 0.48 | 14000 | 0.5193 | 0.7953 |
| 0.4713 | 0.51 | 15000 | 0.5534 | 0.7771 |
| 0.4738 | 0.54 | 16000 | 0.5098 | 0.8039 |
| 0.481 | 0.58 | 17000 | 0.5244 | 0.7958 |
| 0.4634 | 0.61 | 18000 | 0.5215 | 0.7972 |
| 0.465 | 0.65 | 19000 | 0.5129 | 0.7985 |
| 0.4624 | 0.68 | 20000 | 0.5062 | 0.8047 |
| 0.4597 | 0.71 | 21000 | 0.5114 | 0.8029 |
| 0.4571 | 0.75 | 22000 | 0.5070 | 0.8073 |
| 0.4602 | 0.78 | 23000 | 0.5115 | 0.7993 |
| 0.4552 | 0.82 | 24000 | 0.5085 | 0.8052 |
| 0.4538 | 0.85 | 25000 | 0.5118 | 0.7974 |
| 0.4517 | 0.88 | 26000 | 0.5036 | 0.8044 |
| 0.4517 | 0.92 | 27000 | 0.4930 | 0.8062 |
| 0.4413 | 0.95 | 28000 | 0.5307 | 0.7964 |
| 0.4483 | 0.99 | 29000 | 0.5195 | 0.7938 |
| 0.4036 | 1.02 | 30000 | 0.5238 | 0.8029 |
| 0.3724 | 1.05 | 31000 | 0.5125 | 0.8082 |
| 0.3777 | 1.09 | 32000 | 0.5099 | 0.8075 |
| 0.3753 | 1.12 | 33000 | 0.5172 | 0.8053 |
| 0.367 | 1.15 | 34000 | 0.5188 | 0.8053 |
| 0.3819 | 1.19 | 35000 | 0.5218 | 0.8046 |
| 0.363 | 1.22 | 36000 | 0.5202 | 0.7993 |
| 0.3794 | 1.26 | 37000 | 0.5240 | 0.8048 |
| 0.3749 | 1.29 | 38000 | 0.5026 | 0.8054 |
| 0.367 | 1.32 | 39000 | 0.5198 | 0.8075 |
| 0.3759 | 1.36 | 40000 | 0.5298 | 0.7993 |
| 0.3701 | 1.39 | 41000 | 0.5072 | 0.8091 |
| 0.3742 | 1.43 | 42000 | 0.5071 | 0.8098 |
| 0.3706 | 1.46 | 43000 | 0.5317 | 0.8037 |
| 0.3716 | 1.49 | 44000 | 0.5034 | 0.8052 |
| 0.3717 | 1.53 | 45000 | 0.5258 | 0.8012 |
| 0.3714 | 1.56 | 46000 | 0.5195 | 0.8050 |
| 0.3781 | 1.6 | 47000 | 0.5004 | 0.8104 |
| 0.3725 | 1.63 | 48000 | 0.5124 | 0.8113 |
| 0.3624 | 1.66 | 49000 | 0.5040 | 0.8094 |
| 0.3657 | 1.7 | 50000 | 0.4979 | 0.8111 |
| 0.3669 | 1.73 | 51000 | 0.4968 | 0.8100 |
| 0.3636 | 1.77 | 52000 | 0.5075 | 0.8079 |
| 0.36 | 1.8 | 53000 | 0.4985 | 0.8110 |
| 0.3624 | 1.83 | 54000 | 0.5125 | 0.8070 |
| 0.366 | 1.87 | 55000 | 0.4918 | 0.8117 |
| 0.3655 | 1.9 | 56000 | 0.5051 | 0.8109 |
| 0.3609 | 1.94 | 57000 | 0.5083 | 0.8105 |
| 0.3672 | 1.97 | 58000 | 0.5129 | 0.8085 |
| 0.3545 | 2.0 | 59000 | 0.5467 | 0.8109 |
| 0.2938 | 2.04 | 60000 | 0.5635 | 0.8049 |
| 0.29 | 2.07 | 61000 | 0.5781 | 0.8041 |
| 0.2992 | 2.11 | 62000 | 0.5470 | 0.8077 |
| 0.2957 | 2.14 | 63000 | 0.5765 | 0.8073 |
| 0.292 | 2.17 | 64000 | 0.5472 | 0.8106 |
| 0.2893 | 2.21 | 65000 | 0.5590 | 0.8085 |
| 0.2883 | 2.24 | 66000 | 0.5535 | 0.8064 |
| 0.2923 | 2.28 | 67000 | 0.5508 | 0.8095 |
| 0.2868 | 2.31 | 68000 | 0.5679 | 0.8098 |
| 0.2892 | 2.34 | 69000 | 0.5660 | 0.8057 |
| 0.292 | 2.38 | 70000 | 0.5494 | 0.8088 |
| 0.286 | 2.41 | 71000 | 0.5653 | 0.8085 |
| 0.2939 | 2.45 | 72000 | 0.5673 | 0.8070 |
| 0.286 | 2.48 | 73000 | 0.5600 | 0.8092 |
| 0.2844 | 2.51 | 74000 | 0.5508 | 0.8095 |
| 0.2913 | 2.55 | 75000 | 0.5645 | 0.8088 |
| 0.2859 | 2.58 | 76000 | 0.5677 | 0.8095 |
| 0.2892 | 2.62 | 77000 | 0.5598 | 0.8113 |
| 0.2898 | 2.65 | 78000 | 0.5618 | 0.8096 |
| 0.2814 | 2.68 | 79000 | 0.5664 | 0.8103 |
| 0.2917 | 2.72 | 80000 | 0.5484 | 0.8122 |
| 0.2907 | 2.75 | 81000 | 0.5522 | 0.8116 |
| 0.2896 | 2.79 | 82000 | 0.5540 | 0.8093 |
| 0.2907 | 2.82 | 83000 | 0.5469 | 0.8104 |
| 0.2882 | 2.85 | 84000 | 0.5471 | 0.8122 |
| 0.2878 | 2.89 | 85000 | 0.5532 | 0.8108 |
| 0.2858 | 2.92 | 86000 | 0.5511 | 0.8115 |
| 0.288 | 2.96 | 87000 | 0.5491 | 0.8111 |
| 0.2834 | 2.99 | 88000 | 0.5541 | 0.8111 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"language": ["tr"], "license": "apache-2.0", "tags": ["zero-shot-classification", "nli", "pytorch"], "datasets": ["nli_tr"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "Dolar y\u00fckselmeye devam ediyor.", "candidate_labels": "ekonomi, siyaset, spor"}, {"text": "Senaryo \u00e7ok sa\u00e7mayd\u0131, be\u011fendim diyemem.", "candidate_labels": "olumlu, olumsuz"}]}
|
emrecan/convbert-base-turkish-mc4-cased-allnli_tr
| null |
[
"transformers",
"pytorch",
"convbert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #convbert #text-classification #zero-shot-classification #nli #tr #dataset-nli_tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
convbert-base-turkish-mc4-cased\_allnli\_tr
===========================================
This model is a fine-tuned version of dbmdz/convbert-base-turkish-mc4-cased on the Turkish NLI dataset nli_tr.
It achieves the following results on the evaluation set:
* Loss: 0.5541
* Accuracy: 0.8111
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.10.0+cu102
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #convbert #text-classification #zero-shot-classification #nli #tr #dataset-nli_tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
zero-shot-classification
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-turkish-cased_allnli_tr
This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on the Turkish NLI dataset [nli_tr](https://huggingface.co/datasets/nli_tr).
It achieves the following results on the evaluation set:
- Loss: 0.6481
- Accuracy: 0.7381
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
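For reference, a sketch of how these hyperparameters map onto a `TrainingArguments` object; the `output_dir` and the surrounding `Trainer` wiring are assumptions, and the Adam betas/epsilon listed above are the library defaults:
```python
from transformers import TrainingArguments

# Reconstruction of the configuration listed above (dataset/model wiring omitted).
training_args = TrainingArguments(
    output_dir="distilbert-base-turkish-cased_allnli_tr",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=3,
)
```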
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.94 | 0.03 | 1000 | 0.9074 | 0.5813 |
| 0.8102 | 0.07 | 2000 | 0.8802 | 0.5949 |
| 0.7737 | 0.1 | 3000 | 0.8491 | 0.6155 |
| 0.7576 | 0.14 | 4000 | 0.8283 | 0.6261 |
| 0.7286 | 0.17 | 5000 | 0.8150 | 0.6362 |
| 0.7162 | 0.2 | 6000 | 0.7998 | 0.6400 |
| 0.7092 | 0.24 | 7000 | 0.7830 | 0.6565 |
| 0.6962 | 0.27 | 8000 | 0.7653 | 0.6629 |
| 0.6876 | 0.31 | 9000 | 0.7630 | 0.6687 |
| 0.6778 | 0.34 | 10000 | 0.7475 | 0.6739 |
| 0.6737 | 0.37 | 11000 | 0.7495 | 0.6781 |
| 0.6712 | 0.41 | 12000 | 0.7350 | 0.6826 |
| 0.6559 | 0.44 | 13000 | 0.7274 | 0.6897 |
| 0.6493 | 0.48 | 14000 | 0.7248 | 0.6902 |
| 0.6483 | 0.51 | 15000 | 0.7263 | 0.6858 |
| 0.6445 | 0.54 | 16000 | 0.7070 | 0.6978 |
| 0.6467 | 0.58 | 17000 | 0.7083 | 0.6981 |
| 0.6332 | 0.61 | 18000 | 0.6996 | 0.7004 |
| 0.6288 | 0.65 | 19000 | 0.6979 | 0.6978 |
| 0.6308 | 0.68 | 20000 | 0.6912 | 0.7040 |
| 0.622 | 0.71 | 21000 | 0.6904 | 0.7092 |
| 0.615 | 0.75 | 22000 | 0.6872 | 0.7094 |
| 0.6186 | 0.78 | 23000 | 0.6877 | 0.7075 |
| 0.6183 | 0.82 | 24000 | 0.6818 | 0.7111 |
| 0.6115 | 0.85 | 25000 | 0.6856 | 0.7122 |
| 0.608 | 0.88 | 26000 | 0.6697 | 0.7179 |
| 0.6071 | 0.92 | 27000 | 0.6727 | 0.7181 |
| 0.601 | 0.95 | 28000 | 0.6798 | 0.7118 |
| 0.6018 | 0.99 | 29000 | 0.6854 | 0.7071 |
| 0.5762 | 1.02 | 30000 | 0.6697 | 0.7214 |
| 0.5507 | 1.05 | 31000 | 0.6710 | 0.7185 |
| 0.5575 | 1.09 | 32000 | 0.6709 | 0.7226 |
| 0.5493 | 1.12 | 33000 | 0.6659 | 0.7191 |
| 0.5464 | 1.15 | 34000 | 0.6709 | 0.7232 |
| 0.5595 | 1.19 | 35000 | 0.6642 | 0.7220 |
| 0.5446 | 1.22 | 36000 | 0.6709 | 0.7202 |
| 0.5524 | 1.26 | 37000 | 0.6751 | 0.7148 |
| 0.5473 | 1.29 | 38000 | 0.6642 | 0.7209 |
| 0.5477 | 1.32 | 39000 | 0.6662 | 0.7223 |
| 0.5522 | 1.36 | 40000 | 0.6586 | 0.7227 |
| 0.5406 | 1.39 | 41000 | 0.6602 | 0.7258 |
| 0.54 | 1.43 | 42000 | 0.6564 | 0.7273 |
| 0.5458 | 1.46 | 43000 | 0.6780 | 0.7213 |
| 0.5448 | 1.49 | 44000 | 0.6561 | 0.7235 |
| 0.5418 | 1.53 | 45000 | 0.6600 | 0.7253 |
| 0.5408 | 1.56 | 46000 | 0.6616 | 0.7274 |
| 0.5451 | 1.6 | 47000 | 0.6557 | 0.7283 |
| 0.5385 | 1.63 | 48000 | 0.6583 | 0.7295 |
| 0.5261 | 1.66 | 49000 | 0.6468 | 0.7325 |
| 0.5364 | 1.7 | 50000 | 0.6447 | 0.7329 |
| 0.5294 | 1.73 | 51000 | 0.6429 | 0.7320 |
| 0.5332 | 1.77 | 52000 | 0.6508 | 0.7272 |
| 0.5274 | 1.8 | 53000 | 0.6492 | 0.7326 |
| 0.5286 | 1.83 | 54000 | 0.6470 | 0.7318 |
| 0.5359 | 1.87 | 55000 | 0.6393 | 0.7354 |
| 0.5366 | 1.9 | 56000 | 0.6445 | 0.7367 |
| 0.5296 | 1.94 | 57000 | 0.6413 | 0.7313 |
| 0.5346 | 1.97 | 58000 | 0.6393 | 0.7315 |
| 0.5264 | 2.0 | 59000 | 0.6448 | 0.7357 |
| 0.4857 | 2.04 | 60000 | 0.6640 | 0.7335 |
| 0.4888 | 2.07 | 61000 | 0.6612 | 0.7318 |
| 0.4964 | 2.11 | 62000 | 0.6516 | 0.7337 |
| 0.493 | 2.14 | 63000 | 0.6503 | 0.7356 |
| 0.4961 | 2.17 | 64000 | 0.6519 | 0.7348 |
| 0.4847 | 2.21 | 65000 | 0.6517 | 0.7327 |
| 0.483 | 2.24 | 66000 | 0.6555 | 0.7310 |
| 0.4857 | 2.28 | 67000 | 0.6525 | 0.7312 |
| 0.484 | 2.31 | 68000 | 0.6444 | 0.7342 |
| 0.4792 | 2.34 | 69000 | 0.6508 | 0.7330 |
| 0.488 | 2.38 | 70000 | 0.6513 | 0.7344 |
| 0.472 | 2.41 | 71000 | 0.6547 | 0.7346 |
| 0.4872 | 2.45 | 72000 | 0.6500 | 0.7342 |
| 0.4782 | 2.48 | 73000 | 0.6585 | 0.7358 |
| 0.481 | 2.51 | 74000 | 0.6477 | 0.7356 |
| 0.4822 | 2.55 | 75000 | 0.6587 | 0.7346 |
| 0.4728 | 2.58 | 76000 | 0.6572 | 0.7340 |
| 0.4841 | 2.62 | 77000 | 0.6443 | 0.7374 |
| 0.4885 | 2.65 | 78000 | 0.6494 | 0.7362 |
| 0.4752 | 2.68 | 79000 | 0.6509 | 0.7382 |
| 0.4883 | 2.72 | 80000 | 0.6457 | 0.7371 |
| 0.4888 | 2.75 | 81000 | 0.6497 | 0.7364 |
| 0.4844 | 2.79 | 82000 | 0.6481 | 0.7376 |
| 0.4833 | 2.82 | 83000 | 0.6451 | 0.7389 |
| 0.48 | 2.85 | 84000 | 0.6423 | 0.7373 |
| 0.4832 | 2.89 | 85000 | 0.6477 | 0.7357 |
| 0.4805 | 2.92 | 86000 | 0.6464 | 0.7379 |
| 0.4775 | 2.96 | 87000 | 0.6477 | 0.7380 |
| 0.4843 | 2.99 | 88000 | 0.6481 | 0.7381 |
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
|
{"language": ["tr"], "license": "apache-2.0", "tags": ["zero-shot-classification", "nli", "pytorch"], "datasets": ["nli_tr"], "metrics": ["accuracy"], "pipeline_tag": "zero-shot-classification", "widget": [{"text": "Dolar y\u00fckselmeye devam ediyor.", "candidate_labels": "ekonomi, siyaset, spor"}, {"text": "Senaryo \u00e7ok sa\u00e7mayd\u0131, be\u011fendim diyemem.", "candidate_labels": "olumlu, olumsuz"}]}
|
emrecan/distilbert-base-turkish-cased-allnli_tr
| null |
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"zero-shot-classification",
"nli",
"tr",
"dataset:nli_tr",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"tr"
] |
TAGS
#transformers #pytorch #distilbert #text-classification #zero-shot-classification #nli #tr #dataset-nli_tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us
|
distilbert-base-turkish-cased\_allnli\_tr
=========================================
This model is a fine-tuned version of dbmdz/distilbert-base-turkish-cased on the Turkish NLI dataset nli_tr.
It achieves the following results on the evaluation set:
* Loss: 0.6481
* Accuracy: 0.7381
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 32
* eval\_batch\_size: 32
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.12.3
* Pytorch 1.10.0+cu102
* Datasets 1.15.1
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #distilbert #text-classification #zero-shot-classification #nli #tr #dataset-nli_tr #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 32\n* eval\\_batch\\_size: 32\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.12.3\n* Pytorch 1.10.0+cu102\n* Datasets 1.15.1\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1453
## Model description
More information needed
## Intended uses & limitations
More information needed
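Although the card leaves usage unspecified, the checkpoint can be exercised with the standard question-answering pipeline; a minimal sketch with an illustrative question/context pair:
```python
from transformers import pipeline

qa = pipeline("question-answering", model="en/distilbert-base-uncased-finetuned-squad")

# Illustrative example; any SQuAD-style question/context pair works.
result = qa(
    question="What does SQuAD stand for?",
    context="The Stanford Question Answering Dataset (SQuAD) is a reading comprehension dataset.",
)
print(result["answer"], result["score"])
```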
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2065 | 1.0 | 5577 | 1.1289 |
| 0.9226 | 2.0 | 11154 | 1.1019 |
| 0.7411 | 3.0 | 16731 | 1.1453 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
en/distilbert-base-uncased-finetuned-squad
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.1453
Model description
-----------------
More information needed
Intended uses & limitations
---------------------------
More information needed
Training and evaluation data
----------------------------
More information needed
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.11.3
* Pytorch 1.9.0+cu111
* Datasets 1.14.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.11.3\n* Pytorch 1.9.0+cu111\n* Datasets 1.14.0\n* Tokenizers 0.10.3"
] |
feature-extraction
|
transformers
|
# Model description
The model was created for selective question answering in Polish, i.e. it is used to find passages containing the answers to a given question.
It is used to encode the contexts (aka passages) in the DPR bi-encoder architecture. The architecture requires two separate models.
The question part has to be encoded with the corresponding [question encoder](https://huggingface.co/enelpol/czywiesz-question).
The model was created by fine-tuning [Herbert base cased](https://huggingface.co/allegro/herbert-base-cased) on the "Czywiesz" dataset.
[Czywiesz](https://clarin-pl.eu/dspace/handle/11321/39) dataset contains questions and Wikipedia articles extracted from the Polish Wikipedia.
# Usage
The easiest way to use the model is with the [Haystack framework](https://haystack.deepset.ai/overview/intro).
```python
from haystack.document_stores import FAISSDocumentStore
from haystack.retriever import DensePassageRetriever

# In-memory FAISS index with exact ("Flat") similarity search
document_store = FAISSDocumentStore(faiss_index_factory_str="Flat")
retriever = DensePassageRetriever(
    document_store=document_store,
    query_embedding_model="enelpol/czywiesz-question",
    passage_embedding_model="enelpol/czywiesz-context"
)

# `documents` is your own iterable of Haystack documents to index
for document in documents:
    document_store.write_documents([document])
document_store.update_embeddings(retriever)
document_store.save("contexts.faiss")
```
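Once the index is built, the same retriever answers queries; a sketch under the same Haystack version as above (the query is illustrative):
```python
# Fetch the passages most likely to contain the answer.
results = retriever.retrieve(query="Kto napisał Pana Tadeusza?", top_k=5)
for doc in results:
    print(doc.content)  # older Haystack releases expose this as doc.text
```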
|
{"language": "pl", "datasets": ["enelpol/czywiesz"]}
|
enelpol/czywiesz-context
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"pl",
"dataset:enelpol/czywiesz",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pl"
] |
TAGS
#transformers #pytorch #bert #feature-extraction #pl #dataset-enelpol/czywiesz #endpoints_compatible #region-us
|
# Model description
The model was created for selective question answering in Polish, i.e. it is used to find passages containing the answers to a given question.
It is used to encode the contexts (aka passages) in the DPR bi-encoder architecture. The architecture requires two separate models.
The question part has to be encoded with the corresponding question encoder.
The model was created by fine-tuning Herbert base cased on the "Czywiesz" dataset.
Czywiesz dataset contains questions and Wikipedia articles extracted from the Polish Wikipedia.
# Usage
The easiest way to use the model is with the Haystack framework.
|
[
"# Model description\n\nThe model was created for selective question answering in Polish. I.e. it is used to find passages containing the answers to the given question.\n\nIt is used to encode the contexts (aka passages) in the DPR bi-encoder architecture. The architecture requires two separate models.\nThe question part has to be encoded with the corresponding question encoder.\n\nThe model was created by fine-tuning Herbert base cased on \"Czywiesz\" dataset. \nCzywiesz dataset contains questions and Wikipedia articles extracted from the Polish Wikipedia.",
"# Usage\n\nIt is the easiest to use the model with the Haystack framework."
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #pl #dataset-enelpol/czywiesz #endpoints_compatible #region-us \n",
"# Model description\n\nThe model was created for selective question answering in Polish. I.e. it is used to find passages containing the answers to the given question.\n\nIt is used to encode the contexts (aka passages) in the DPR bi-encoder architecture. The architecture requires two separate models.\nThe question part has to be encoded with the corresponding question encoder.\n\nThe model was created by fine-tuning Herbert base cased on \"Czywiesz\" dataset. \nCzywiesz dataset contains questions and Wikipedia articles extracted from the Polish Wikipedia.",
"# Usage\n\nIt is the easiest to use the model with the Haystack framework."
] |
feature-extraction
|
transformers
|
## Model description
This is the question encoder for the Polish DPR question answering model. The full model consists of two encoders.
Please read [context encoder documentation](https://huggingface.co/enelpol/czywiesz-context) to get the details of the model.
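Outside of Haystack, the encoder can also be called directly through Transformers; a minimal sketch, assuming the standard DPR convention of [CLS]-token pooling (the question is illustrative):
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("enelpol/czywiesz-question")
model = AutoModel.from_pretrained("enelpol/czywiesz-question")

inputs = tokenizer("Kto napisał Pana Tadeusza?", return_tensors="pt")
with torch.no_grad():
    # DPR-style encoders are usually compared via their [CLS] vectors.
    question_vector = model(**inputs).last_hidden_state[:, 0]
print(question_vector.shape)  # (1, 768) for this BERT-base-sized encoder
```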
|
{"language": "pl", "datasets": ["enelpol/czywiesz"], "task_categories": ["question_answering"], "task_ids": ["open-domain-qa"], "multilinguality": ["monolingual"], "size_categories": ["1k<n<10K"]}
|
enelpol/czywiesz-question
| null |
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"pl",
"dataset:enelpol/czywiesz",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"pl"
] |
TAGS
#transformers #pytorch #bert #feature-extraction #pl #dataset-enelpol/czywiesz #endpoints_compatible #region-us
|
## Model description
This is the question encoder for the Polish DPR question answering model. The full model consists of two encoders.
Please read context encoder documentation to get the details of the model.
|
[
"## Model description\n\nThis is the question encoder for the Polish DPR question answering model. The full model consists of two encoders.\nPlease read context encoder documentation to get the details of the model."
] |
[
"TAGS\n#transformers #pytorch #bert #feature-extraction #pl #dataset-enelpol/czywiesz #endpoints_compatible #region-us \n",
"## Model description\n\nThis is the question encoder for the Polish DPR question answering model. The full model consists of two encoders.\nPlease read context encoder documentation to get the details of the model."
] |
text2text-generation
|
transformers
|
Trained with prefix `ocr: `.
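A minimal sketch of calling the checkpoint with that prefix; the decoding settings are assumptions and the input string is a placeholder:
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("enelpol/poleval2021-task3")
model = T5ForConditionalGeneration.from_pretrained("enelpol/poleval2021-task3")

# Inputs must carry the "ocr: " prefix, mirroring the training setup.
inputs = tokenizer("ocr: <noisy OCR text here>", return_tensors="pt")
outputs = model.generate(**inputs, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```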
|
{}
|
enelpol/poleval2021-task3
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
Trained with prefix 'ocr: '.
|
[] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text-generation
|
transformers
|
This is a model fine-tuned on the Bhagvad Gita that generates text based on prompts.
Example of usage:
```
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("epsil/bhagvad_gita")
model = AutoModelForCausalLM.from_pretrained("epsil/bhagvad_gita")
```
Input
```
from transformers import pipeline
# Use a distinct name so the `pipeline` factory is not shadowed.
generator = pipeline('text-generation', model=model, tokenizer=tokenizer)
result = generator('Krishna show me the right path')[0]['generated_text']
print(result)
```
Output
```
Krishna show me the right path, and I also to remember the lessons, and to remember them right.
Sama! in His Day, and by Thy own Eternal Grace.
A man like that who shall come to us
```
> Created by [Saurabh Mishra](https://www.linkedin.com/in/saurabh-mishra-12b5a1216/)
> Made with <span style="color: #e25555;">♥</span> in India
|
{}
|
epsil/bhagvad_gita
| null |
[
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
This is a model fine-tuned on the Bhagvad Gita that generates text based on prompts.
Example of usage:
Input
Output
> Created by Saurabh Mishra
> Made with <span style="color: #e25555;">♥</span> in India
|
[] |
[
"TAGS\n#transformers #pytorch #safetensors #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n"
] |
text2text-generation
|
transformers
|
# Persian-t5-formality-transfer
This is a formality style transfer model for the Persian language to convert colloquial text into a formal one. It is based on [the monolingual T5 model for Persian](https://huggingface.co/Ahmad/parsT5-base) and the [Persian T5 paraphraser](https://huggingface.co/erfan226/persian-t5-paraphraser).
Note: This model is still in development and therefore its outputs might not be very good. However, you can experiment with different values for the decoder to get better results. For more info check this [link](https://huggingface.co/blog/how-to-generate).
## Usage
```python
# pip install transformers
from transformers import T5ForConditionalGeneration, AutoTokenizer, pipeline
import torch
model_path = 'erfan226/persian-t5-formality-transfer'
model = T5ForConditionalGeneration.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
pipe = pipeline(task='text2text-generation', model=model, tokenizer=tokenizer)
def paraphrase(text):
for j in range(3):
out = pipe(text, encoder_no_repeat_ngram_size=4, do_sample=True, num_beams=5, max_length=128)[0]['generated_text']
print("Paraphrase:", out)
text = "من با دوستام میرم بازی"
print("Original:", text)
paraphrase(text)
# Original: من با دوستام میرم بازی
# Paraphrase: دوست دارم با دوستانم بازی کنم.
# Paraphrase: من با دوستانم میرم...
# Paraphrase: من با دوستام بازی می کنم.
```
## Training data
TBD
|
{"language": "fa", "tags": ["Style transfer", "Formality style transfer"], "widget": [{"text": "\u0645\u0646 \u0628\u0627 \u062f\u0648\u0633\u062a\u0627\u0645 \u0645\u06cc\u0631\u0645 \u0628\u0627\u0632\u06cc."}, {"text": "\u0645\u0646 \u0628\u0647 \u062e\u0648\u0646\u0647 \u062f\u0648\u0633\u062a\u0645 \u0631\u0641\u062a\u0645."}]}
|
erfan226/persian-t5-formality-transfer
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"Style transfer",
"Formality style transfer",
"fa",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fa"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #Style transfer #Formality style transfer #fa #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# Persian-t5-formality-transfer
This is a formality style transfer model for the Persian language to convert colloquial text into a formal one. It is based on the monolingual T5 model for Persian and the Persian T5 paraphraser.
Note: This model is still in development and therefore its outputs might not be very good. However, you can experiment with different values for the decoder to get better results. For more info check this link.
## Usage
## Training data
TBD
|
[
"# Persian-t5-formality-transfer\n\nThis is a formality style transfer model for the Persian language to convert colloquial text into a formal one. It is based on the monolingual T5 model for Persian. and Persian T5 paraphraser\n\nNote: This model is still in development and therefore its outputs might not be very good. However, you can experiment with different values for the decoder to get better results. For more info check this link.",
"## Usage",
"## Training data\nTBD"
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #Style transfer #Formality style transfer #fa #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# Persian-t5-formality-transfer\n\nThis is a formality style transfer model for the Persian language to convert colloquial text into a formal one. It is based on the monolingual T5 model for Persian. and Persian T5 paraphraser\n\nNote: This model is still in development and therefore its outputs might not be very good. However, you can experiment with different values for the decoder to get better results. For more info check this link.",
"## Usage",
"## Training data\nTBD"
] |
text2text-generation
|
transformers
|
# Persian-t5-paraphraser
This is a paraphrasing model for the Persian language. It is based on [the monolingual T5 model for Persian](https://huggingface.co/Ahmad/parsT5-base).
## Usage
```python
# pip install transformers
from transformers import T5ForConditionalGeneration, AutoTokenizer, pipeline
import torch
model_path = 'erfan226/persian-t5-paraphraser'
model = T5ForConditionalGeneration.from_pretrained(model_path)
tokenizer = AutoTokenizer.from_pretrained(model_path)
pipe = pipeline(task='text2text-generation', model=model, tokenizer=tokenizer)
def paraphrase(text):
for j in range(5):
out = pipe(text, encoder_no_repeat_ngram_size=5, do_sample=True, num_beams=5, max_length=128)[0]['generated_text']
print("Paraphrase:", out)
text = "این یک مقالهٔ خرد آلمان است. میتوانید با گسترش آن به ویکیپدیا کمک کنید."
print("Original:", text)
paraphrase(text)
# Original: این یک مقالهٔ خرد آلمان است. میتوانید با گسترش آن به ویکیپدیا کمک کنید.
# Paraphrase: این یک مقالهٔ کوچک است.
# Paraphrase: این یک مقالهٔ کوچک است.
# Paraphrase: شما می توانید با گسترش این مقاله، به کسب و کار خود کمک کنید.
# Paraphrase: می توانید با گسترش این مقالهٔ خرد آلمان کمک کنید.
# Paraphrase: شما می توانید با گسترش این مقالهٔ خرد، به گسترش آن کمک کنید.
```
## Training data
This model was trained on the Persian subset of the [Tapaco dataset](https://huggingface.co/datasets/tapaco). It should be noted that this model was trained on a very small dataset and therefore the performance might not be as expected, for now.
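For reference, a sketch of loading that subset with the Hugging Face Datasets library; the `"fa"` configuration name is an assumption about how TaPaCo exposes its Persian split:
```python
from datasets import load_dataset

# "fa" is assumed to be the Persian configuration of TaPaCo.
tapaco_fa = load_dataset("tapaco", "fa", split="train")
print(tapaco_fa[0])
```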
|
{"language": "fa", "tags": ["paraphrasing"], "datasets": ["tapaco"], "widget": [{"text": "\u0627\u06cc\u0646 \u06cc\u06a9 \u0645\u0642\u0627\u0644\u0647\u0654 \u062e\u0631\u062f \u0622\u0644\u0645\u0627\u0646 \u0627\u0633\u062a. \u0645\u06cc\u200c\u062a\u0648\u0627\u0646\u06cc\u062f \u0628\u0627 \u06af\u0633\u062a\u0631\u0634 \u0622\u0646 \u0628\u0647 \u0648\u06cc\u06a9\u06cc\u200c\u067e\u062f\u06cc\u0627 \u06a9\u0645\u06a9 \u06a9\u0646\u06cc\u062f."}, {"text": "\u0628\u0631\u0627\u06cc \u062e\u0631\u06cc\u062f \u06cc\u06a9 \u06a9\u062a\u0627\u0628 \u0628\u0627\u06cc\u062f \u0627\u0632 \u0641\u0631\u0648\u0634\u06af\u0627\u0647 \u0627\u06cc\u0646\u062a\u0631\u0646\u062a\u06cc \u0627\u0633\u062a\u0641\u0627\u062f\u0647 \u06a9\u0646\u06cc\u062f."}]}
|
erfan226/persian-t5-paraphraser
| null |
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"paraphrasing",
"fa",
"dataset:tapaco",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"fa"
] |
TAGS
#transformers #pytorch #t5 #text2text-generation #paraphrasing #fa #dataset-tapaco #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Persian-t5-paraphraser
This is a paraphrasing model for the Persian language. It is based on the monolingual T5 model for Persian.
## Usage
## Training data
This model was trained on the Persian subset of the Tapaco dataset. It should be noted that this model was trained on a very small dataset and therefore the performance might not be as expected, for now.
|
[
"# Persian-t5-paraphraser\n\nThis is a paraphrasing model for the Persian language. It is based on the monolingual T5 model for Persian.",
"## Usage",
"## Training data\nThis model was trained on the Persian subset of the Tapaco dataset. It should be noted that this model was trained on a very small dataset and therefore the performance might not be as expected, for now."
] |
[
"TAGS\n#transformers #pytorch #t5 #text2text-generation #paraphrasing #fa #dataset-tapaco #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Persian-t5-paraphraser\n\nThis is a paraphrasing model for the Persian language. It is based on the monolingual T5 model for Persian.",
"## Usage",
"## Training data\nThis model was trained on the Persian subset of the Tapaco dataset. It should be noted that this model was trained on a very small dataset and therefore the performance might not be as expected, for now."
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 4.0178
## Model description
Base model weights were frozen, leaving only the last layer (the QA outputs head) to be fine-tuned.
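A minimal sketch of this freezing setup; the attribute names match the BERT question-answering class, but the training loop itself is not part of the card:
```python
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

# Freeze the entire BERT encoder; only the `qa_outputs` head stays trainable.
for param in model.bert.parameters():
    param.requires_grad = False
```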
## Training and evaluation data
Achieved EM: 8.013245033112582, F1: 15.9706088498649
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.3602 | 1.0 | 5533 | 4.3460 |
| 4.0995 | 2.0 | 11066 | 4.0787 |
| 4.0302 | 3.0 | 16599 | 4.0178 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-finetuned-squad", "results": []}]}
|
ericRosello/bert-base-uncased-finetuned-squad-frozen-v1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-squad
=================================
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 4.0178
Model description
-----------------
Base model weights were frozen, leaving only the last layer (the QA outputs head) to be fine-tuned.
Training and evaluation data
----------------------------
Achieved EM: 8.013245033112582, F1: 15.9706088498649
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-squad
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4571
## Model description
Most base model weights were frozen, leaving only the last layer (the QA outputs head) and the last 3 encoder layers to be fine-tuned.
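A minimal sketch of this partial freezing; attribute names match the BERT question-answering class, and the training loop is not part of the card:
```python
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("bert-base-uncased")

# Freeze everything, then re-enable the last 3 encoder layers;
# the `qa_outputs` head sits outside `model.bert` and stays trainable.
for param in model.bert.parameters():
    param.requires_grad = False
for layer in model.bert.encoder.layer[-3:]:
    for param in layer.parameters():
        param.requires_grad = True
```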
## Training and evaluation data
Achieved EM: 76.77388836329234, F1: 85.41893520501723
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 1.2944 | 1.0 | 44262 | 1.3432 |
| 1.0152 | 2.0 | 88524 | 1.3450 |
| 1.0062 | 3.0 | 132786 | 1.4571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "bert-base-uncased-finetuned-squad", "results": []}]}
|
ericRosello/bert-base-uncased-finetuned-squad-frozen-v2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
bert-base-uncased-finetuned-squad
=================================
This model is a fine-tuned version of bert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.4571
Model description
-----------------
Most base model weights were frozen, leaving only the last layer (the QA outputs head) and the last 3 encoder layers to be fine-tuned.
Training and evaluation data
----------------------------
Achieved EM: 76.77388836329234, F1: 85.41893520501723
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 2
* eval\_batch\_size: 2
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #bert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 2\n* eval\\_batch\\_size: 2\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 4.3629
## Model description
Base model weights were frozen, leaving only the last layer (the QA outputs head) to be fine-tuned.
## Training and evaluation data
Achieved EM: 4.7776726584673606, F1: 11.440882287905591
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 4.679 | 1.0 | 5533 | 4.6713 |
| 4.4171 | 2.0 | 11066 | 4.4218 |
| 4.3464 | 3.0 | 16599 | 4.3629 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
ericRosello/distilbert-base-uncased-finetuned-squad-frozen-v1
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 4.3629
Model description
-----------------
Base model weights were frozen, leaving only the last layer (the QA outputs head) to be fine-tuned.
Training and evaluation data
----------------------------
Achieved EM: 4.7776726584673606, F1: 11.440882287905591
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
question-answering
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2104
## Model description
Most base model weights were frozen, leaving only the last layer (qa outputs) and the last 3 encoder layers to be fine-tuned.
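A minimal sketch of this partial freezing (an assumption; the card does not include the original script):

```python
from transformers import AutoModelForQuestionAnswering

model = AutoModelForQuestionAnswering.from_pretrained("distilbert-base-uncased")

# Freeze everything in the encoder first ...
for param in model.distilbert.parameters():
    param.requires_grad = False

# ... then unfreeze the last 3 of DistilBERT's 6 transformer layers.
# The qa_outputs head is trainable by default.
for layer in model.distilbert.transformer.layer[-3:]:
    for param in layer.parameters():
        param.requires_grad = True
```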
## Training and evaluation data
Achieved EM: 73.519394512772, F1: 82.71779517079237
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.3937 | 1.0 | 5533 | 1.2915 |
| 1.1522 | 2.0 | 11066 | 1.2227 |
| 1.0055 | 3.0 | 16599 | 1.2104 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "datasets": ["squad"], "model-index": [{"name": "distilbert-base-uncased-finetuned-squad", "results": []}]}
|
ericRosello/distilbert-base-uncased-finetuned-squad-frozen-v2
| null |
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us
|
distilbert-base-uncased-finetuned-squad
=======================================
This model is a fine-tuned version of distilbert-base-uncased on the squad dataset.
It achieves the following results on the evaluation set:
* Loss: 1.2104
Model description
-----------------
Most base model weights were frozen, leaving only the last layer (qa outputs) and the last 3 encoder layers to be fine-tuned.
Training and evaluation data
----------------------------
Achieved EM: 73.519394512772, F1: 82.71779517079237
Training procedure
------------------
### Training hyperparameters
The following hyperparameters were used during training:
* learning\_rate: 2e-05
* train\_batch\_size: 16
* eval\_batch\_size: 16
* seed: 42
* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
* lr\_scheduler\_type: linear
* num\_epochs: 3
### Training results
### Framework versions
* Transformers 4.15.0
* Pytorch 1.10.0+cu111
* Datasets 1.17.0
* Tokenizers 0.10.3
|
[
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #distilbert #question-answering #generated_from_trainer #dataset-squad #license-apache-2.0 #endpoints_compatible #region-us \n",
"### Training hyperparameters\n\n\nThe following hyperparameters were used during training:\n\n\n* learning\\_rate: 2e-05\n* train\\_batch\\_size: 16\n* eval\\_batch\\_size: 16\n* seed: 42\n* optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n* lr\\_scheduler\\_type: linear\n* num\\_epochs: 3",
"### Training results",
"### Framework versions\n\n\n* Transformers 4.15.0\n* Pytorch 1.10.0+cu111\n* Datasets 1.17.0\n* Tokenizers 0.10.3"
] |
text-generation
|
transformers
|
# Harry Potter DialoGPT Model
|
{"tags": ["conversational"]}
|
ericklasco/DialoGPT-small-erickHarryPotter
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Harry Potter DialoGPT Model
|
[
"# Harry Potter DialoGPT Model"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Harry Potter DialoGPT Model"
] |
text-generation
|
transformers
|
# Rick
|
{"tags": ["conversational"]}
|
ericzhou/DialoGPT-Medium-Rick
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# Rick
|
[
"# Rick"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# Rick"
] |
text-generation
|
transformers
|
# rick
|
{"tags": ["conversational"]}
|
ericzhou/DialoGPT-Medium-Rick_v2
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# rick
|
[
"# rick"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# rick"
] |
text-generation
|
transformers
|
# elon
|
{"tags": ["conversational"]}
|
ericzhou/DialoGPT-medium-elon
| null |
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"conversational",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# elon
|
[
"# elon"
] |
[
"TAGS\n#transformers #pytorch #gpt2 #text-generation #conversational #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# elon"
] |
text-generation
|
transformers
|
# GPT2 Keyword Based Lecture Generator
## Model description
GPT2 fine-tuned on the TED Talks Dataset (published under the Creative Commons BY-NC-ND license).
## Intended uses
Used to generate spoken-word lectures.
### How to use
Input text:
<BOS> title <|SEP|> Some keywords <|SEP|>
Keyword Format: "Main Topic"."Subtopic1","Subtopic2","Subtopic3"
Code Example:
```python
import torch

# `tokenizer` and `model` are the fine-tuned GPT2 tokenizer and model;
# "<BOS>" and "<|SEP|>" are the special tokens added during fine-tuning.
prompt = "<BOS>" + title + "<|SEP|>" + keywords + "<|SEP|>"
generated = torch.tensor(tokenizer.encode(prompt)).unsqueeze(0)
model.eval()
```
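A hypothetical continuation that samples a lecture from the encoded prompt (not part of the original card):

```python
output_ids = model.generate(
    generated,
    max_length=300,
    do_sample=True,
    top_p=0.95,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```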
|
{}
|
erikinfo/gpt2TEDlectures
| null |
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# GPT2 Keyword Based Lecture Generator
## Model description
GPT2 fine-tuned on the TED Talks Dataset (published under the Creative Commons BY-NC-ND license).
## Intended uses
Used to generate spoken-word lectures.
### How to use
Input text:
<BOS> title <|SEP|> Some keywords <|SEP|>
Keyword Format: "Main Topic"."Subtopic1","Subtopic2","Subtopic3"
Code Example:
|
[
"# GPT2 Keyword Based Lecture Generator",
"## Model description\n\nGPT2 fine-tuned on the TED Talks Dataset (published under the Creative Commons BY-NC-ND license).",
"## Intended uses\n\nUsed to generate spoken-word lectures.",
"### How to use\n\nInput text: \n\n <BOS> title <|SEP|> Some keywords <|SEP|>\n\nKeyword Format: \"Main Topic\".\"Subtopic1\",\"Subtopic2\",\"Subtopic3\"\n\nCode Example:"
] |
[
"TAGS\n#transformers #pytorch #jax #gpt2 #text-generation #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# GPT2 Keyword Based Lecture Generator",
"## Model description\n\nGPT2 fine-tuned on the TED Talks Dataset (published under the Creative Commons BY-NC-ND license).",
"## Intended uses\n\nUsed to generate spoken-word lectures.",
"### How to use\n\nInput text: \n\n <BOS> title <|SEP|> Some keywords <|SEP|>\n\nKeyword Format: \"Main Topic\".\"Subtopic1\",\"Subtopic2\",\"Subtopic3\"\n\nCode Example:"
] |
text-classification
|
transformers
|
# Classifying Text into DB07 Codes
This model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned to classify Danish descriptions of activities into [Dansk Branchekode DB07](https://www.dst.dk/en/Statistik/dokumentation/nomenklaturer/dansk-branchekode-db07) codes.
## Data
Approximately 2.5 million business names and descriptions of activities from Norwegian and Danish businesses were used to fine-tune the model. The Norwegian descriptions were translated into Danish and the Norwegian SN 2007 codes were translated into Danish DB07 codes.
Activity descriptions and business names were concatenated but separated by the separator token `</s>`. Thus, the model was trained on input texts in the format `f"{description_of_activity}</s>{business_name}"`.
## Quick Start
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("erst/xlm-roberta-base-finetuned-db07")
model = AutoModelForSequenceClassification.from_pretrained("erst/xlm-roberta-base-finetuned-db07")
pl = pipeline(
"sentiment-analysis",
model=model,
tokenizer=tokenizer,
return_all_scores=False,
)
pl("Vi sælger sko")
pl("We sell clothes</s>Clothing ApS")
```
## License
This model is released under the MIT License.
|
{}
|
erst/xlm-roberta-base-finetuned-db07
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #xlm-roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
# Classifying Text into DB07 Codes
This model is xlm-roberta-base fine-tuned to classify Danish descriptions of activities into Dansk Branchekode DB07 codes.
## Data
Approximately 2.5 million business names and descriptions of activities from Norwegian and Danish businesses were used to fine-tune the model. The Norwegian descriptions were translated into Danish and the Norwegian SN 2007 codes were translated into Danish DB07 codes.
Activity descriptions and business names were concatenated but separated by the separator token '</s>'. Thus, the model was trained on input texts in the format 'f"{description_of_activity}</s>{business_name}"'.
## Quick Start
## License
This model is released under the MIT License.
|
[
"# Classifying Text into DB07 Codes\n\nThis model is xlm-roberta-base fine-tuned to classify Danish descriptions of activities into Dansk Branchekode DB07 codes.",
"## Data\nApproximately 2.5 million business names and descriptions of activities from Norwegian and Danish businesses were used to fine-tune the model. The Norwegian descriptions were translated into Danish and the Norwegian SN 2007 codes were translated into Danish DB07 codes.\n\nActivity descriptions and business names were concatenated but separated by the separator token '</s>'. Thus, the model was trained on input texts in the format 'f\"{description_of_activity}</s>{business_name}\"'.",
"## Quick Start",
"## License\n\nThis model is released under the MIT License."
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# Classifying Text into DB07 Codes\n\nThis model is xlm-roberta-base fine-tuned to classify Danish descriptions of activities into Dansk Branchekode DB07 codes.",
"## Data\nApproximately 2.5 million business names and descriptions of activities from Norwegian and Danish businesses were used to fine-tune the model. The Norwegian descriptions were translated into Danish and the Norwegian SN 2007 codes were translated into Danish DB07 codes.\n\nActivity descriptions and business names were concatenated but separated by the separator token '</s>'. Thus, the model was trained on input texts in the format 'f\"{description_of_activity}</s>{business_name}\"'.",
"## Quick Start",
"## License\n\nThis model is released under the MIT License."
] |
text-classification
|
transformers
|
# Classifying Text into NACE Codes
This model is [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) fine-tuned to classify descriptions of activities into [NACE Rev. 2](https://ec.europa.eu/eurostat/web/nace-rev2) codes.
## Data
The data used to fine-tune the model consist of 2.5 million descriptions of activities from Norwegian and Danish businesses. To improve the model's multilingual performance, random samples of the Norwegian and Danish descriptions were machine translated into the following languages:
- English
- German
- Spanish
- French
- Finnish
- Polish
## Quick Start
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("erst/xlm-roberta-base-finetuned-nace")
model = AutoModelForSequenceClassification.from_pretrained("erst/xlm-roberta-base-finetuned-nace")
pl = pipeline(
"sentiment-analysis",
model=model,
tokenizer=tokenizer,
return_all_scores=False,
)
pl("The purpose of our company is to build houses")
```
## License
This model is released under the MIT License
|
{}
|
erst/xlm-roberta-base-finetuned-nace
| null |
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #xlm-roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us
|
# Classifying Text into NACE Codes
This model is xlm-roberta-base fine-tuned to classify descriptions of activities into NACE Rev. 2 codes.
## Data
The data used to fine-tune the model consist of 2.5 million descriptions of activities from Norwegian and Danish businesses. To improve the model's multilingual performance, random samples of the Norwegian and Danish descriptions were machine translated into the following languages:
- English
- German
- Spanish
- French
- Finnish
- Polish
## Quick Start
## License
This model is released under the MIT License
|
[
"# Classifying Text into NACE Codes\n\nThis model is xlm-roberta-base fine-tuned to classify descriptions of activities into NACE Rev. 2 codes.",
"## Data\nThe data used to fine-tune the model consist of 2.5 million descriptions of activities from Norwegian and Danish businesses. To improve the model's multilingual performance, random samples of the Norwegian and Danish descriptions were machine translated into the following languages:\n- English\n- German\n- Spanish\n- French\n- Finnish\n- Polish",
"## Quick Start",
"## License\n\nThis model is released under the MIT License"
] |
[
"TAGS\n#transformers #pytorch #xlm-roberta #text-classification #autotrain_compatible #endpoints_compatible #region-us \n",
"# Classifying Text into NACE Codes\n\nThis model is xlm-roberta-base fine-tuned to classify descriptions of activities into NACE Rev. 2 codes.",
"## Data\nThe data used to fine-tune the model consist of 2.5 million descriptions of activities from Norwegian and Danish businesses. To improve the model's multilingual performance, random samples of the Norwegian and Danish descriptions were machine translated into the following languages:\n- English\n- German\n- Spanish\n- French\n- Finnish\n- Polish",
"## Quick Start",
"## License\n\nThis model is released under the MIT License"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-cocktails_recipe-base
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "t5-base", "model-index": [{"name": "t5-cocktails_recipe-base", "results": []}]}
|
erwanlc/t5-cocktails_recipe-base
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #base_model-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us
|
# t5-cocktails_recipe-base
This model is a fine-tuned version of t5-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# t5-cocktails_recipe-base\n\nThis model is a fine-tuned version of t5-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #base_model-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #has_space #text-generation-inference #region-us \n",
"# t5-cocktails_recipe-base\n\nThis model is a fine-tuned version of t5-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 5e-05\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 5",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-cocktails_recipe-small
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "t5-base", "model-index": [{"name": "t5-cocktails_recipe-small", "results": []}]}
|
erwanlc/t5-cocktails_recipe-small
| null |
[
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:t5-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# t5-cocktails_recipe-small
This model is a fine-tuned version of t5-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# t5-cocktails_recipe-small\n\nThis model is a fine-tuned version of t5-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #safetensors #t5 #text2text-generation #generated_from_trainer #base_model-t5-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# t5-cocktails_recipe-small\n\nThis model is a fine-tuned version of t5-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 8\n- eval_batch_size: 16\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-coktails_recipe-base
This model is a fine-tuned version of [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "base_model": "google/t5-v1_1-base", "model-index": [{"name": "t5-coktails_recipe-base", "results": []}]}
|
erwanlc/t5-coktails_recipe-base
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:google/t5-v1_1-base",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #base_model-google/t5-v1_1-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# t5-coktails_recipe-base
This model is a fine-tuned version of google/t5-v1_1-base on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# t5-coktails_recipe-base\n\nThis model is a fine-tuned version of google/t5-v1_1-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #base_model-google/t5-v1_1-base #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# t5-coktails_recipe-base\n\nThis model is a fine-tuned version of google/t5-v1_1-base on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
text2text-generation
|
transformers
|
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-coktails_recipe-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
{"license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "t5-coktails_recipe-small", "results": []}]}
|
erwanlc/t5-coktails_recipe-small
| null |
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[] |
TAGS
#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us
|
# t5-coktails_recipe-small
This model is a fine-tuned version of t5-small on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
[
"# t5-coktails_recipe-small\n\nThis model is a fine-tuned version of t5-small on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
[
"TAGS\n#transformers #pytorch #tensorboard #t5 #text2text-generation #generated_from_trainer #license-apache-2.0 #autotrain_compatible #endpoints_compatible #text-generation-inference #region-us \n",
"# t5-coktails_recipe-small\n\nThis model is a fine-tuned version of t5-small on an unknown dataset.",
"## Model description\n\nMore information needed",
"## Intended uses & limitations\n\nMore information needed",
"## Training and evaluation data\n\nMore information needed",
"## Training procedure",
"### Training hyperparameters\n\nThe following hyperparameters were used during training:\n- learning_rate: 2e-05\n- train_batch_size: 4\n- eval_batch_size: 8\n- seed: 42\n- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08\n- lr_scheduler_type: linear\n- num_epochs: 3",
"### Training results",
"### Framework versions\n\n- Transformers 4.15.0\n- Pytorch 1.10.0+cu111\n- Datasets 1.17.0\n- Tokenizers 0.10.3"
] |
image-classification
|
fastai
|
## Pet breeds classification model
Finetuned model on The Oxford-IIIT Pet Dataset. It was introduced in
[this paper](https://www.robots.ox.ac.uk/~vgg/publications/2012/parkhi12a/) and first released in
[this webpage](https://www.robots.ox.ac.uk/~vgg/data/pets/).
The pretrained model was trained on the ImageNet dataset, a dataset that has 100,000+ images across 200 different classes. It was introduced in [this paper](https://image-net.org/static_files/papers/imagenet_cvpr09.pdf) and available [in this webpage](https://image-net.org/download.php)
Disclaimer: The model was fine-tuned after [Chapter 5](https://github.com/fastai/fastbook/blob/master/05_pet_breeds.ipynb) of [Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020)](https://github.com/fastai/fastbook) written by Jeremy Howard and Sylvain Gugger.
## Model description
The model was finetuned with the `cnn_learner` method of the fastai library, using a ResNet-34 backbone pretrained on the ImageNet dataset. The fastai library uses PyTorch for the underlying operations. `cnn_learner` automatically gets a pretrained model from a given architecture with a custom head that is suitable for the target data.
ResNet-34 is a 34-layer convolutional neural network. It takes residuals from each layer and uses them in the subsequent connected layers. Advantages of a ResNet architecture ([Neurohive, 2019](https://neurohive.io/en/popular-networks/resnet/)):
- They are easy to optimize, whereas “plain” networks (that simply stack layers) show higher training error as depth increases.
- They can easily gain accuracy from greatly increased depth, producing results that are better than previous networks.
Please refer to the original paper '[Deep Residual Learning for Image Recognition](https://arxiv.org/pdf/1512.03385.pdf)' written by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun.
Specifically, the model was obtained:
```python
# Fine-tune a pretrained ResNet-34 for two epochs
learn = cnn_learner(dls, resnet34, metrics=error_rate)
learn.fine_tune(2)
```
## How to use
Download the model this way:
```python
from huggingface_hub import hf_hub_download
from fastai.learner import load_learner
model = load_learner(
hf_hub_download('espejelomar/fastai-pet-breeds-classification', filename="model.pkl")
)
```
Then you can use your downloaded fastai model in any way you want. For example, if the input is a PIL Image, with the following code you can obtain the resulting outputs for each class:
```python
import numpy as np

# `inputs` is a PIL Image; `predict` returns (decoded class, class index, probabilities)
_, _, preds = model.predict(np.array(inputs))
```
## Training data
The Resnet34 model was pretrained on [ImageNet](https://image-net.org/static_files/papers/imagenet_cvpr09.pdf), a dataset that has 100,000+ images across 200 different classes, and fine-tuned on [The Oxford-IIIT Pet Dataset](https://www.robots.ox.ac.uk/~vgg/data/pets/).
## Preprocessing
For more detailed information on the preprocessing procedure, refer to [Chapter 5](https://github.com/fastai/fastbook/blob/master/05_pet_breeds.ipynb) of [Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020)](https://github.com/fastai/fastbook).
Two main strategies are followed when presizing the images:
- Resize images to relatively "large" dimensions—that is, dimensions significantly larger than the target training dimensions.
- Compose all of the common augmentation operations (including a resize to the final target size) into one, and perform the combined operation on the GPU only once at the end of processing, rather than performing the operations individually and interpolating multiple times.
"The first step, the resize, creates images large enough that they have spare margin to allow further augmentation transforms on their inner regions without creating empty zones. This transformation works by resizing to a square, using a large crop size. On the training set, the crop area is chosen randomly, and the size of the crop is selected to cover the entire width or height of the image, whichever is smaller.
In the second step, the GPU is used for all data augmentation, and all of the potentially destructive operations are done together, with a single interpolation at the end." ([Howard and Gugger, 2020](https://github.com/fastai/fastbook))
Specifically, the following code is used for preprocessing:
```python
from fastai.vision.all import *  # provides DataBlock, TensorImage, Pipeline, subplots, etc.

#hide_input
#id interpolations
#caption A comparison of fastai's data augmentation strategy (left) and the traditional approach (right).
dblock1 = DataBlock(blocks=(ImageBlock(), CategoryBlock()),
get_y=parent_label,
item_tfms=Resize(460))
# Place an image in the 'images/grizzly.jpg' subfolder where this notebook is located before running this
dls1 = dblock1.dataloaders([(Path.cwd()/'images'/'grizzly.jpg')]*100, bs=8)
dls1.train.get_idxs = lambda: Inf.ones
x,y = dls1.valid.one_batch()
_,axs = subplots(1, 2)
x1 = TensorImage(x.clone())
x1 = x1.affine_coord(sz=224)
x1 = x1.rotate(draw=30, p=1.)
x1 = x1.zoom(draw=1.2, p=1.)
x1 = x1.warp(draw_x=-0.2, draw_y=0.2, p=1.)
tfms = setup_aug_tfms([Rotate(draw=30, p=1, size=224), Zoom(draw=1.2, p=1., size=224),
Warp(draw_x=-0.2, draw_y=0.2, p=1., size=224)])
x = Pipeline(tfms)(x)
#x.affine_coord(coord_tfm=coord_tfm, sz=size, mode=mode, pad_mode=pad_mode)
TensorImage(x[0]).show(ctx=axs[0])
TensorImage(x1[0]).show(ctx=axs[1]);
```
### BibTeX entry and citation info
```bibtex
@book{howard2020deep,
author = {Howard, J. and Gugger, S.},
title = {Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD},
isbn = {9781492045526},
year = {2020},
url = {https://books.google.no/books?id=xd6LxgEACAAJ},
publisher = {O'Reilly Media, Incorporated},
}
```
|
{"library_name": "fastai", "tags": ["image-classification", "fastai"], "datasets": ["Oxford-IIIT Pet Dataset", "ImageNet"]}
|
espejelomar/fastai-pet-breeds-classification
| null |
[
"fastai",
"image-classification",
"arxiv:1512.03385",
"has_space",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1512.03385"
] |
[] |
TAGS
#fastai #image-classification #arxiv-1512.03385 #has_space #region-us
|
## Pet breeds classification model
Finetuned model on The Oxford-IIIT Pet Dataset. It was introduced in
this paper and first released in
this webpage.
The pretrained model was trained on the ImageNet dataset, a dataset that has 100,000+ images across 200 different classes. It was introduced in this paper and available in this webpage
Disclaimer: The model was fine-tuned after Chapter 5 of Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020) written by Jeremy Howard and Sylvain Gugger.
## Model description
The model was finetuned with the 'cnn_learner' method of the fastai library, using a ResNet-34 backbone pretrained on the ImageNet dataset. The fastai library uses PyTorch for the underlying operations. 'cnn_learner' automatically gets a pretrained model from a given architecture with a custom head that is suitable for the target data.
ResNet-34 is a 34-layer convolutional neural network. It takes residuals from each layer and uses them in the subsequent connected layers. Advantages of a ResNet architecture (Neurohive, 2019):
- They are easy to optimize, whereas “plain” networks (that simply stack layers) show higher training error as depth increases.
- They can easily gain accuracy from greatly increased depth, producing results that are better than previous networks.
Please refer to the original paper 'Deep Residual Learning for Image Recognition' written by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun.
Specifically, the model was obtained:
## How to use
Download the model this way:
Then you can use your downloaded fastai model in any way you want. For example, if the input is a PIL Image, with the following code you can obtain the resulting outputs for each class:
## Training data
The Resnet34 model was pretrained on ImageNet, a dataset that has 100,000+ images across 200 different classes, and fine-tuned on The Oxford-IIIT Pet Dataset.
## Preprocessing
For more detailed information on the preprocessing procedure, refer to Chapter 5 of Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020).
Two main strategies are followed when presizing the images:
- Resize images to relatively "large" dimensions—that is, dimensions significantly larger than the target training dimensions.
- Compose all of the common augmentation operations (including a resize to the final target size) into one, and perform the combined operation on the GPU only once at the end of processing, rather than performing the operations individually and interpolating multiple times.
"The first step, the resize, creates images large enough that they have spare margin to allow further augmentation transforms on their inner regions without creating empty zones. This transformation works by resizing to a square, using a large crop size. On the training set, the crop area is chosen randomly, and the size of the crop is selected to cover the entire width or height of the image, whichever is smaller.
In the second step, the GPU is used for all data augmentation, and all of the potentially destructive operations are done together, with a single interpolation at the end." (Howard and Gugger, 2020)
Specifically, the following code is used for preprocessing:
### BibTeX entry and citation info
|
[
"## Pet breeds classification model\n\nFinetuned model on The Oxford-IIIT Pet Dataset. It was introduced in\nthis paper and first released in\nthis webpage.\n\nThe pretrained model was trained on the ImageNet dataset, a dataset that has 100,000+ images across 200 different classes. It was introduced in this paper and available in this webpage\n\nDisclaimer: The model was fine-tuned after Chapter 5 of Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020) written by Jeremy Howard and Sylvain Gugger.",
"## Model description\n\nThe model was finetuned using the 'cnn_learner' method of the fastai library suing a Resnet 34 backbone pretrained on the ImageNet dataset. The fastai library uses PyTorch for the undelying operations. 'cnn_learner' automatically gets a pretrained model from a given architecture with a custom head that is suitable for the target data.\n\nResnet34 is a 34 layer convolutional neural network. It takes residuals from each layer and uses them in the subsequent connected layers. Advantages of a resnet arquitecture (Neurohive, 2019):\n- Are easy to optimize, but the “plain” networks (that simply stack layers) shows higher training error when the depth increases.\n- Can easily gain accuracy from greatly increased depth, producing results which are better than previous networks.\n\n Please refer to the original paper 'Deep Residual Learning for Image Recognition' written by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun.\n\nSpecifically, the model was obtained:",
"## How to use\n\nDownload the model this way:\n\n\nThen you can use your downloaded fastai model in any way you want. For example, if the input is a PIL Image, with the following code you can obtain the resulting outputs for each class:",
"## Training data\n\nThe Resnet34 model was pretrained on ImageNet, a dataset that has 100,000+ images across 200 different classes, and fine-tuned on The Oxford-IIIT Pet Dataset.",
"## Preprocessing\n\nFor more detailed information on the preprocessing procedure, refer to the Chapter 5 of Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020).\n\nTwo main strategies are followed to presizing the images:\n\n- Resize images to relatively \"large\" dimensions—that is, dimensions significantly larger than the target training dimensions.\n- Compose all of the common augmentation operations (including a resize to the final target size) into one, and perform the combined operation on the GPU only once at the end of processing, rather than performing the operations individually and interpolating multiple times.\n\n\"The first step, the resize, creates images large enough that they have spare margin to allow further augmentation transforms on their inner regions without creating empty zones. This transformation works by resizing to a square, using a large crop size. On the training set, the crop area is chosen randomly, and the size of the crop is selected to cover the entire width or height of the image, whichever is smaller.\n\nIn the second step, the GPU is used for all data augmentation, and all of the potentially destructive operations are done together, with a single interpolation at the end.\" (Howard and Gugger, 2020)\n\nSpecifically, the following code is used for preprocessing:",
"### BibTeX entry and citation info"
] |
[
"TAGS\n#fastai #image-classification #arxiv-1512.03385 #has_space #region-us \n",
"## Pet breeds classification model\n\nFinetuned model on The Oxford-IIIT Pet Dataset. It was introduced in\nthis paper and first released in\nthis webpage.\n\nThe pretrained model was trained on the ImageNet dataset, a dataset that has 100,000+ images across 200 different classes. It was introduced in this paper and available in this webpage\n\nDisclaimer: The model was fine-tuned after Chapter 5 of Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020) written by Jeremy Howard and Sylvain Gugger.",
"## Model description\n\nThe model was finetuned using the 'cnn_learner' method of the fastai library suing a Resnet 34 backbone pretrained on the ImageNet dataset. The fastai library uses PyTorch for the undelying operations. 'cnn_learner' automatically gets a pretrained model from a given architecture with a custom head that is suitable for the target data.\n\nResnet34 is a 34 layer convolutional neural network. It takes residuals from each layer and uses them in the subsequent connected layers. Advantages of a resnet arquitecture (Neurohive, 2019):\n- Are easy to optimize, but the “plain” networks (that simply stack layers) shows higher training error when the depth increases.\n- Can easily gain accuracy from greatly increased depth, producing results which are better than previous networks.\n\n Please refer to the original paper 'Deep Residual Learning for Image Recognition' written by Kaiming He, Xiangyu Zhang, Shaoqing Ren and Jian Sun.\n\nSpecifically, the model was obtained:",
"## How to use\n\nDownload the model this way:\n\n\nThen you can use your downloaded fastai model in any way you want. For example, if the input is a PIL Image, with the following code you can obtain the resulting outputs for each class:",
"## Training data\n\nThe Resnet34 model was pretrained on ImageNet, a dataset that has 100,000+ images across 200 different classes, and fine-tuned on The Oxford-IIIT Pet Dataset.",
"## Preprocessing\n\nFor more detailed information on the preprocessing procedure, refer to the Chapter 5 of Deep Learning for Coders with Fastai and Pytorch: AI Applications Without a PhD (2020).\n\nTwo main strategies are followed to presizing the images:\n\n- Resize images to relatively \"large\" dimensions—that is, dimensions significantly larger than the target training dimensions.\n- Compose all of the common augmentation operations (including a resize to the final target size) into one, and perform the combined operation on the GPU only once at the end of processing, rather than performing the operations individually and interpolating multiple times.\n\n\"The first step, the resize, creates images large enough that they have spare margin to allow further augmentation transforms on their inner regions without creating empty zones. This transformation works by resizing to a square, using a large crop size. On the training set, the crop area is chosen randomly, and the size of the crop is selected to cover the entire width or height of the image, whichever is smaller.\n\nIn the second step, the GPU is used for all data augmentation, and all of the potentially destructive operations are done together, with a single interpolation at the end.\" (Howard and Gugger, 2020)\n\nSpecifically, the following code is used for preprocessing:",
"### BibTeX entry and citation info"
] |
audio-to-audio
|
espnet
|
## Example ESPnet2 ENH model
### `Chenda_Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave`
♻️ Imported from https://zenodo.org/record/4498562/
This model was trained by Chenda Li using wsj0_2mix/enh1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
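Until the official demo lands, here is a hedged sketch of the usual ESPnet2 inference pattern (the `ModelDownloader`/`SeparateSpeech` usage below is assumed from ESPnet's documentation, not taken from this card):

```python
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.enh_inference import SeparateSpeech

# Resolve and unpack the model by its zoo name (assumed to match this card's ID).
d = ModelDownloader()
separate_speech = SeparateSpeech(
    **d.download_and_unpack("Chenda_Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave")
)

# mixture: (num_samples,) waveform at 8 kHz; the model separates the two speakers.
mixture, fs = soundfile.read("mixture.wav")
separated_sources = separate_speech(mixture[None, :], fs=fs)
```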
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speech-enhancement", "audio-to-audio"], "datasets": ["wsj0_2mix"]}
|
espnet/Chenda_Li_wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave
| null |
[
"espnet",
"audio",
"speech-enhancement",
"audio-to-audio",
"en",
"dataset:wsj0_2mix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #speech-enhancement #audio-to-audio #en #dataset-wsj0_2mix #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ENH model
### 'Chenda_Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave'
️ Imported from URL
This model was trained by Chenda Li using wsj0_2mix/enh1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ENH model",
"### 'Chenda_Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave'\n️ Imported from URL\n\nThis model was trained by Chenda Li using wsj0_2mix/enh1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #speech-enhancement #audio-to-audio #en #dataset-wsj0_2mix #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ENH model",
"### 'Chenda_Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave'\n️ Imported from URL\n\nThis model was trained by Chenda Li using wsj0_2mix/enh1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
audio-to-audio
|
espnet
|
## Example ESPnet2 ENH model
### `Chenda_Li/wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave`
♻️ Imported from https://zenodo.org/record/4498554/
This model was trained by Chenda Li using wsj0_2mix/enh1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speech-enhancement", "audio-to-audio"], "datasets": ["wsj0_2mix"]}
|
espnet/Chenda_Li_wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave
| null |
[
"espnet",
"audio",
"speech-enhancement",
"audio-to-audio",
"en",
"dataset:wsj0_2mix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #speech-enhancement #audio-to-audio #en #dataset-wsj0_2mix #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ENH model
### 'Chenda_Li/wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave'
️ Imported from URL
This model was trained by Chenda Li using wsj0_2mix/enh1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ENH model",
"### 'Chenda_Li/wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave'\n️ Imported from URL\n\nThis model was trained by Chenda Li using wsj0_2mix/enh1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #speech-enhancement #audio-to-audio #en #dataset-wsj0_2mix #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ENH model",
"### 'Chenda_Li/wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave'\n️ Imported from URL\n\nThis model was trained by Chenda Li using wsj0_2mix/enh1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `Dan_Berrebbi_aishell4_asr`
This model was trained by dan_berrebbi using aishell4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout da1a26652f7d5a019cc24ad1e0e6e844f2b57e1b
pip install -e .
cd egs2/aishell4/asr1
./run.sh --skip_data_prep false --skip_train true --download_model Dan_Berrebbi_aishell4_asr
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Tue Sep 21 09:36:01 EDT 2021`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: `7887faeabbc2299922267928e190ed89cb032a36`
- Commit date: `Mon Sep 20 16:25:02 2021 -0400`
## asr_fine_tune5_100ep
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|601|6.8|92.7|0.5|0.0|93.2|93.2|
|decode_transformer_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|601|6.8|92.8|0.3|0.0|93.2|93.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_rnn_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|15936|66.9|25.6|7.5|9.8|42.9|93.2|
|decode_transformer_lm_lm_nuit_valid.loss.ave_asr_model_valid.acc.ave/dev|599|15936|64.7|27.6|7.7|11.0|46.3|93.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer5.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_fine_tune5_100ep
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 3
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 10000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_zh_char/train/speech_shape
- exp/asr_stats_raw_zh_char/train/text_shape.char
valid_shape_file:
- exp/asr_stats_raw_zh_char/valid/speech_shape
- exp/asr_stats_raw_zh_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 51200
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_nodev/wav.scp
- speech
- sound
- - dump/raw/train_nodev/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 4.0
scheduler: noamlr
scheduler_conf:
model_size: 256
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ,
- 的
- 是
- 个
- 这
- 一
- 。
- 就
- 儿
- 嗯
- 们
- 呃
- 我
- 有
- <sil>
- 那
- 说
- 不
- 些
- 也
- 他
- 你
- 要
- 后
- 以
- 咱
- 在
- 啊
- 了
- 然
- 家
- 都
- 来
- 还
- 可
- 子
- 下
- 上
- 时
- 比
- 话
- 孩
- 呢
- 去
- 人
- 好
- 对
- 能
- 么
- 吧
- 学
- 多
- 到
- 看
- 为
- 进
- 把
- 大
- 做
- 生
- 种
- 品
- 给
- 没
- 行
- 现
- 小
- 会
- 作
- 较
- 方
- 块
- 业
- 让
- 点
- 定
- 因
- 什
- 长
- 面
- 如
- 安
- 客
- 问
- 过
- 车
- 出
- 啦
- 边
- 候
- 主
- 所
- 题
- 买
- 销
- 天
- 意
- 自
- 全
- 动
- 工
- '&'
- 老
- 或
- 者
- 年
- 着
- 实
- 活
- 理
- 包
- 样
- 再
- 区
- 用
- 呀
- 零
- 员
- 发
- 先
- 部
- 放
- 门
- 情
- 像
- 分
- 售
- 很
- 开
- 己
- 十
- 括
- 跟
- 事
- 需
- 更
- 其
- 装
- 市
- 成
- 里
- 物
- 别
- 间
- 第
- 次
- 中
- 提
- 超
- 顾
- 保
- 感
- 加
- 量
- 二
- 和
- 各
- 嘛
- 新
- 每
- 完
- 力
- 消
- 得
- 店
- 本
- 通
- 习
- 觉
- 道
- 心
- 校
- 菜
- 交
- 哪
- 产
- 于
- 位
- 电
- 想
- 三
- 况
- 度
- 期
- 应
- 但
- 教
- 体
- 常
- 师
- 它
- 高
- 前
- 之
- 西
- 特
- 商
- 果
- 场
- 重
- 防
- 管
- 起
- 地
- 该
- 东
- 少
- 打
- 费
- 当
- 带
- 服
- 口
- 购
- 知
- 回
- 同
- 钱
- 外
- 户
- 注
- 促
- 价
- 解
- <#>
- 水
- 百
- 今
- 太
- 最
- 报
- 怎
- 才
- 等
- 及
- 关
- <->
- 肯
- 火
- 机
- 流
- 制
- 送
- 手
- 确
- 法
- 写
- 玩
- 传
- 路
- 班
- 查
- 招
- 卖
- 几
- 正
- 合
- 够
- 五
- 引
- 容
- 只
- 男
- 日
- 四
- 宣
- 反
- 两
- 清
- 处
- 周
- 单
- 首
- 课
- 衣
- 便
- 身
- 气
- 针
- 奶
- 六
- 经
- 接
- 女
- 育
- 鲜
- 赠
- 试
- 停
- 晚
- 类
- 故
- 入
- 性
- 增
- 食
- 满
- 格
- 基
- 备
- 洗
- 培
- 质
- 美
- 明
- 整
- 化
- 公
- 案
- 哎
- 吸
- 原
- 易
- 幺
- 总
- 尽
- 优
- 而
- 建
- 责
- 啥
- 干
- 月
- 使
- 找
- 季
- 望
- 器
- 目
- 识
- 低
- 听
- 烟
- 相
- 早
- 检
- 护
- 摆
- 住
- 直
- 从
- 务
- 希
- 导
- 内
- 八
- 持
- 近
- 配
- 叫
- 见
- 设
- 吗
- 非
- 调
- 程
- 拿
- 训
- <%>
- 结
- 标
- 挺
- 花
- <$>
- 受
- 式
- 求
- 平
- 换
- 具
- 愿
- 货
- 牌
- 专
- 轻
- 推
- 妈
- 司
- 辆
- 存
- 名
- 且
- 欢
- 喜
- 吃
- 数
- 段
- 议
- 控
- 往
- 礼
- 决
- 走
- 养
- 免
- 惠
- 园
- 档
- 谁
- 真
- 快
- 置
- 幼
- 乐
- 证
- 向
- 厂
- 简
- 声
- 视
- 划
- 绩
- 适
- 集
- 搞
- 办
- 规
- 灾
- 造
- 准
- 必
- 任
- 险
- 响
- 毕
- 群
- 鞋
- 九
- 嘞
- 信
- 库
- 计
- 认
- 奖
- 表
- 无
- 影
- 头
- 卡
- 告
- 考
- 抽
- 竟
- 选
- 帮
- 何
- 修
- 酒
- 尤
- 线
- 穿
- 讲
- 光
- 留
- 讨
- 随
- 请
- 卫
- 系
- 队
- 失
- 双
- 庭
- 强
- 微
- 折
- 色
- 半
- 否
- 立
- 差
- 沟
- 冬
- 批
- 害
- 已
- 危
- 白
- 爆
- 节
- 参
- 逛
- 搭
- 风
- 朋
- 友
- 环
- 验
- 评
- 严
- 般
- 效
- 舞
- 饭
- 境
- 负
- 又
- 底
- 术
- 刚
- 件
- 罚
- 助
- 态
- 状
- 室
- 房
- 游
- 息
- 领
- 难
- 警
- 按
- 级
- 错
- 利
- 与
- 餐
- 陪
- 蹈
- 论
- 记
- 许
- 马
- 算
- 楼
- 型
- 排
- 广
- 值
- 油
- 糕
- 楚
- 步
- 至
- 拉
- 紧
- 灯
- 升
- 七
- 共
- 努
- 除
- 展
- 形
- 元
- 网
- 宜
- 营
- 兴
- 互
- 蛋
- 燃
- 冷
- 条
- 思
- 巡
- 净
- 须
- 遇
- 落
- 禁
- 科
- 款
- 哦
- 止
- 采
- 材
- 介
- 套
- 围
- 维
- 旦
- 切
- 显
- 汇
- 损
- 速
- 越
- 模
- 假
- 精
- 稍
- 书
- 绍
- 父
- 积
- 策
- 示
- 骑
- 改
- 跑
- 运
- 变
- 洁
- 仓
- 鱼
- <space>
- 绝
- 诶
- 伤
- 细
- 职
- 离
- 慢
- 素
- 料
- 睡
- 趣
- 爱
- 母
- 眼
- 味
- 列
- 督
- 张
- 率
- 被
- 域
- 语
- 坏
- 资
- 红
- 减
- 励
- 择
- 预
- 层
- 陈
- 根
- 休
- 毒
- 球
- 爸
- 登
- 足
- 取
- 指
- 柜
- 限
- 降
- 概
- 院
- 供
- 支
- 额
- 源
- 始
- 盘
- 饮
- 项
- 液
- 童
- 爷
- 号
- 抓
- 台
- 转
- 观
- 金
- 照
- 滑
- 岁
- 致
- 文
- 她
- 弄
- 站
- 酸
- 音
- 胎
- 投
- 疏
- 乱
- 临
- 允
- 狗
- 疫
- 询
- 、
- 象
- 占
- 坐
- 倒
- 争
- 午
- 亲
- 读
- 演
- 退
- 惯
- 贵
- 达
- 监
- 志
- 绿
- 醒
- 急
- 驾
- 违
- 诉
- 片
- 空
- 势
- 极
- 豆
- 独
- 钟
- 代
- 瓶
- 纸
- 并
- 企
- 映
- 统
- 属
- 省
- 夜
- 障
- 谈
- 避
- 由
- 终
- 频
- 掉
- 估
- 激
- 仅
- 布
- 谢
- 灭
- 忙
- 码
- 伙
- 缺
- 叶
- 功
- 析
- 赖
- 架
- 范
- 签
- D
- 待
- 神
- 龄
- 画
- 券
- 居
- 杜
- 堵
- 您
- 勤
- 扫
- 技
- 财
- 隐
- 患
- 例
- 乘
- 摩
- 戏
- 鼓
- 份
- 杂
- 散
- 热
- 铺
- 据
- 肤
- 怕
- 依
- 拖
- 充
- 智
- 偷
- 远
- 挂
- 盗
- 附
- 梯
- 冰
- 联
- 借
- 蹭
- 异
- 蔬
- 绑
- 堂
- 将
- 厨
- 帽
- 破
- 戴
- 皮
- 粉
- 氛
- 仪
- 国
- 益
- 闯
- 惩
- 逃
- 刻
- 突
- 申
- 略
- 顿
- 毛
- 召
- 海
- 黄
- 青
- 士
- 移
- 喝
- 板
- 练
- 歌
- 千
- 床
- 享
- 磨
- 构
- 收
- 万
- 摸
- 圈
- 亮
- 刹
- 逆
- 驶
- 赶
- 松
- 呐
- 压
- 拥
- 辅
- 协
- 托
- 断
- 轮
- 善
- 哈
- 捆
- 座
- 病
- 健
- 牛
- 草
- 释
- 似
- 土
- 补
- 俩
- 堆
- 即
- 密
- 背
- 言
- 街
- 尚
- 窗
- C
- 艺
- 纠
- 纷
- 忽
- 句
- 另
- 施
- 政
- 温
- 某
- 翻
- 章
- 守
- 熟
- 民
- 续
- 良
- 挤
- 础
- 字
- 瓜
- 乎
- 竞
- 距
- 际
- 暖
- 凭
- 董
- 碗
- 短
- 渠
- 康
- 藏
- 香
- 虽
- 露
- 厉
- 忘
- 误
- 冒
- 窃
- 络
- 淡
- 腐
- 颜
- 播
- 默
- 锻
- 炼
- 宝
- 组
- 淘
- 则
- 逻
- 垃
- 圾
- 复
- 贴
- 靠
- 潜
- 察
- 晨
- 碰
- 剩
- 峰
- 深
- 偏
- 虑
- 念
- 初
- 闹
- 幸
- 跳
- 米
- 旧
- 蛤
- 虾
- 汽
- 苦
- 螃
- 蟹
- 冲
- 固
- 隔
- 懂
- 卷
- 镜
- 罩
- 暴
- 闭
- 野
- 玻
- 璃
- 义
- B
- 煤
- 富
- 踩
- 途
- 闲
- 紫
- 北
- 欲
- 曲
- 榜
- 垒
- 伴
- 累
- 判
- 搜
- 困
- 租
- 键
- 肥
- 社
- 弯
- 角
- 纪
- 律
- 详
- 右
- 刮
- 继
- 撤
- 输
- 普
- 未
- 稳
- 摔
- 访
- 扩
- 扣
- 末
- 票
- 承
- 担
- 丢
- 涉
- 欠
- 创
- 获
- 摊
- 疑
- 蓝
- 答
- 霜
- 录
- 齐
- 烦
- 治
- 粗
- 叛
- 污
- 址
- 若
- 染
- 含
- 药
- 雨
- 此
- 陌
- 研
- 催
- 拨
- 页
- 磕
- 呆
- 脸
- 墙
- 夫
- A
- 棉
- 袜
- 填
- 死
- 懒
- 植
- 扇
- 捡
- 遍
- 操
- 摄
- 箱
- ?
- 繁
- 城
- 咯
- 左
- 拐
- 悉
- 犯
- 宽
- 伞
- 余
- 糊
- 巧
- 透
- 贪
- 顺
- 局
- 妇
- 私
- 浪
- 岗
- 棋
- 序
- 辛
- V
- 握
- 擦
- 扔
- 斤
- 付
- 剐
- 锁
- 麻
- 敢
- 桶
- 佩
- 坠
- 封
- 替
- 塞
- 斗
- 攀
- 爽
- 沉
- 混
- 滋
- 刺
- 潮
- 皿
- 端
- 刷
- 刀
- 巾
- 烫
- 木
- 漏
- 迅
- 织
- 救
- 吹
- 仔
- 称
- 返
- 景
- 聚
- 阶
- 秀
- 涨
- P
- 颈
- 肩
- 泥
- I
- 侣
- 尔
- 伍
- 甚
- 皂
- 蒙
- 世
- 界
- 嘻
- 辈
- Q
- 审
- 尾
- 浇
- 遛
- 馨
- 措
- 邻
- 撒
- 挥
- 遵
- 予
- 击
- 鉴
- 殊
- 哇
- 载
- 添
- 盈
- 盯
- 惊
- 喷
- 荷
- 怠
- 抢
- 喂
- 饱
- 谅
- 团
- 龙
- 冻
- 图
- 掺
- 扑
- 刊
- 葱
- 薄
- 萝
- 卜
- 麦
- 苹
- 触
- 飞
- 艳
- 畅
- 鸡
- 权
- 趟
- 连
- 哭
- 旁
- 漂
- 焊
- 敞
- 叉
- 钢
- 氧
- 溺
- 聊
- 巢
- 衡
- 淀
- 劣
- 虫
- 符
- 均
- 辨
- 菌
- 彻
- 烂
- 厅
- 皱
- 妥
- 拾
- 插
- 携
- 竹
- 碍
- 湿
- 灵
- 忌
- 旅
- 勿
- 宿
- 迷
- 探
- 春
- 劵
- 星
- 耐
- 裤
- 颖
- 韩
- 艾
- 灸
- 邀
- 婚
- 乳
- 芽
- 挑
- 摘
- 阿
- 姨
- 伊
- 慕
- 纯
- 貌
- 嘴
- 偶
- 睛
- 献
- 坚
- 账
- 典
- 唱
- L
- E
- 贡
- 寒
- 唧
- Y
- 尝
- 抹
- 汰
- 腾
- 哼
- 仿
- 英
- 舒
- 扰
- 拒
- 剪
- 夏
- 宠
- 咬
- 派
- 委
- 婉
- 执
- 呗
- 悄
- 搬
- 雪
- 盐
- 暂
- 奸
- 耍
- 僻
- 却
- 署
- 寻
- 串
- 援
- 亏
- 烈
- 印
- 捎
- 幅
- 绘
- 锈
- 闸
- 罪
- 嫌
- 俗
- 歹
- 劳
- 兜
- 喽
- 谓
- 鹤
- 舍
- 克
- 徇
- 倍
- 敏
- 丝
- 纺
- 拭
- 融
- 蔫
- 掂
- 测
- T
- 众
- 卸
- 暗
- 赔
- 偿
- 举
- 劲
- 篮
- 储
- 乙
- 炔
- 软
- 侵
- 诱
- 浊
- 蚀
- 秽
- 炸
- 泽
- 闻
- 鼻
- 甜
- 澈
- 脏
- 官
- 凝
- 芳
- 灰
- 卵
- 农
- 烧
- 肉
- 桌
- 椅
- 垫
- 硬
- 叠
- 瓷
- 碎
- 柄
- 屉
- 拳
- 撞
- 铝
- 歇
- 遗
- 炮
- 掌
- 妨
- 静
- 浸
- 涂
- 凉
- 炫
- 耀
- 姓
- 究
- 奏
- 缆
- 脚
- 酿
- 抄
- 慌
- 戚
- 燥
- 毯
- 挽
- 诺
- 济
- 旺
- 抖
- 郊
- 疗
- 巴
- 痧
- 脊
- 膜
- 晒
- 润
- 掏
- 笔
- 鞭
- 博
- 捧
- 函
- 胡
- 锅
- 雾
- 疯
- 狂
- 趋
- 膏
- 妆
- 尘
- 袋
- 贝
- 俺
- 耽
- 怀
- 恐
- 赋
- 脑
- 焉
- 愣
- 呵
- 噼
- 啪
- 虚
- 河
- 归
- 绊
- 械
- 扬
- 筒
- 靴
- 束
- 彩
- 荐
- 沙
- 迎
- 荡
- 凌
- 昂
- 碑
- 蹦
- 扉
- 泼
- 丰
- 滴
- 沾
- 亭
- 粘
- 奇
- 饼
- 牙
- 娃
- 杯
- 踢
- 嘿
- 抛
- 枯
- 剔
- 苗
- 纹
- 永
- 津
- 唉
- 趁
- 屡
- 逮
- 戒
- 肃
- 仁
- 肇
- 醉
- 糟
- 馈
- 横
- 扭
- 盔
- 侧
- 鲁
- 莽
- 飙
- 稿
- 逐
- 谋
- 京
- 苏
- 宁
- 驻
- 咨
- 旷
- 拓
- 杆
- 秤
- 叮
- 嘱
- 咋
- 炊
- 怪
- 婆
- 阎
- 王
- 饿
- 鬼
- 惨
- 渡
- 坎
- 囤
- 甲
- 蛙
- 鲤
- 桂
- 石
- 玉
- 溪
- 华
- 窝
- 截
- 秩
- 嗨
- 芹
- 梨
- 蕉
- S
- 煲
- 汤
- 鲫
- 揽
- 挡
- 柚
- 瑞
- 匹
- '2'
- 踹
- 吵
- 凶
- 矩
- 迟
- 脾
- 纳
- 朵
- 墨
- 袖
- 链
- 钩
- 笼
- 熄
- 盆
- 殴
- 欺
- 诈
- 厕
- 娱
- 爬
- 威
- 胁
- 阅
- 赌
- 拢
- 症
- 伪
- 脂
- 堪
- 盛
- 蚊
- 蝇
- 煎
- 晰
- 柔
- 涩
- 汁
- 腹
- 胃
- 痉
- 挛
- 颗
- 粒
- 匀
- 败
- 历
- 佳
- 乏
- 寄
- 残
- 杀
- 剂
- 疾
- 衍
- 溅
- 倘
- 褶
- 席
- 启
- 遮
- 槽
- 递
- 橱
- 迹
- 镁
- 泄
- 阀
- 柴
- 阻
- 恋
- 盲
- 浓
- 捂
- 腰
- 姿
- 缝
- 肿
- 焦
- 骗
- 伺
- 嘘
- 掩
- 褥
- 帘
- 籍
- 锥
- 锋
- 尖
- 锐
- 祸
- 秒
- 李
- 伸
- 浏
- 览
- 航
- 讯
- 谨
- 慎
- 匪
- 劫
- 医
- 族
- 忧
- 孤
- 拜
- 窄
- 唯
- 搁
- 朝
- 尺
- 盟
- 波
- 隆
- 词
- 村
- 娶
- 媳
- 县
- 聘
- 醇
- 泡
- 坨
- 淋
- 延
- 柱
- 肾
- 蒸
- 槛
- 赚
- 凡
- 恩
- 厚
- 赞
- 茎
- 蒜
- 苔
- 甘
- 菠
- 涮
- 霾
- 仍
- 云
- 追
- 丽
- 盖
- 欧
- 莱
- 雅
- 婴
- 孕
- 敲
- 约
- 惰
- 谱
- 射
- 惑
- 睹
- 奉
- 诚
- 惶
- 卓
- 勉
- 聪
- 疼
- 弃
- 奴
- 隶
- 嚷
- 眠
- 躺
- 乒
- 乓
- 琴
- 挖
- 掘
- 阵
- 浆
- 索
- 呼
- 古
- 弥
- 熔
- 抱
- 怨
- 猫
- 笑
- 挣
- 黑
- 猛
- 令
- 核
- 磊
- 橙
- 吨
- 吊
- 蘸
- 氮
- 罐
- 战
- 懈
- 渐
- 胜
- 命
- 抬
- 缘
- 睦
- 扮
- 珠
- 颁
- 蔼
- 凳
- 饰
- 缤
- 晶
- 抵
- 遥
- 腿
- 拍
- 妻
- 羽
- 绒
- 梳
- 袄
- 述
- 跆
- 屈
- 脱
- 朗
- 劝
- 胆
- 腔
- 圆
- 亚
- 宴
- 编
- 肢
- 壶
- 暑
- 怒
- 描
- 绕
- 悦
- 忆
- 嗓
- 胖
- 疙
- 瘩
- 哒
- 碴
- 棱
- 炒
- 井
- 漫
- 烘
- 焙
- 涤
- 船
- 纱
- 君
- 茉
- 莉
- 钙
- 瞩
- <_>
- 塌
- 嗷
- 屁
- 股
- 绪
- 勇
- 奋
- 荣
- 诲
- 卑
- 挫
- 昧
- 疲
- 惫
- 册
- 呈
- 僵
- 熬
- 敬
- 呦
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: /ocean/projects/cis210027p/berrebbi/espnet/egs2/aishell4/asr1/data/nlsyms.txt
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_zh_char/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
macaron_style: true
use_cnn_module: true
cnn_module_kernel: 15
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.3a1
distributed: false
```
</details>
## LM config
<details><summary>expand</summary>
```
config: conf/train_lm_transformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/lm_nuit
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 15
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 2000000
valid_batch_bins: null
train_shape_file:
- exp/lm_stats_zh_char/train/text_shape.char
valid_shape_file:
- exp/lm_stats_zh_char/valid/text_shape.char
batch_type: numel
valid_batch_type: null
fold_length:
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/lm_train.txt
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ,
- 的
- 是
- 个
- 这
- 一
- 。
- 就
- 儿
- 嗯
- 们
- 呃
- 我
- 有
- <sil>
- 那
- 说
- 不
- 些
- 也
- 他
- 你
- 要
- 后
- 以
- 咱
- 在
- 啊
- 了
- 然
- 家
- 都
- 来
- 还
- 可
- 子
- 下
- 上
- 时
- 比
- 话
- 孩
- 呢
- 去
- 人
- 好
- 对
- 能
- 么
- 吧
- 学
- 多
- 到
- 看
- 为
- 进
- 把
- 大
- 做
- 生
- 种
- 品
- 给
- 没
- 行
- 现
- 小
- 会
- 作
- 较
- 方
- 块
- 业
- 让
- 点
- 定
- 因
- 什
- 长
- 面
- 如
- 安
- 客
- 问
- 过
- 车
- 出
- 啦
- 边
- 候
- 主
- 所
- 题
- 买
- 销
- 天
- 意
- 自
- 全
- 动
- 工
- '&'
- 老
- 或
- 者
- 年
- 着
- 实
- 活
- 理
- 包
- 样
- 再
- 区
- 用
- 呀
- 零
- 员
- 发
- 先
- 部
- 放
- 门
- 情
- 像
- 分
- 售
- 很
- 开
- 己
- 十
- 括
- 跟
- 事
- 需
- 更
- 其
- 装
- 市
- 成
- 里
- 物
- 别
- 间
- 第
- 次
- 中
- 提
- 超
- 顾
- 保
- 感
- 加
- 量
- 二
- 和
- 各
- 嘛
- 新
- 每
- 完
- 力
- 消
- 得
- 店
- 本
- 通
- 习
- 觉
- 道
- 心
- 校
- 菜
- 交
- 哪
- 产
- 于
- 位
- 电
- 想
- 三
- 况
- 度
- 期
- 应
- 但
- 教
- 体
- 常
- 师
- 它
- 高
- 前
- 之
- 西
- 特
- 商
- 果
- 场
- 重
- 防
- 管
- 起
- 地
- 该
- 东
- 少
- 打
- 费
- 当
- 带
- 服
- 口
- 购
- 知
- 回
- 同
- 钱
- 外
- 户
- 注
- 促
- 价
- 解
- <#>
- 水
- 百
- 今
- 太
- 最
- 报
- 怎
- 才
- 等
- 及
- 关
- <->
- 肯
- 火
- 机
- 流
- 制
- 送
- 手
- 确
- 法
- 写
- 玩
- 传
- 路
- 班
- 查
- 招
- 卖
- 几
- 正
- 合
- 够
- 五
- 引
- 容
- 只
- 男
- 日
- 四
- 宣
- 反
- 两
- 清
- 处
- 周
- 单
- 首
- 课
- 衣
- 便
- 身
- 气
- 针
- 奶
- 六
- 经
- 接
- 女
- 育
- 鲜
- 赠
- 试
- 停
- 晚
- 类
- 故
- 入
- 性
- 增
- 食
- 满
- 格
- 基
- 备
- 洗
- 培
- 质
- 美
- 明
- 整
- 化
- 公
- 案
- 哎
- 吸
- 原
- 易
- 幺
- 总
- 尽
- 优
- 而
- 建
- 责
- 啥
- 干
- 月
- 使
- 找
- 季
- 望
- 器
- 目
- 识
- 低
- 听
- 烟
- 相
- 早
- 检
- 护
- 摆
- 住
- 直
- 从
- 务
- 希
- 导
- 内
- 八
- 持
- 近
- 配
- 叫
- 见
- 设
- 吗
- 非
- 调
- 程
- 拿
- 训
- <%>
- 结
- 标
- 挺
- 花
- <$>
- 受
- 式
- 求
- 平
- 换
- 具
- 愿
- 货
- 牌
- 专
- 轻
- 推
- 妈
- 司
- 辆
- 存
- 名
- 且
- 欢
- 喜
- 吃
- 数
- 段
- 议
- 控
- 往
- 礼
- 决
- 走
- 养
- 免
- 惠
- 园
- 档
- 谁
- 真
- 快
- 置
- 幼
- 乐
- 证
- 向
- 厂
- 简
- 声
- 视
- 划
- 绩
- 适
- 集
- 搞
- 办
- 规
- 灾
- 造
- 准
- 必
- 任
- 险
- 响
- 毕
- 群
- 鞋
- 九
- 嘞
- 信
- 库
- 计
- 认
- 奖
- 表
- 无
- 影
- 头
- 卡
- 告
- 考
- 抽
- 竟
- 选
- 帮
- 何
- 修
- 酒
- 尤
- 线
- 穿
- 讲
- 光
- 留
- 讨
- 随
- 请
- 卫
- 系
- 队
- 失
- 双
- 庭
- 强
- 微
- 折
- 色
- 半
- 否
- 立
- 差
- 沟
- 冬
- 批
- 害
- 已
- 危
- 白
- 爆
- 节
- 参
- 逛
- 搭
- 风
- 朋
- 友
- 环
- 验
- 评
- 严
- 般
- 效
- 舞
- 饭
- 境
- 负
- 又
- 底
- 术
- 刚
- 件
- 罚
- 助
- 态
- 状
- 室
- 房
- 游
- 息
- 领
- 难
- 警
- 按
- 级
- 错
- 利
- 与
- 餐
- 陪
- 蹈
- 论
- 记
- 许
- 马
- 算
- 楼
- 型
- 排
- 广
- 值
- 油
- 糕
- 楚
- 步
- 至
- 拉
- 紧
- 灯
- 升
- 七
- 共
- 努
- 除
- 展
- 形
- 元
- 网
- 宜
- 营
- 兴
- 互
- 蛋
- 燃
- 冷
- 条
- 思
- 巡
- 净
- 须
- 遇
- 落
- 禁
- 科
- 款
- 哦
- 止
- 采
- 材
- 介
- 套
- 围
- 维
- 旦
- 切
- 显
- 汇
- 损
- 速
- 越
- 模
- 假
- 精
- 稍
- 书
- 绍
- 父
- 积
- 策
- 示
- 骑
- 改
- 跑
- 运
- 变
- 洁
- 仓
- 鱼
- <space>
- 绝
- 诶
- 伤
- 细
- 职
- 离
- 慢
- 素
- 料
- 睡
- 趣
- 爱
- 母
- 眼
- 味
- 列
- 督
- 张
- 率
- 被
- 域
- 语
- 坏
- 资
- 红
- 减
- 励
- 择
- 预
- 层
- 陈
- 根
- 休
- 毒
- 球
- 爸
- 登
- 足
- 取
- 指
- 柜
- 限
- 降
- 概
- 院
- 供
- 支
- 额
- 源
- 始
- 盘
- 饮
- 项
- 液
- 童
- 爷
- 号
- 抓
- 台
- 转
- 观
- 金
- 照
- 滑
- 岁
- 致
- 文
- 她
- 弄
- 站
- 酸
- 音
- 胎
- 投
- 疏
- 乱
- 临
- 允
- 狗
- 疫
- 询
- 、
- 象
- 占
- 坐
- 倒
- 争
- 午
- 亲
- 读
- 演
- 退
- 惯
- 贵
- 达
- 监
- 志
- 绿
- 醒
- 急
- 驾
- 违
- 诉
- 片
- 空
- 势
- 极
- 豆
- 独
- 钟
- 代
- 瓶
- 纸
- 并
- 企
- 映
- 统
- 属
- 省
- 夜
- 障
- 谈
- 避
- 由
- 终
- 频
- 掉
- 估
- 激
- 仅
- 布
- 谢
- 灭
- 忙
- 码
- 伙
- 缺
- 叶
- 功
- 析
- 赖
- 架
- 范
- 签
- D
- 待
- 神
- 龄
- 画
- 券
- 居
- 杜
- 堵
- 您
- 勤
- 扫
- 技
- 财
- 隐
- 患
- 例
- 乘
- 摩
- 戏
- 鼓
- 份
- 杂
- 散
- 热
- 铺
- 据
- 肤
- 怕
- 依
- 拖
- 充
- 智
- 偷
- 远
- 挂
- 盗
- 附
- 梯
- 冰
- 联
- 借
- 蹭
- 异
- 蔬
- 绑
- 堂
- 将
- 厨
- 帽
- 破
- 戴
- 皮
- 粉
- 氛
- 仪
- 国
- 益
- 闯
- 惩
- 逃
- 刻
- 突
- 申
- 略
- 顿
- 毛
- 召
- 海
- 黄
- 青
- 士
- 移
- 喝
- 板
- 练
- 歌
- 千
- 床
- 享
- 磨
- 构
- 收
- 万
- 摸
- 圈
- 亮
- 刹
- 逆
- 驶
- 赶
- 松
- 呐
- 压
- 拥
- 辅
- 协
- 托
- 断
- 轮
- 善
- 哈
- 捆
- 座
- 病
- 健
- 牛
- 草
- 释
- 似
- 土
- 补
- 俩
- 堆
- 即
- 密
- 背
- 言
- 街
- 尚
- 窗
- C
- 艺
- 纠
- 纷
- 忽
- 句
- 另
- 施
- 政
- 温
- 某
- 翻
- 章
- 守
- 熟
- 民
- 续
- 良
- 挤
- 础
- 字
- 瓜
- 乎
- 竞
- 距
- 际
- 暖
- 凭
- 董
- 碗
- 短
- 渠
- 康
- 藏
- 香
- 虽
- 露
- 厉
- 忘
- 误
- 冒
- 窃
- 络
- 淡
- 腐
- 颜
- 播
- 默
- 锻
- 炼
- 宝
- 组
- 淘
- 则
- 逻
- 垃
- 圾
- 复
- 贴
- 靠
- 潜
- 察
- 晨
- 碰
- 剩
- 峰
- 深
- 偏
- 虑
- 念
- 初
- 闹
- 幸
- 跳
- 米
- 旧
- 蛤
- 虾
- 汽
- 苦
- 螃
- 蟹
- 冲
- 固
- 隔
- 懂
- 卷
- 镜
- 罩
- 暴
- 闭
- 野
- 玻
- 璃
- 义
- B
- 煤
- 富
- 踩
- 途
- 闲
- 紫
- 北
- 欲
- 曲
- 榜
- 垒
- 伴
- 累
- 判
- 搜
- 困
- 租
- 键
- 肥
- 社
- 弯
- 角
- 纪
- 律
- 详
- 右
- 刮
- 继
- 撤
- 输
- 普
- 未
- 稳
- 摔
- 访
- 扩
- 扣
- 末
- 票
- 承
- 担
- 丢
- 涉
- 欠
- 创
- 获
- 摊
- 疑
- 蓝
- 答
- 霜
- 录
- 齐
- 烦
- 治
- 粗
- 叛
- 污
- 址
- 若
- 染
- 含
- 药
- 雨
- 此
- 陌
- 研
- 催
- 拨
- 页
- 磕
- 呆
- 脸
- 墙
- 夫
- A
- 棉
- 袜
- 填
- 死
- 懒
- 植
- 扇
- 捡
- 遍
- 操
- 摄
- 箱
- ?
- 繁
- 城
- 咯
- 左
- 拐
- 悉
- 犯
- 宽
- 伞
- 余
- 糊
- 巧
- 透
- 贪
- 顺
- 局
- 妇
- 私
- 浪
- 岗
- 棋
- 序
- 辛
- V
- 握
- 擦
- 扔
- 斤
- 付
- 剐
- 锁
- 麻
- 敢
- 桶
- 佩
- 坠
- 封
- 替
- 塞
- 斗
- 攀
- 爽
- 沉
- 混
- 滋
- 刺
- 潮
- 皿
- 端
- 刷
- 刀
- 巾
- 烫
- 木
- 漏
- 迅
- 织
- 救
- 吹
- 仔
- 称
- 返
- 景
- 聚
- 阶
- 秀
- 涨
- P
- 颈
- 肩
- 泥
- I
- 侣
- 尔
- 伍
- 甚
- 皂
- 蒙
- 世
- 界
- 嘻
- 辈
- Q
- 审
- 尾
- 浇
- 遛
- 馨
- 措
- 邻
- 撒
- 挥
- 遵
- 予
- 击
- 鉴
- 殊
- 哇
- 载
- 添
- 盈
- 盯
- 惊
- 喷
- 荷
- 怠
- 抢
- 喂
- 饱
- 谅
- 团
- 龙
- 冻
- 图
- 掺
- 扑
- 刊
- 葱
- 薄
- 萝
- 卜
- 麦
- 苹
- 触
- 飞
- 艳
- 畅
- 鸡
- 权
- 趟
- 连
- 哭
- 旁
- 漂
- 焊
- 敞
- 叉
- 钢
- 氧
- 溺
- 聊
- 巢
- 衡
- 淀
- 劣
- 虫
- 符
- 均
- 辨
- 菌
- 彻
- 烂
- 厅
- 皱
- 妥
- 拾
- 插
- 携
- 竹
- 碍
- 湿
- 灵
- 忌
- 旅
- 勿
- 宿
- 迷
- 探
- 春
- 劵
- 星
- 耐
- 裤
- 颖
- 韩
- 艾
- 灸
- 邀
- 婚
- 乳
- 芽
- 挑
- 摘
- 阿
- 姨
- 伊
- 慕
- 纯
- 貌
- 嘴
- 偶
- 睛
- 献
- 坚
- 账
- 典
- 唱
- L
- E
- 贡
- 寒
- 唧
- Y
- 尝
- 抹
- 汰
- 腾
- 哼
- 仿
- 英
- 舒
- 扰
- 拒
- 剪
- 夏
- 宠
- 咬
- 派
- 委
- 婉
- 执
- 呗
- 悄
- 搬
- 雪
- 盐
- 暂
- 奸
- 耍
- 僻
- 却
- 署
- 寻
- 串
- 援
- 亏
- 烈
- 印
- 捎
- 幅
- 绘
- 锈
- 闸
- 罪
- 嫌
- 俗
- 歹
- 劳
- 兜
- 喽
- 谓
- 鹤
- 舍
- 克
- 徇
- 倍
- 敏
- 丝
- 纺
- 拭
- 融
- 蔫
- 掂
- 测
- T
- 众
- 卸
- 暗
- 赔
- 偿
- 举
- 劲
- 篮
- 储
- 乙
- 炔
- 软
- 侵
- 诱
- 浊
- 蚀
- 秽
- 炸
- 泽
- 闻
- 鼻
- 甜
- 澈
- 脏
- 官
- 凝
- 芳
- 灰
- 卵
- 农
- 烧
- 肉
- 桌
- 椅
- 垫
- 硬
- 叠
- 瓷
- 碎
- 柄
- 屉
- 拳
- 撞
- 铝
- 歇
- 遗
- 炮
- 掌
- 妨
- 静
- 浸
- 涂
- 凉
- 炫
- 耀
- 姓
- 究
- 奏
- 缆
- 脚
- 酿
- 抄
- 慌
- 戚
- 燥
- 毯
- 挽
- 诺
- 济
- 旺
- 抖
- 郊
- 疗
- 巴
- 痧
- 脊
- 膜
- 晒
- 润
- 掏
- 笔
- 鞭
- 博
- 捧
- 函
- 胡
- 锅
- 雾
- 疯
- 狂
- 趋
- 膏
- 妆
- 尘
- 袋
- 贝
- 俺
- 耽
- 怀
- 恐
- 赋
- 脑
- 焉
- 愣
- 呵
- 噼
- 啪
- 虚
- 河
- 归
- 绊
- 械
- 扬
- 筒
- 靴
- 束
- 彩
- 荐
- 沙
- 迎
- 荡
- 凌
- 昂
- 碑
- 蹦
- 扉
- 泼
- 丰
- 滴
- 沾
- 亭
- 粘
- 奇
- 饼
- 牙
- 娃
- 杯
- 踢
- 嘿
- 抛
- 枯
- 剔
- 苗
- 纹
- 永
- 津
- 唉
- 趁
- 屡
- 逮
- 戒
- 肃
- 仁
- 肇
- 醉
- 糟
- 馈
- 横
- 扭
- 盔
- 侧
- 鲁
- 莽
- 飙
- 稿
- 逐
- 谋
- 京
- 苏
- 宁
- 驻
- 咨
- 旷
- 拓
- 杆
- 秤
- 叮
- 嘱
- 咋
- 炊
- 怪
- 婆
- 阎
- 王
- 饿
- 鬼
- 惨
- 渡
- 坎
- 囤
- 甲
- 蛙
- 鲤
- 桂
- 石
- 玉
- 溪
- 华
- 窝
- 截
- 秩
- 嗨
- 芹
- 梨
- 蕉
- S
- 煲
- 汤
- 鲫
- 揽
- 挡
- 柚
- 瑞
- 匹
- '2'
- 踹
- 吵
- 凶
- 矩
- 迟
- 脾
- 纳
- 朵
- 墨
- 袖
- 链
- 钩
- 笼
- 熄
- 盆
- 殴
- 欺
- 诈
- 厕
- 娱
- 爬
- 威
- 胁
- 阅
- 赌
- 拢
- 症
- 伪
- 脂
- 堪
- 盛
- 蚊
- 蝇
- 煎
- 晰
- 柔
- 涩
- 汁
- 腹
- 胃
- 痉
- 挛
- 颗
- 粒
- 匀
- 败
- 历
- 佳
- 乏
- 寄
- 残
- 杀
- 剂
- 疾
- 衍
- 溅
- 倘
- 褶
- 席
- 启
- 遮
- 槽
- 递
- 橱
- 迹
- 镁
- 泄
- 阀
- 柴
- 阻
- 恋
- 盲
- 浓
- 捂
- 腰
- 姿
- 缝
- 肿
- 焦
- 骗
- 伺
- 嘘
- 掩
- 褥
- 帘
- 籍
- 锥
- 锋
- 尖
- 锐
- 祸
- 秒
- 李
- 伸
- 浏
- 览
- 航
- 讯
- 谨
- 慎
- 匪
- 劫
- 医
- 族
- 忧
- 孤
- 拜
- 窄
- 唯
- 搁
- 朝
- 尺
- 盟
- 波
- 隆
- 词
- 村
- 娶
- 媳
- 县
- 聘
- 醇
- 泡
- 坨
- 淋
- 延
- 柱
- 肾
- 蒸
- 槛
- 赚
- 凡
- 恩
- 厚
- 赞
- 茎
- 蒜
- 苔
- 甘
- 菠
- 涮
- 霾
- 仍
- 云
- 追
- 丽
- 盖
- 欧
- 莱
- 雅
- 婴
- 孕
- 敲
- 约
- 惰
- 谱
- 射
- 惑
- 睹
- 奉
- 诚
- 惶
- 卓
- 勉
- 聪
- 疼
- 弃
- 奴
- 隶
- 嚷
- 眠
- 躺
- 乒
- 乓
- 琴
- 挖
- 掘
- 阵
- 浆
- 索
- 呼
- 古
- 弥
- 熔
- 抱
- 怨
- 猫
- 笑
- 挣
- 黑
- 猛
- 令
- 核
- 磊
- 橙
- 吨
- 吊
- 蘸
- 氮
- 罐
- 战
- 懈
- 渐
- 胜
- 命
- 抬
- 缘
- 睦
- 扮
- 珠
- 颁
- 蔼
- 凳
- 饰
- 缤
- 晶
- 抵
- 遥
- 腿
- 拍
- 妻
- 羽
- 绒
- 梳
- 袄
- 述
- 跆
- 屈
- 脱
- 朗
- 劝
- 胆
- 腔
- 圆
- 亚
- 宴
- 编
- 肢
- 壶
- 暑
- 怒
- 描
- 绕
- 悦
- 忆
- 嗓
- 胖
- 疙
- 瘩
- 哒
- 碴
- 棱
- 炒
- 井
- 漫
- 烘
- 焙
- 涤
- 船
- 纱
- 君
- 茉
- 莉
- 钙
- 瞩
- <_>
- 塌
- 嗷
- 屁
- 股
- 绪
- 勇
- 奋
- 荣
- 诲
- 卑
- 挫
- 昧
- 疲
- 惫
- 册
- 呈
- 僵
- 熬
- 敬
- 呦
- <sos/eos>
init: null
model_conf:
ignore_id: 0
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: /ocean/projects/cis210027p/berrebbi/espnet/egs2/aishell4/asr1/data/nlsyms.txt
cleaner: null
g2p: null
lm: transformer
lm_conf:
pos_enc: null
embed_unit: 128
att_unit: 512
head: 8
unit: 2048
layer: 16
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.3a1
distributed: false
```
</details>
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["aishell4"]}
|
espnet/Dan_Berrebbi_aishell4_asr
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:aishell4",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"zh"
] |
TAGS
#espnet #audio #automatic-speech-recognition #zh #dataset-aishell4 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'Dan\_Berrebbi\_aishell4\_asr'
This model was trained by dan\_berrebbi using aishell4 recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Tue Sep 21 09:36:01 EDT 2021'
* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.3a1'
* pytorch version: 'pytorch 1.9.0'
* Git hash: '7887faeabbc2299922267928e190ed89cb032a36'
+ Commit date: 'Mon Sep 20 16:25:02 2021 -0400'
asr\_fine\_tune5\_100ep
-----------------------
### WER
### CER
### TER
ASR config
----------
expand
## LM config
expand
|
[
"### 'Dan\\_Berrebbi\\_aishell4\\_asr'\n\n\nThis model was trained by dan\\_berrebbi using aishell4 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Sep 21 09:36:01 EDT 2021'\n* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.3a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: '7887faeabbc2299922267928e190ed89cb032a36'\n\t+ Commit date: 'Mon Sep 20 16:25:02 2021 -0400'\n\n\nasr\\_fine\\_tune5\\_100ep\n-----------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"## LM config\nexpand"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #zh #dataset-aishell4 #license-cc-by-4.0 #region-us \n",
"### 'Dan\\_Berrebbi\\_aishell4\\_asr'\n\n\nThis model was trained by dan\\_berrebbi using aishell4 recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Sep 21 09:36:01 EDT 2021'\n* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.3a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: '7887faeabbc2299922267928e190ed89cb032a36'\n\t+ Commit date: 'Mon Sep 20 16:25:02 2021 -0400'\n\n\nasr\\_fine\\_tune5\\_100ep\n-----------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"## LM config\nexpand"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Emiru_Tsunoo/aishell_asr_train_asr_streaming_transformer_raw_zh_char_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4604023/
This model was trained by Emiru Tsunoo using aishell/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
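While the official snippet is pending, a minimal, unofficial sketch following the common `espnet_model_zoo` workflow is shown below; it performs plain offline decoding (the dedicated streaming inference interface is not covered), and the model identifier and audio path are assumptions.
```python
# Unofficial sketch: offline decoding of one Mandarin utterance.
# "sample.wav" is a placeholder; 16 kHz mono audio is assumed.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack(
    "espnet/Emiru_Tsunoo_aishell_asr_train_asr_streaming_transformer_raw_zh_char_sp_valid.acc.ave"
))

speech, rate = soundfile.read("sample.wav")
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```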
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["aishell"]}
|
espnet/Emiru_Tsunoo_aishell_asr_train_asr_streaming_transformer_raw_zh_char_sp_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:aishell",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #automatic-speech-recognition #zh #dataset-aishell #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'Emiru_Tsunoo/aishell_asr_train_asr_streaming_transformer_raw_zh_char_sp_valid.URL'
️ Imported from URL
This model was trained by Emiru Tsunoo using aishell/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'Emiru_Tsunoo/aishell_asr_train_asr_streaming_transformer_raw_zh_char_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by Emiru Tsunoo using aishell/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #zh #dataset-aishell #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'Emiru_Tsunoo/aishell_asr_train_asr_streaming_transformer_raw_zh_char_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by Emiru Tsunoo using aishell/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Hoon_Chung/jsut_asr_train_asr_conformer8_raw_char_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4292742/
This model was trained by Hoon Chung using jsut/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
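Until the snippet above is filled in, a minimal, unofficial sketch along the usual `espnet_model_zoo` lines may look like the following; the model identifier and the input path are assumptions.
```python
# Unofficial sketch: transcribe one Japanese (JSUT) utterance.
# "utterance.wav" is a placeholder at the recipe's sampling rate.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack(
    "espnet/Hoon_Chung_jsut_asr_train_asr_conformer8_raw_char_sp_valid.acc.ave"
))

speech, rate = soundfile.read("utterance.wav")
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```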
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["jsut"]}
|
espnet/Hoon_Chung_jsut_asr_train_asr_conformer8_raw_char_sp_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"ja",
"dataset:jsut",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #automatic-speech-recognition #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'Hoon_Chung/jsut_asr_train_asr_conformer8_raw_char_sp_valid.URL'
️ Imported from URL
This model was trained by Hoon Chung using jsut/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'Hoon_Chung/jsut_asr_train_asr_conformer8_raw_char_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by Hoon Chung using jsut/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #ja #dataset-jsut #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'Hoon_Chung/jsut_asr_train_asr_conformer8_raw_char_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by Hoon Chung using jsut/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Hoon_Chung/zeroth_korean_asr_train_asr_transformer5_raw_bpe_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4014588/
This model was trained by Hoon Chung using zeroth_korean/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
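In place of the pending snippet, a minimal, unofficial decoding sketch via `espnet_model_zoo` follows; the identifier and audio path are assumptions to adapt.
```python
# Unofficial sketch: transcribe one Korean (Zeroth-Korean) utterance.
# "speech.wav" is a placeholder; 16 kHz mono audio is assumed.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack(
    "espnet/Hoon_Chung_zeroth_korean_asr_train_asr_transformer5_raw_bpe_valid.acc.ave"
))

speech, rate = soundfile.read("speech.wav")
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```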
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "kr", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["zeroth_korean"]}
|
espnet/Hoon_Chung_zeroth_korean_asr_train_asr_transformer5_raw_bpe_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"kr",
"dataset:zeroth_korean",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"kr"
] |
TAGS
#espnet #audio #automatic-speech-recognition #kr #dataset-zeroth_korean #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'Hoon_Chung/zeroth_korean_asr_train_asr_transformer5_raw_bpe_valid.URL'
️ Imported from URL
This model was trained by Hoon Chung using zeroth_korean/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'Hoon_Chung/zeroth_korean_asr_train_asr_transformer5_raw_bpe_valid.URL'\n️ Imported from URL\n\nThis model was trained by Hoon Chung using zeroth_korean/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #kr #dataset-zeroth_korean #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'Hoon_Chung/zeroth_korean_asr_train_asr_transformer5_raw_bpe_valid.URL'\n️ Imported from URL\n\nThis model was trained by Hoon Chung using zeroth_korean/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `espnet/Karthik_DSTC2_asr_train_asr_Hubert_transformer`
This model was trained by Karthik using DSTC2/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
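As an unofficial stand-in for the pending snippet, a minimal decoding sketch using the standard `espnet_model_zoo` pattern is given below; the identifier and audio path are assumptions.
```python
# Unofficial sketch: transcribe one DSTC2 utterance with the HuBERT-frontend
# transformer. "call.wav" is a placeholder at the recipe's sampling rate.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack(
    "espnet/Karthik_DSTC2_asr_train_asr_Hubert_transformer"
))

speech, rate = soundfile.read("call.wav")
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```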
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["sinhala"]}
|
espnet/Karthik_DSTC2_asr_train_asr_Hubert_transformer
| null |
[
"espnet",
"tensorboard",
"audio",
"automatic-speech-recognition",
"en",
"dataset:sinhala",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #tensorboard #audio #automatic-speech-recognition #en #dataset-sinhala #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR pretrained model
### 'espnet/Karthik_DSTC2_asr_train_asr_Hubert_transformer'
This model was trained by Karthik using DSTC2/asr1 recipe in espnet
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR pretrained model",
"### 'espnet/Karthik_DSTC2_asr_train_asr_Hubert_transformer'\n\nThis model was trained by Karthik using DSTC2/asr1 recipe in espnet",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #tensorboard #audio #automatic-speech-recognition #en #dataset-sinhala #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR pretrained model",
"### 'espnet/Karthik_DSTC2_asr_train_asr_Hubert_transformer'\n\nThis model was trained by Karthik using DSTC2/asr1 recipe in espnet",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `espnet/Karthik_DSTC2_asr_train_asr_transformer`
This model was trained by Karthik using DSTC2/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
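A minimal, unofficial sketch for this model (same `espnet_model_zoo` pattern; identifier and path are assumptions):
```python
# Unofficial sketch: decode one DSTC2 utterance with the transformer model.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack(
    "espnet/Karthik_DSTC2_asr_train_asr_transformer"
))

speech, rate = soundfile.read("call.wav")  # placeholder input file
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```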
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["sinhala"]}
|
espnet/Karthik_DSTC2_asr_train_asr_transformer
| null |
[
"espnet",
"tensorboard",
"audio",
"automatic-speech-recognition",
"en",
"dataset:sinhala",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #tensorboard #audio #automatic-speech-recognition #en #dataset-sinhala #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR pretrained model
### 'espnet/Karthik_DSTC2_asr_train_asr_transformer'
This model was trained by Karthik using DSTC2/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR pretrained model",
"### 'espnet/Karthik_DSTC2_asr_train_asr_transformer'\n\nThis model was trained by Karthik using DSTC2/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #tensorboard #audio #automatic-speech-recognition #en #dataset-sinhala #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR pretrained model",
"### 'espnet/Karthik_DSTC2_asr_train_asr_transformer'\n\nThis model was trained by Karthik using DSTC2/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `espnet/Karthik_sinhala_asr_train_asr_transformer`
This model was trained by Karthik using sinhala/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
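Pending the official snippet, a minimal, unofficial decoding sketch via `espnet_model_zoo` is shown below; the identifier and input path are assumptions.
```python
# Unofficial sketch: transcribe one Sinhala utterance.
# "speech.wav" is a placeholder at the recipe's sampling rate.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack(
    "espnet/Karthik_sinhala_asr_train_asr_transformer"
))

speech, rate = soundfile.read("speech.wav")
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```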
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["sinhala"]}
|
espnet/Karthik_sinhala_asr_train_asr_transformer
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:sinhala",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-sinhala #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR pretrained model
### 'espnet/Karthik_sinhala_asr_train_asr_transformer'
This model was trained by Karthik using sinhala/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR pretrained model",
"### 'espnet/Karthik_sinhala_asr_train_asr_transformer'\n\nThis model was trained by Karthik using sinhala/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-sinhala #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR pretrained model",
"### 'espnet/Karthik_sinhala_asr_train_asr_transformer'\n\nThis model was trained by Karthik using sinhala/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Shinji_Watanabe/laborotv_asr_train_asr_conformer2_latest33_raw_char_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4304245/
This model was trained by Shinji Watanabe using laborotv/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
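Until the snippet above is available, a minimal, unofficial sketch along the standard `espnet_model_zoo` lines follows; the identifier and audio path are assumptions.
```python
# Unofficial sketch: transcribe one Japanese (LaboroTVSpeech) utterance.
# "broadcast.wav" is a placeholder; 16 kHz mono audio is assumed.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack(
    "espnet/Shinji_Watanabe_laborotv_asr_train_asr_conformer2_latest33_raw_char_sp_valid.acc.ave"
))

speech, rate = soundfile.read("broadcast.wav")
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```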
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "ja", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["laborotv"]}
|
espnet/Shinji_Watanabe_laborotv_asr_train_asr_conformer2_latest33_raw_char_sp_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"ja",
"dataset:laborotv",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"ja"
] |
TAGS
#espnet #audio #automatic-speech-recognition #ja #dataset-laborotv #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'Shinji_Watanabe/laborotv_asr_train_asr_conformer2_latest33_raw_char_sp_valid.URL'
️ Imported from URL
This model was trained by Shinji Watanabe using laborotv/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'Shinji_Watanabe/laborotv_asr_train_asr_conformer2_latest33_raw_char_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using laborotv/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #ja #dataset-laborotv #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'Shinji_Watanabe/laborotv_asr_train_asr_conformer2_latest33_raw_char_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using laborotv/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Shinji_Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best`
♻️ Imported from https://zenodo.org/record/4030677/
This model was trained by Shinji Watanabe using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
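While the official snippet is pending, a minimal, unofficial decoding sketch based on the documented `espnet_model_zoo` workflow is given below; the audio path is a placeholder.
```python
# Unofficial sketch: transcribe one English (LibriSpeech-style) utterance.
# "sample.wav" is a placeholder; 16 kHz mono audio is assumed.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack(
    "espnet/Shinji_Watanabe_librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best"
))

speech, rate = soundfile.read("sample.wav")
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```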
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/Shinji_Watanabe_librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'Shinji_Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL'
️ Imported from URL
This model was trained by Shinji Watanabe using librispeech/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'Shinji_Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'Shinji_Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `Shinji Watanabe/open_li52_asr_train_asr_raw_bpe7000_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4630406/
This model was trained by Shinji Watanabe using gigaspeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
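In place of the pending snippet, a minimal, unofficial sketch via `espnet_model_zoo` follows; the identifier and audio path are assumptions, and the same call works regardless of which of the model's supported languages the input is in.
```python
# Unofficial sketch: transcribe one utterance with the multilingual model.
# "speech.wav" is a placeholder; 16 kHz mono audio is assumed.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack(
    "espnet/Shinji_Watanabe_open_li52_asr_train_asr_raw_bpe7000_valid.acc.ave"
))

speech, rate = soundfile.read("speech.wav")
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```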
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["gigaspeech"]}
|
espnet/Shinji_Watanabe_open_li52_asr_train_asr_raw_bpe7000_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:gigaspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-gigaspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR pretrained model
### 'Shinji Watanabe/open_li52_asr_train_asr_raw_bpe7000_valid.URL'
️ Imported from URL
This model was trained by Shinji Watanabe using gigaspeech/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR pretrained model",
"### 'Shinji Watanabe/open_li52_asr_train_asr_raw_bpe7000_valid.URL'\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using gigaspeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-gigaspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR pretrained model",
"### 'Shinji Watanabe/open_li52_asr_train_asr_raw_bpe7000_valid.URL'\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using gigaspeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4585546/
This model was trained by Shinji Watanabe using spgispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
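Pending the official snippet, a minimal, unofficial sketch is shown below; since the Hugging Face repository name is truncated, the Zenodo-style model name from the heading is passed to the downloader, which is an assumption to verify, as is the audio path.
```python
# Unofficial sketch: transcribe one English (SPGISpeech) utterance.
# "earnings_call.wav" is a placeholder; 16 kHz mono audio is assumed.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack(
    "Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_valid.acc.ave"
))

speech, rate = soundfile.read("earnings_call.wav")
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```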
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["spgispeech"]}
|
espnet/Shinji_Watanabe_spgispeech_asr_train_asr_conformer6_n_fft512_hop_lengt-truncated-f1ac86
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:spgispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-spgispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_valid.URL'
️ Imported from URL
This model was trained by Shinji Watanabe using spgispeech/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_valid.URL'\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using spgispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-spgispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_valid.URL'\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using spgispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_unnorm_bpe5000_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4585558/
This model was trained by Shinji Watanabe using spgispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
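A minimal, unofficial sketch for the unnormalized-text variant (same caveats as above: the Zenodo-style name and the audio path are assumptions, since the Hugging Face repository name is truncated):
```python
# Unofficial sketch: decode with the unnormalized-transcript SPGISpeech model.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack(
    "Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_unnorm_bpe5000_valid.acc.ave"
))

speech, rate = soundfile.read("earnings_call.wav")  # placeholder input file
text, tokens, token_ints, hyp = speech2text(speech)[0]
print(text)
```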
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en_unnorm", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["spgispeech"]}
|
espnet/Shinji_Watanabe_spgispeech_asr_train_asr_conformer6_n_fft512_hop_lengt-truncated-a013d0
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:spgispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en_unnorm"
] |
TAGS
#espnet #audio #automatic-speech-recognition #dataset-spgispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_unnorm_bpe5000_valid.URL'
️ Imported from URL
This model was trained by Shinji Watanabe using spgispeech/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_unnorm_bpe5000_valid.URL'\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using spgispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #dataset-spgispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'Shinji_Watanabe/spgispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_unnorm_bpe5000_valid.URL'\n️ Imported from URL\n\nThis model was trained by Shinji Watanabe using spgispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
audio-to-audio
|
espnet
|
## ESPnet2 ENH model
### `espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw`
This model was trained by Wangyou Zhang using chime4 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/chime4/enh1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw
```
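For programmatic use, a minimal enhancement sketch follows, assuming the model has been unpacked locally; the config/checkpoint paths and the input wav are placeholders, not recipe outputs.
```python
# Minimal sketch: apply the trained MVDR beamformer to one multi-channel
# CHiME-4 mixture. File paths below are assumptions.
import soundfile
from espnet2.bin.enh_inference import SeparateSpeech

separate_speech = SeparateSpeech(
    train_config="exp/enh_train_enh_beamformer_mvdr_raw/config.yaml",          # assumed path
    model_file="exp/enh_train_enh_beamformer_mvdr_raw/valid.si_snr.best.pth",  # assumed path
    device="cpu",
)

mix, fs = soundfile.read("mixture_6ch.wav")        # (samples, channels), 16 kHz
enhanced = separate_speech(mix[None, ...], fs=fs)  # list with one (1, samples) array
soundfile.write("enhanced.wav", enhanced[0].squeeze(), fs)
```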
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_beamformer_mvdr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/enh_train_enh_beamformer_mvdr_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 2
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 35841
dist_launcher: null
multiprocessing_distributed: true
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: 4
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
unused_parameters: false
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
pretrain_path: null
init_param: []
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_16k/train/speech_mix_shape
- exp/enh_stats_16k/train/speech_ref1_shape
- exp/enh_stats_16k/train/noise_ref1_shape
valid_shape_file:
- exp/enh_stats_16k/valid/speech_mix_shape
- exp/enh_stats_16k/valid/speech_ref1_shape
- exp/enh_stats_16k/valid/noise_ref1_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr05_simu_isolated_6ch_track/noise1.scp
- noise_ref1
- sound
valid_data_path_and_name_and_type:
- - dump/raw/dt05_simu_isolated_6ch_track/wav.scp
- speech_mix
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/spk1.scp
- speech_ref1
- sound
- - dump/raw/dt05_simu_isolated_6ch_track/noise1.scp
- noise_ref1
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
eps: 1.0e-08
weight_decay: 0
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 1
init: xavier_uniform
model_conf:
loss_type: mask_mse
mask_type: PSM^2
use_preprocessor: false
encoder: stft
encoder_conf:
n_fft: 512
hop_length: 128
separator: wpe_beamformer
separator_conf:
num_spk: 1
loss_type: mask_mse
use_wpe: false
wnet_type: blstmp
wlayers: 3
wunits: 300
wprojs: 320
wdropout_rate: 0.0
taps: 5
delay: 3
use_dnn_mask_for_wpe: true
use_beamformer: true
bnet_type: blstmp
blayers: 3
bunits: 512
bprojs: 512
badim: 320
ref_channel: 3
use_noise_mask: true
beamformer_type: mvdr_souden
bdropout_rate: 0.0
decoder: stft
decoder_conf:
n_fft: 512
hop_length: 128
required:
- output_dir
version: 0.9.7
distributed: true
```
</details>
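As a quick sanity check on the `stft` encoder/decoder settings above, the time-frequency resolution they imply at the recipe's 16 kHz sampling rate works out as follows (illustrative arithmetic only):
```python
# Illustrative arithmetic for n_fft=512, hop_length=128 at fs=16 kHz.
fs = 16000
n_fft = 512
hop_length = 128

print(f"analysis window: {1000 * n_fft / fs:.1f} ms")       # 32.0 ms
print(f"frame shift:     {1000 * hop_length / fs:.1f} ms")  # 8.0 ms
print(f"frequency bins:  {n_fft // 2 + 1}")                 # 257 bins per frame
```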
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
booktitle={Proc. IEEE Spoken Language Technology Workshop (SLT)},
pages={785--792},
year={2021},
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
@inproceedings{li2021espnetse,
title={{ESPnet-SE}: End-to-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
author={Li, Chenda and Shi, Jing and Zhang, Wangyou and Subramanian, Aswin Shanmugam and Chang, Xuankai and Kamo, Naoyuki and Hira, Moto and Hayashi, Tomoki and Boeddeker, Christoph and Chen, Zhuo and Watanabe, Shinji},
year={2020},
eprint={2011.03706},
archivePrefix={arXiv},
primaryClass={eess.AS}
}
```
|
{"license": "cc-by-4.0", "tags": ["espnet", "audio", "audio-to-audio"], "datasets": ["chime4"]}
|
espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw
| null |
[
"espnet",
"audio",
"audio-to-audio",
"dataset:chime4",
"arxiv:1804.00015",
"arxiv:2011.03706",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015",
"2011.03706"
] |
[] |
TAGS
#espnet #audio #audio-to-audio #dataset-chime4 #arxiv-1804.00015 #arxiv-2011.03706 #license-cc-by-4.0 #region-us
|
## ESPnet2 ENH model
### 'espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw'
This model was trained by Wangyou Zhang using chime4 recipe in espnet.
### Demo: How to use in ESPnet2
## ENH config
<details><summary>expand</summary>
</details>
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ENH model",
"### 'espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw'\n\nThis model was trained by Wangyou Zhang using chime4 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## ENH config\n\n<details><summary>expand</summary>\n\n\n\n</details>",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #audio-to-audio #dataset-chime4 #arxiv-1804.00015 #arxiv-2011.03706 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ENH model",
"### 'espnet/Wangyou_Zhang_chime4_enh_train_enh_beamformer_mvdr_raw'\n\nThis model was trained by Wangyou Zhang using chime4 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"## ENH config\n\n<details><summary>expand</summary>\n\n\n\n</details>",
"### Citing ESPnet\n\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer`
This model was trained by Yushi Ueda using iemocap recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout dfa2868243a897c2a6c34b7407eaea5e4b5508a5
pip install -e .
cd egs2/iemocap/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer
```
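Because the sentiment label (`Positive`, `Neutral`, `Negative`) is emitted as an ordinary word token in the hypothesis (see the token list in the config below), decoding and label extraction can be sketched as follows; the model-zoo name and wav path are assumptions.
```python
# Hedged sketch: decode one utterance and pull the sentiment tag out of the
# word-level hypothesis. Model name and wav path are assumptions.
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack("espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer"),
    device="cpu",
)

speech, rate = soundfile.read("utterance.wav")
text, *_ = speech2text(speech)[0]

labels = {"Positive", "Neutral", "Negative"}
sentiment = next((w for w in text.split() if w in labels), None)
transcript = " ".join(w for w in text.split() if w not in labels)
print(sentiment, "|", transcript)
```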
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Feb 17 11:25:22 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `f6cde1c419c814a14ccd40abe557a780508cbcdf`
- Commit date: `Fri Feb 11 12:25:33 2022 -0500`
## Using Conformer based encoder and Transformer based decoder with spectral augmentation and predicting transcript along with sentiment
- ASR config: [conf/tuning/train_asr_conformer.yaml](conf/tuning/train_asr_conformer.yaml)
- token_type: word
- labels: Positive, Neutral, Negative
|dataset|Snt|Sentiment Classification Macro F1 (%)| Weighted F1 (%)| Micro F1 (%)|
|---|---|---|---|---|
|decode_asr_model_valid.acc.ave_10best/valid|754|53.9|65.7|66.4|
|decode_asr_model_valid.acc.ave_10best/test|1650|50.3|54.5|55.7|
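The three columns aggregate per-class F1 differently: macro averages the classes equally, weighted scales each class by its support, and micro pools all decisions. A toy illustration with scikit-learn (not part of the recipe):
```python
# Toy illustration of macro / weighted / micro F1 (scikit-learn).
from sklearn.metrics import f1_score

y_true = ["Positive", "Negative", "Neutral", "Negative", "Negative"]
y_pred = ["Positive", "Negative", "Negative", "Negative", "Neutral"]

print(f1_score(y_true, y_pred, average="macro"))     # unweighted class mean
print(f1_score(y_true, y_pred, average="weighted"))  # weighted by class support
print(f1_score(y_true, y_pred, average="micro"))     # pooled over all decisions
```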
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 200
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 64
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0005
scheduler: warmuplr
scheduler_conf:
warmup_steps: 5000
token_list:
- <blank>
- <unk>
- i
- you
- Negative
- to
- it
- '''s'
- the
- '''t'
- that
- and
- Neutral
- Positive
- a
- know
- what
- of
- like
- we
- don
- just
- is
- do
- this
- '''m'
- me
- have
- can
- in
- for
- 'no'
- so
- not
- '''re'
- my
- but
- mean
- be
- going
- all
- was
- they
- well
- want
- yeah
- right
- get
- 'on'
- there
- he
- oh
- here
- go
- out
- with
- your
- if
- okay
- are
- she
- at
- '''ll'
- '''ve'
- got
- think
- about
- up
- see
- then
- why
- how
- time
- really
- one
- now
- or
- as
- back
- look
- her
- him
- been
- because
- 'yes'
- would
- didn
- little
- did
- good
- some
- them
- something
- need
- maybe
- never
- um
- come
- take
- god
- had
- could
- will
- uh
- am
- people
- thing
- when
- very
- let
- much
- sorry
- from
- again
- long
- give
- anything
- too
- make
- fish
- years
- where
- isn
- three
- said
- things
- nothing
- help
- work
- tell
- guess
- over
- 'off'
- business
- even
- sir
- any
- his
- around
- were
- way
- who
- new
- kind
- '''d'
- our
- everything
- more
- came
- an
- should
- down
- understand
- only
- great
- else
- man
- line
- us
- ask
- last
- doing
- say
- waiting
- other
- lot
- job
- feel
- yourself
- point
- thought
- day
- whole
- away
- coming
- better
- marry
- always
- these
- still
- wrong
- two
- sure
- care
- phone
- probably
- remember
- annie
- life
- year
- believe
- gonna
- supposed
- went
- first
- talk
- listen
- alright
- before
- thinking
- after
- stuff
- happy
- ever
- turn
- thank
- home
- fine
- into
- than
- call
- money
- stay
- actually
- every
- hope
- love
- huh
- married
- wait
- somewhere
- has
- being
- father
- larry
- hell
- wanted
- trying
- getting
- guys
- name
- saying
- bag
- hear
- girl
- hey
- flashlight
- beach
- put
- leave
- dollars
- mind
- augie
- does
- won
- fifty
- excited
- hate
- four
- done
- through
- their
- keep
- car
- lost
- doesn
- happen
- wouldn
- school
- big
- calm
- night
- '''cause'
- id
- another
- though
- myself
- nobody
- somebody
- best
- might
- same
- form
- mom
- nice
- matter
- spot
- stop
- told
- by
- shut
- enough
- five
- joe
- hard
- find
- course
- chris
- drunk
- snap
- luggage
- rather
- standing
- someone
- laugh
- took
- those
- please
- live
- six
- ridiculous
- minute
- looking
- bring
- show
- start
- brought
- days
- must
- pretty
- sort
- talking
- sand
- child
- working
- send
- next
- hundred
- whatever
- many
- moon
- moment
- champagne
- s
- problem
- end
- real
- dear
- happened
- person
- place
- fill
- awesome
- house
- such
- cool
- c
- haven
- knew
- die
- finally
- glasses
- stupid
- least
- dad
- supervisor
- totally
- each
- try
- waited
- idea
- u
- party
- asked
- anymore
- sick
- evening
- license
- kid
- wow
- flight
- felt
- pay
- since
- single
- miss
- without
- different
- mmhmm
- free
- sometimes
- yet
- couldn
- view
- hour
- knows
- drive
- themselves
- swim
- ah
- brandy
- fact
- ma
- '''am'
- already
- part
- sit
- thanks
- comes
- check
- everyone
- started
- kiss
- weren
- hotel
- own
- beast
- bad
- above
- run
- worst
- grunions
- darling
- seem
- baby
- turned
- gone
- shouldn
- exactly
- reason
- full
- both
- crazy
- pack
- bit
- swimming
- liquor
- seemed
- serious
- cause
- peter
- burden
- gosh
- forgot
- happens
- alone
- pass
- letters
- heard
- manager
- hours
- baggage
- card
- number
- argue
- seen
- walk
- forget
- kids
- family
- blanket
- honey
- open
- quite
- gotta
- forms
- mother
- old
- needs
- times
- airline
- which
- once
- service
- week
- together
- twenty
- stand
- made
- fun
- dead
- sake
- men
- kate
- today
- plane
- most
- carla
- driving
- deal
- information
- wanna
- definitely
- while
- yea
- certificate
- particular
- lots
- calling
- fortune
- write
- entire
- found
- trouble
- use
- forever
- woman
- enjoy
- room
- damn
- war
- meaning
- longer
- jacket
- ticket
- twice
- sent
- wonder
- small
- amanda
- cannot
- able
- half
- ha
- saw
- bus
- ago
- hmm
- hi
- kidding
- giving
- gave
- move
- women
- ahead
- york
- guy
- suppose
- company
- incredible
- either
- minutes
- tonight
- shoes
- utterly
- wasn
- filled
- gets
- amazing
- beautiful
- hello
- birth
- prove
- choice
- friend
- expect
- says
- blue
- anywhere
- died
- weird
- umm
- blood
- d
- face
- body
- alive
- diagram
- goes
- read
- far
- race
- wind
- fly
- interested
- california
- coast
- news
- past
- charles
- floor
- idiotic
- indeed
- absolutely
- softball
- answer
- somehow
- having
- campus
- completely
- file
- everybody
- given
- fair
- front
- telling
- tried
- sign
- helping
- dollar
- used
- takes
- hair
- behind
- head
- also
- question
- pull
- brother
- nonsense
- kill
- pocket
- cold
- mine
- watching
- shall
- divorce
- driver
- m
- makes
- cried
- security
- suitcase
- seems
- control
- set
- letter
- realized
- paper
- weeks
- address
- sweet
- lose
- huge
- death
- ones
- living
- glad
- bed
- until
- thinks
- wedding
- pieces
- parents
- ready
- almost
- forgive
- kissed
- silver
- during
- forty
- lives
- grow
- arrive
- eyes
- putting
- quiet
- poor
- presents
- sting
- tired
- row
- anyhow
- window
- v
- thousand
- watch
- ashamed
- figure
- vacation
- application
- left
- certainly
- calls
- months
- student
- close
- helpful
- called
- welcome
- major
- match
- morning
- fit
- reach
- door
- wife
- faith
- noticed
- several
- killed
- accident
- rat
- flop
- hands
- ear
- dancing
- hairs
- bugging
- dinner
- bills
- worked
- bored
- conversation
- tunis
- overbearing
- grand
- nine
- amusing
- vile
- tempered
- obviously
- tomorrow
- taken
- eight
- venice
- worth
- boy
- realize
- midnight
- evil
- sixteen
- gotten
- paying
- bottle
- smart
- cindy
- excuse
- along
- seven
- children
- figured
- jobs
- joke
- charge
- memorial
- sitting
- hardly
- young
- story
- feels
- pronouncing
- insane
- forgotten
- fast
- inspire
- grub
- tough
- arguing
- air
- toss
- instance
- raining
- pair
- dry
- socks
- selfish
- included
- yours
- mystery
- mindedness
- urgency
- pure
- urge
- insulting
- ideas
- herself
- period
- missed
- backwards
- dance
- worms
- pop
- except
- perfect
- blow
- funny
- listening
- sadistic
- bully
- cruel
- 'true'
- second
- acting
- lucky
- handle
- loved
- hit
- shaking
- destroyed
- changed
- book
- eleven
- animals
- ice
- cream
- brings
- frustrating
- otherwise
- onto
- pregnant
- operator
- baltimore
- san
- diego
- contract
- brown
- friends
- pictures
- internet
- piece
- high
- anyone
- tickets
- inconvenience
- gift
- usually
- green
- city
- couple
- chuck
- growing
- pick
- throw
- yay
- walking
- grave
- considerate
- inspired
- looked
- mistake
- believes
- avoid
- sucker
- rock
- strangers
- missing
- hide
- geez
- imagination
- overseas
- command
- earth
- monument
- difference
- zipped
- kansas
- reservations
- ahh
- formed
- barefoot
- shower
- running
- garage
- knickerbocker
- locker
- wasting
- roses
- peaches
- rosy
- mention
- shh
- behave
- exquisitely
- beautifully
- rolling
- biting
- scratching
- panthers
- suddenly
- ought
- dreadfully
- pity
- eye
- world
- making
- bark
- roll
- hoops
- insufferable
- weak
- upstairs
- insist
- boorish
- conceited
- impossible
- torment
- brute
- perfectly
- wicked
- crawling
- top
- wish
- wants
- bank
- plan
- soon
- plenty
- bags
- congratulations
- play
- carry
- ignore
- sudden
- refrigerator
- loot
- fight
- lights
- swallows
- goose
- bumps
- keeps
- fighting
- massive
- celebration
- sex
- human
- ours
- light
- minded
- social
- needed
- anyway
- words
- problems
- claim
- reimburse
- checked
- airport
- meet
- e
- responsibility
- grunion
- knees
- thousands
- important
- shows
- goddamn
- strong
- law
- sara
- brent
- passport
- aren
- month
- romantic
- leaving
- random
- applied
- interesting
- regular
- taking
- harder
- hurt
- movie
- freaking
- record
- airlines
- responsible
- honestly
- grew
- proud
- hang
- mrs
- fellow
- terrible
- contradict
- infuriate
- throws
- afraid
- suffer
- bloody
- settled
- thrash
- may
- son
- faithful
- moments
- act
- sleep
- detroit
- planning
- yard
- particularly
- natural
- phenomenon
- highlight
- flopping
- laying
- eggs
- mating
- orgy
- magic
- unexplainable
- instincts
- seaweed
- instinctual
- firecracker
- spent
- clasped
- intimate
- special
- wishes
- seriously
- refreshments
- ooh
- pinpoint
- marge
- dishes
- fat
- ring
- later
- shivers
- spine
- sillier
- poise
- trumpets
- squeakers
- sockets
- allure
- contrary
- violently
- glass
- temperamental
- fiend
- loathe
- adder
- riotous
- mentioned
- intemperate
- tots
- downstairs
- mad
- loose
- lived
- yelling
- happening
- promise
- known
- exciting
- finish
- college
- atlanta
- searching
- fired
- drinking
- jesus
- lock
- plans
- hole
- santa
- kitchen
- invite
- believing
- ann
- landing
- eats
- panties
- sore
- throat
- unmistakable
- capistrano
- lemmings
- cliffs
- invitation
- map
- heaven
- carpet
- poodle
- suicide
- pact
- turns
- court
- dies
- mustn
- vampire
- identification
- places
- danger
- hand
- middle
- situation
- option
- willing
- paid
- horrible
- pain
- anybody
- paperwork
- difficult
- dream
- sakes
- matters
- toes
- become
- habit
- hold
- survive
- break
- babe
- shit
- contact
- land
- water
- transfer
- backersen
- desk
- wallet
- stolen
- credit
- cards
- clearly
- appreciate
- complicated
- uhuh
- bucks
- win
- theatre
- resume
- riding
- helps
- less
- planes
- means
- future
- ran
- red
- wrote
- loans
- spend
- dreaming
- proof
- shooting
- crack
- cracked
- dares
- invited
- breaks
- embarrassed
- wondering
- aw
- style
- granted
- embarrassing
- mixed
- su
- spawning
- stubbed
- toe
- bodies
- expectantly
- meant
- beginning
- traumatized
- freda
- sooner
- applies
- philosophers
- rots
- trivial
- torture
- stiff
- venom
- fangs
- wake
- bended
- voice
- build
- unbelievable
- hiring
- resumes
- eventually
- aggressive
- awhile
- especially
- further
- mass
- pointless
- claus
- neither
- mmm
- cannes
- figures
- burnt
- debate
- exception
- busy
- safe
- possible
- spring
- starting
- buy
- rest
- office
- complaint
- accepted
- ten
- area
- seats
- foam
- vibrations
- drives
- popped
- slightly
- exaggerated
- scientific
- proposed
- bathroom
- awful
- scene
- adders
- afford
- packet
- forward
- customer
- brand
- yellow
- fifteen
- brian
- asking
- percent
- girlfriend
- acceptance
- patient
- patience
- dishonest
- cheese
- restaurant
- t
- sixty
- direct
- holiday
- inn
- refund
- hmmm
- receiving
- sim
- browns
- unacceptable
- northwest
- dorky
- putt
- change
- filling
- z
- x
- simple
- mail
- request
- raise
- town
- hadn
- played
- pennies
- visa
- visit
- loves
- list
- environment
- frustrated
- ride
- imagine
- flew
- nash
- replace
- paris
- personal
- issue
- flights
- track
- angry
- headstone
- cemetery
- cancer
- poetry
- palm
- l
- dropped
- bunch
- p
- chair
- broke
- o
- allow
- nights
- talent
- ignoring
- center
- lovely
- sneaking
- whose
- es
- naturally
- stays
- wide
- bought
- arm
- exact
- curtsy
- wiggle
- superficial
- paint
- naked
- vendome
- rouser
- younger
- jealous
- fascinating
- duty
- photographer
- studio
- cad
- restraint
- ill
- knee
- applying
- questions
- picture
- fake
- apartment
- cash
- drink
- upset
- sending
- flying
- speak
- details
- wherever
- unfortunate
- education
- leaves
- basically
- hospital
- messed
- sounds
- pinch
- malibu
- drop
- team
- professional
- till
- ambiguous
- seeing
- ugh
- wet
- heading
- release
- fire
- inside
- pr
- includes
- rub
- ludicrous
- wriggle
- flippancy
- acid
- sweetness
- curling
- dressing
- gown
- broach
- enjoyable
- original
- '''em'
- early
- ok
- daughter
- age
- steps
- rejected
- starts
- competitive
- hired
- worse
- itself
- nowhere
- unfortunately
- process
- fault
- decision
- package
- easy
- transferred
- straight
- suckers
- none
- returning
- throwing
- cork
- softest
- breathe
- road
- catch
- threw
- canal
- comb
- towels
- sacred
- savor
- delight
- needn
- late
- web
- website
- rough
- daddy
- talked
- feeling
- talented
- interview
- food
- looks
- misplaced
- theft
- likely
- stuck
- tags
- cult
- everywhere
- menu
- choose
- press
- lady
- bill
- department
- online
- immediately
- miles
- notice
- vote
- heavens
- yell
- anna
- tables
- hasn
- stole
- losing
- unfair
- positive
- boston
- celebrate
- system
- turning
- newspapers
- pays
- dare
- jokes
- swine
- demand
- building
- finished
- staying
- cheap
- anyways
- okey
- lobster
- wonderful
- harvard
- engineering
- summer
- lawyer
- mr
- lax
- delta
- funeral
- report
- property
- whoever
- corporate
- miso
- soup
- holy
- olivia
- camera
- power
- sold
- testing
- greens
- explain
- agreement
- undecided
- access
- babies
- street
- vegas
- slot
- honeymoon
- husband
- penny
- slots
- wheel
- cat
- citizenship
- england
- fan
- spending
- craig
- services
- monster
- baloney
- saving
- necessarily
- carousel
- cameras
- airplane
- sentimental
- value
- incredibly
- shopping
- jet
- clothes
- apologize
- allowed
- amount
- candy
- redlands
- sprinklers
- whenever
- brain
- park
- holding
- memorized
- surgery
- audience
- joy
- scholarships
- commuting
- h
- ruined
- mm
- bet
- neighborhood
- sticking
- woo
- teach
- class
- confused
- clock
- foolish
- ocean
- distinctly
- whispered
- wishing
- white
- elliott
- strange
- quest
- ultimate
- truth
- shan
- word
- disagreeable
- wench
- birthday
- national
- thin
- rent
- colors
- citizen
- account
- '''til'
- hire
- short
- fuse
- america
- audition
- sponge
- language
- arriving
- reimbursement
- computer
- cover
- ass
- dealing
- quick
- freaks
- pitch
- hitting
- housing
- force
- scholarship
- dirty
- depends
- helicopter
- wild
- sport
- games
- streets
- although
- mi
- trust
- cracker
- curtsey
- bicker
- irons
- besides
- splendid
- born
- weekends
- letting
- tear
- apart
- touch
- flipped
- hot
- outside
- flowers
- candles
- approve
- surprised
- lead
- ends
- worthless
- apparently
- worker
- annoy
- belongings
- disappeared
- under
- case
- checking
- admit
- risk
- agreed
- yesterday
- country
- financial
- aid
- within
- automated
- systems
- specific
- rate
- star
- aisle
- afternoon
- maui
- machine
- waste
- available
- confirmed
- thinkin
- liked
- kicked
- intermittently
- burned
- desire
- fade
- passion
- laughable
- cunning
- mirrors
- painted
- wooden
- snake
- suspicious
- nosey
- silly
- wonders
- order
- standard
- site
- sense
- dangerous
- cute
- whether
- considering
- opinion
- f
- few
- guarantee
- possessions
- claims
- sue
- easier
- cared
- expected
- trip
- europe
- its
- circles
- large
- store
- macy
- rotary
- instead
- showed
- hundreds
- planned
- someplace
- sensitive
- popping
- opened
- backrub
- fantasy
- damned
- sheet
- cut
- purchase
- amy
- quit
- clapping
- onstage
- eighteen
- auditioning
- rejection
- prepared
- thirty
- master
- kelly
- natalie
- pants
- isabella
- verizon
- goodbye
- fucking
- challenge
- slept
- created
- checkbook
- argument
- uhh
- perhaps
- loath
- complete
- sad
- priorities
- between
- moving
- song
- temporary
- pulling
- smith
- receptionist
- extra
- lodging
- eh
- la
- cost
- boss
- peanuts
- doctor
- production
- downtown
- april
- contracts
- incompetent
- realtor
- fix
- payphone
- verify
- electrical
- outage
- symptoms
- nature
- pilot
- hook
- realizes
- bother
- trade
- event
- meadow
- faint
- blues
- bananas
- overnight
- station
- attention
- purchasing
- terms
- taser
- excellent
- counsel
- sorority
- golfing
- library
- dork
- taco
- branch
- separate
- sacrifices
- mothers
- kicking
- videotape
- stream
- sitters
- moved
- computers
- machines
- bride
- cruise
- likes
- tabs
- plays
- giant
- renamed
- brenda
- lumber
- janet
- state
- quarters
- costs
- escort
- reliable
- board
- posting
- trail
- following
- fantastic
- mighty
- recommending
- generally
- outline
- affords
- save
- carpool
- frustration
- refuse
- anger
- fourth
- lines
- fourteen
- mileage
- candid
- packed
- replaced
- expensive
- lawsuit
- cruising
- bruising
- president
- mistakenly
- behalf
- listed
- liable
- held
- sean
- badge
- employee
- impression
- cemeteries
- urban
- oasis
- wandering
- hers
- pathetic
- ground
- stones
- tumors
- heather
- built
- prospect
- garden
- section
- parties
- feet
- poems
- curly
- tree
- crown
- john
- dunn
- begin
- wheelchair
- reciting
- envelope
- grants
- mold
- minds
- mess
- rapper
- ho
- masters
- teacher
- dash
- popular
- seasoning
- messing
- ruin
- woke
- darkest
- beating
- bush
- porch
- fresh
- rooms
- sweetest
- pets
- cheeked
- brooch
- however
- jones
- voices
- berating
- christmas
- shame
- bunker
- guard
- spread
- companies
- shipping
- shock
- group
- dual
- unattached
- engagement
- sock
- dude
- lucked
- blush
- beige
- loaded
- craziest
- offered
- spoke
- english
- accent
- illegal
- jail
- caught
- hardcore
- tropical
- bahamas
- tahiti
- wealthy
- royalty
- removed
- attitude
- extremely
- hostile
- cutting
- sentence
- jumping
- produce
- field
- shake
- across
- soaked
- dying
- georgia
- educated
- boarding
- attendance
- seat
- offer
- publicize
- abuse
- insinuating
- smug
- mouth
- tossing
- hanky
- black
- wheels
- easily
- overhead
- compartment
- data
- collecting
- lip
- coffee
- smoking
- cigarettes
- union
- differently
- numb
- sickness
- boom
- mortality
- affecting
- slow
- books
- per
- diem
- victorian
- houses
- west
- sider
- commute
- practice
- neon
- softballs
- glow
- co
- ed
- nationally
- ranked
- ping
- pong
- denigrate
- rookie
- donuts
- recently
- pitcher
- hitter
- mostly
- shortstop
- ex
- trojans
- sports
- nicer
- monica
- player
- type
- helipad
- fell
- literally
- doubt
- cares
- mustache
- papers
- crying
- floorboards
- sorted
- everyday
- seas
- bringing
- sacrifice
- guilty
- opening
- return
- jumped
- distinctively
- direction
- tiny
- action
- passed
- cheeks
- darn
- urgh
- restrain
- self
- centered
- registration
- lunch
- documents
- identifications
- deadline
- carries
- official
- documentation
- government
- wireless
- crucial
- pulls
- kinda
- girly
- radiant
- ya
- shine
- invitations
- response
- mcdonald
- level
- member
- pavement
- indicators
- prejudice
- against
- applications
- hating
- physically
- amateur
- crawl
- dumber
- cases
- etiquette
- bug
- opinions
- magically
- irresponsible
- carrousel
- contents
- main
- liability
- provides
- shops
- reimbursed
- investigate
- provide
- uncommon
- johnny
- conscious
- stories
- africa
- image
- hurts
- goout
- gradual
- impact
- subside
- heals
- parts
- football
- recognizable
- accomplished
- prestige
- load
- worrying
- decide
- tour
- friendly
- ivy
- walls
- collegiate
- g
- choices
- math
- prestigious
- departments
- orientation
- graduate
- shiloh
- valued
- customers
- previous
- purchases
- scheduling
- highly
- discounted
- uses
- corporation
- hotels
- rated
- aisles
- switch
- fortunately
- allows
- spare
- shuttle
- appropriate
- traveling
- deals
- shuttles
- sleeps
- gee
- futile
- moralists
- unbearable
- flippant
- shibboleths
- rush
- madly
- piazza
- iron
- dri
- counter
- applica
- lonely
- disappear
- video
- definitive
- magazine
- boyfriend
- stage
- golly
- concert
- crew
- freak
- guaranteed
- nervous
- hah
- persistence
- factors
- types
- male
- female
- consideration
- cooking
- reconsidering
- uhm
- retirement
- foot
- persistent
- table
- skewed
- painting
- outer
- employment
- unlucky
- planet
- normal
- peoples
- reading
- difficulties
- loading
- mishap
- cart
- shipped
- tracking
- reim
- tight
- error
- continue
- 'false'
- compensate
- policy
- gifts
- nobodies
- tag
- originally
- shoe
- core
- memories
- kathy
- lasted
- gary
- closed
- surreal
- troops
- loving
- los
- angeles
- schools
- kinds
- secrets
- explore
- rip
- nuts
- champions
- leaning
- towards
- communications
- broad
- confined
- ropes
- recording
- depending
- leads
- bypass
- zero
- pleasant
- ebay
- bye
- steve
- hint
- asks
- tone
- pretend
- protection
- rid
- submit
- print
- regarding
- grievance
- sites
- protected
- processed
- careful
- secure
- unreliable
- trash
- kept
- spotting
- certain
- specifically
- pushing
- headed
- ears
- watched
- sends
- ceaseless
- wear
- often
- pleasure
- sonya
- promoted
- nurses
- mommy
- va
- videotaped
- cousin
- postpone
- performance
- swear
- cast
- spotlight
- microphone
- tripped
- surprise
- scored
- points
- members
- loser
- marrying
- weddings
- carats
- lousy
- chaperone
- drowsy
- deserve
- cry
- tears
- happiness
- marriage
- commercials
- refection
- financially
- studied
- passing
- russel
- crowe
- pooling
- funds
- owe
- learning
- role
- auditions
- denny
- tip
- teaching
- oof
- france
- steal
- keys
- laughing
- rosenkrantz
- thingy
- bopper
- limit
- whoa
- ways
- suffered
- disease
- handsome
- gifted
- parent
- ripped
- uveny
- tricia
- chemo
- baseball
- benny
- nat
- nation
- bread
- eat
- beer
- dorm
- sometime
- mattresses
- reserved
- grauman
- scale
- whooooo
- acti
- film
- art
- academy
- films
- fuck
- ethiopia
- cuddle
- profanity
- provider
- satellites
- average
- compensating
- unbeknownst
- satellite
- exaggerate
- advising
- addressed
- fax
- dumb
- fritz
- incoming
- million
- grown
- fella
- shootin
- travel
- sat
- instinct
- goosebumps
- arms
- danced
- intimately
- spart
- strumpets
- bristling
- diamonds
- taste
- portion
- side
- stairs
- condescending
- copy
- proceed
- remove
- missy
- behaving
- sweetie
- deploy
- specialist
- increase
- triple
- promotion
- retire
- quiets
- faster
- career
- lame
- drew
- barrymore
- nasty
- mouse
- cheesy
- jane
- tarzan
- engaged
- esmeralda
- hitched
- spontaneous
- character
- conga
- dim
- pulled
- chucky
- sarah
- guiding
- graduated
- apply
- colleges
- energy
- busing
- clerk
- excuses
- qualified
- chang
- investment
- banking
- deloitte
- touche
- temp
- degrading
- smarter
- astronaut
- biomedical
- internship
- plus
- breaking
- evicting
- typing
- shoot
- degree
- science
- club
- joking
- doomed
- maryland
- cooperate
- emergency
- pounds
- urn
- deduction
- sherlock
- holmes
- vessel
- burst
- caption
- therefore
- placed
- firing
- lobby
- fastest
- ibm
- misplace
- count
- hanging
- explanation
- follow
- footsteps
- overboard
- paralyzed
- coma
- fucked
- studying
- countries
- goal
- met
- greatest
- hopefully
- mmmm
- cinema
- chapter
- professionals
- sipping
- martinis
- sushi
- vat
- assistance
- starve
- south
- central
- firm
- police
- officer
- viacom
- digits
- speaking
- network
- charging
- connect
- outages
- hurricane
- katrina
- chose
- maam
- proven
- failing
- receive
- cuts
- using
- flip
- writing
- ms
- fall
- older
- game
- orange
- pink
- goodies
- battling
- sees
- flat
- stronger
- acted
- deserves
- hats
- shore
- pokes
- nah
- paul
- boats
- dammit
- enjoys
- bound
- harm
- pleasured
- lure
- devil
- rile
- topic
- initialed
- lets
- correctly
- spelled
- signed
- shitty
- timing
- susie
- tours
- emotionally
- bullshit
- enlist
- lie
- traditional
- church
- cabins
- flowery
- naturey
- midsummer
- excitement
- hoping
- attacked
- bears
- trim
- cooler
- dog
- tanish
- contrast
- cake
- buffet
- fried
- chicken
- mashed
- potatoes
- happier
- thrilled
- ecstatic
- rushed
- pressure
- interviews
- favors
- bite
- excessive
- unemployed
- cab
- gas
- possibly
- extreme
- trained
- presentable
- quote
- buck
- chugging
- engine
- realm
- minimum
- wage
- fry
- flipper
- bottom
- clear
- affect
- cle
- dressed
- shave
- legs
- presentation
- eighty
- success
- position
- training
- mcdonalds
- tv
- rainbow
- colored
- crap
- safely
- destination
- percoes
- equivalent
- amends
- courtesy
- inconveniencing
- near
- communicate
- conditions
- frequently
- current
- expecting
- pissed
- honor
- grandmother
- condition
- inevitable
- peace
- general
- mace
- present
- knife
- puny
- underwater
- basket
- weaving
- lying
- decided
- works
- worried
- occasion
- cruisers
- vibe
- greek
- lessons
- suck
- celebrating
- crush
- throughout
- test
- waters
- movies
- vermont
- cruiser
- abused
- frat
- boys
- dorms
- dell
- requests
- fixed
- dealt
- worries
- refunded
- situa
- relevant
- ordered
- orders
- others
- incorrectly
- tomatoes
- del
- cents
- attached
- cuz
- hoped
- opportunity
- rushing
- goods
- skipped
- breath
- kleenex
- alaska
- bearing
- hated
- holes
- calf
- witch
- whore
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.5
ignore_id: -1
lsm_weight: 0.0
length_normalized_loss: false
report_cer: true
report_wer: true
sym_space: <space>
sym_blank: <blank>
extract_feats_in_collect_stats: true
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 4
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
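The `specaug_conf` block above applies time warping plus two frequency masks (width 0-30 bins) and two time masks (width 0-40 frames). A toy numpy rendition of the masking half (warping omitted; purely illustrative, not the ESPnet implementation):
```python
# Toy SpecAugment-style masking matching the widths in specaug_conf above.
import numpy as np

def apply_masks(spec: np.ndarray, num_freq_mask: int = 2, freq_width: int = 30,
                num_time_mask: int = 2, time_width: int = 40, seed: int = 0) -> np.ndarray:
    """spec: (frames, freq_bins) features; returns a masked copy."""
    rng = np.random.default_rng(seed)
    spec = spec.copy()
    for _ in range(num_freq_mask):  # zero out random frequency bands
        w = int(rng.integers(0, freq_width + 1))
        f0 = int(rng.integers(0, max(spec.shape[1] - w, 1)))
        spec[:, f0:f0 + w] = 0.0
    for _ in range(num_time_mask):  # zero out random time spans
        w = int(rng.integers(0, time_width + 1))
        t0 = int(rng.integers(0, max(spec.shape[0] - w, 1)))
        spec[t0:t0 + w, :] = 0.0
    return spec

masked = apply_masks(np.ones((200, 80)))  # e.g. 200 frames x 80 mel bins
```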
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["iemocap"]}
|
espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:iemocap",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-iemocap #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/YushiUeda\_iemocap\_sentiment\_asr\_train\_asr\_conformer'
This model was trained by Yushi Ueda using iemocap recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Thu Feb 17 11:25:22 EST 2022'
* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.7a1'
* pytorch version: 'pytorch 1.9.0+cu102'
* Git hash: 'f6cde1c419c814a14ccd40abe557a780508cbcdf'
+ Commit date: 'Fri Feb 11 12:25:33 2022 -0500'
Using Conformer based encoder and Transformer based decoder with spectral augmentation and predicting transcript along with sentiment
-------------------------------------------------------------------------------------------------------------------------------------
* ASR config: conf/tuning/train\_asr\_conformer.yaml
* token\_type: word
* labels: Positive, Neutral, Negative
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/YushiUeda\\_iemocap\\_sentiment\\_asr\\_train\\_asr\\_conformer'\n\n\nThis model was trained by Yushi Ueda using iemocap recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Thu Feb 17 11:25:22 EST 2022'\n* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.9.0+cu102'\n* Git hash: 'f6cde1c419c814a14ccd40abe557a780508cbcdf'\n\t+ Commit date: 'Fri Feb 11 12:25:33 2022 -0500'\n\n\nUsing Conformer based encoder and Transformer based decoder with spectral augmentation and predicting transcript along with sentiment\n-------------------------------------------------------------------------------------------------------------------------------------\n\n\n* ASR config: conf/tuning/train\\_asr\\_conformer.yaml\n* token\\_type: word\n* labels: Positive, Neutral, Negative\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-iemocap #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/YushiUeda\\_iemocap\\_sentiment\\_asr\\_train\\_asr\\_conformer'\n\n\nThis model was trained by Yushi Ueda using iemocap recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Thu Feb 17 11:25:22 EST 2022'\n* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.9.0+cu102'\n* Git hash: 'f6cde1c419c814a14ccd40abe557a780508cbcdf'\n\t+ Commit date: 'Fri Feb 11 12:25:33 2022 -0500'\n\n\nUsing Conformer based encoder and Transformer based decoder with spectral augmentation and predicting transcript along with sentiment\n-------------------------------------------------------------------------------------------------------------------------------------\n\n\n* ASR config: conf/tuning/train\\_asr\\_conformer.yaml\n* token\\_type: word\n* labels: Positive, Neutral, Negative\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert`
This model was trained by Yushi Ueda using iemocap recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout dfa2868243a897c2a6c34b7407eaea5e4b5508a5
pip install -e .
cd egs2/iemocap/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert
```
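This variant adds a self-supervised (HuBERT-style) frontend whose parameters are frozen during training (`freeze_param: frontend.upstream` in the config below). A hedged sketch that loads the model and separates frozen-frontend from trained parameters; the model-zoo name and the `frontend.upstream` prefix are assumptions taken from the config:
```python
# Hedged sketch: load the model and count frozen SSL-frontend vs. trained
# parameters. The model name and parameter-name prefix are assumptions.
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack("espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert"),
    device="cpu",
)

model = speech2text.asr_model
frontend = sum(p.numel() for n, p in model.named_parameters()
               if n.startswith("frontend.upstream"))
rest = sum(p.numel() for n, p in model.named_parameters()
           if not n.startswith("frontend.upstream"))
print(f"frozen SSL frontend parameters: {frontend:,}")
print(f"remaining (trained) parameters: {rest:,}")
```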
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sat Feb 12 23:11:32 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `f6cde1c419c814a14ccd40abe557a780508cbcdf`
- Commit date: `Fri Feb 11 12:25:33 2022 -0500`
## Using Conformer based encoder, Transformer based decoder, and self-supervised learning features with spectral augmentation and predicting transcript along with sentiment
- ASR config: [conf/tuning/train_asr_conformer_hubert.yaml](conf/tuning/train_asr_conformer_hubert.yaml)
- token_type: word
- Sentiment Labels: Positive, Neutral, Negative
|dataset|Snt|Sentiment Classification Macro F1 (%)| Weighted F1 (%)| Micro F1 (%)|
|---|---|---|---|---|
|decode_asr_model_valid.acc.ave_10best/valid|754|66.5|76.4|75.7|
|decode_asr_model_valid.acc.ave_10best/test|1650|62.0|65.5|65.8|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_hubert.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_hubert_sentiment
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid/wav.scp
- speech
- sound
- - dump/raw/valid/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- i
- you
- Negative
- to
- it
- '''s'
- the
- '''t'
- that
- and
- Neutral
- Positive
- a
- know
- what
- of
- like
- we
- don
- just
- is
- do
- this
- '''m'
- me
- have
- can
- in
- for
- 'no'
- so
- not
- '''re'
- my
- but
- mean
- be
- going
- all
- was
- they
- well
- want
- yeah
- right
- get
- 'on'
- there
- he
- oh
- here
- go
- out
- with
- your
- if
- okay
- are
- she
- at
- '''ll'
- '''ve'
- got
- think
- about
- up
- see
- then
- why
- how
- time
- really
- one
- now
- or
- as
- back
- look
- her
- him
- been
- because
- 'yes'
- would
- didn
- little
- did
- good
- some
- them
- something
- need
- maybe
- never
- um
- come
- take
- god
- had
- could
- will
- uh
- am
- people
- thing
- when
- very
- let
- much
- sorry
- from
- again
- long
- give
- anything
- too
- make
- fish
- years
- where
- isn
- three
- said
- things
- nothing
- help
- work
- tell
- guess
- over
- 'off'
- business
- even
- sir
- any
- his
- around
- were
- way
- who
- new
- kind
- '''d'
- our
- everything
- more
- came
- an
- should
- down
- understand
- only
- great
- else
- man
- line
- us
- ask
- last
- doing
- say
- waiting
- other
- lot
- job
- feel
- yourself
- point
- thought
- day
- whole
- away
- coming
- better
- marry
- always
- these
- still
- wrong
- two
- sure
- care
- phone
- probably
- remember
- annie
- life
- year
- believe
- gonna
- supposed
- went
- first
- talk
- listen
- alright
- before
- thinking
- after
- stuff
- happy
- ever
- turn
- thank
- home
- fine
- into
- than
- call
- money
- stay
- actually
- every
- hope
- love
- huh
- married
- wait
- somewhere
- has
- being
- father
- larry
- hell
- wanted
- trying
- getting
- guys
- name
- saying
- bag
- hear
- girl
- hey
- flashlight
- beach
- put
- leave
- dollars
- mind
- augie
- does
- won
- fifty
- excited
- hate
- four
- done
- through
- their
- keep
- car
- lost
- doesn
- happen
- wouldn
- school
- big
- calm
- night
- '''cause'
- id
- another
- though
- myself
- nobody
- somebody
- best
- might
- same
- form
- mom
- nice
- matter
- spot
- stop
- told
- by
- shut
- enough
- five
- joe
- hard
- find
- course
- chris
- drunk
- snap
- luggage
- rather
- standing
- someone
- laugh
- took
- those
- please
- live
- six
- ridiculous
- minute
- looking
- bring
- show
- start
- brought
- days
- must
- pretty
- sort
- talking
- sand
- child
- working
- send
- next
- hundred
- whatever
- many
- moon
- moment
- champagne
- s
- problem
- end
- real
- dear
- happened
- person
- place
- fill
- awesome
- house
- such
- cool
- c
- haven
- knew
- die
- finally
- glasses
- stupid
- least
- dad
- supervisor
- totally
- each
- try
- waited
- idea
- u
- party
- asked
- anymore
- sick
- evening
- license
- kid
- wow
- flight
- felt
- pay
- since
- single
- miss
- without
- different
- mmhmm
- free
- sometimes
- yet
- couldn
- view
- hour
- knows
- drive
- themselves
- swim
- ah
- brandy
- fact
- ma
- '''am'
- already
- part
- sit
- thanks
- comes
- check
- everyone
- started
- kiss
- weren
- hotel
- own
- beast
- bad
- above
- run
- worst
- grunions
- darling
- seem
- baby
- turned
- gone
- shouldn
- exactly
- reason
- full
- both
- crazy
- pack
- bit
- swimming
- liquor
- seemed
- serious
- cause
- peter
- burden
- gosh
- forgot
- happens
- alone
- pass
- letters
- heard
- manager
- hours
- baggage
- card
- number
- argue
- seen
- walk
- forget
- kids
- family
- blanket
- honey
- open
- quite
- gotta
- forms
- mother
- old
- needs
- times
- airline
- which
- once
- service
- week
- together
- twenty
- stand
- made
- fun
- dead
- sake
- men
- kate
- today
- plane
- most
- carla
- driving
- deal
- information
- wanna
- definitely
- while
- yea
- certificate
- particular
- lots
- calling
- fortune
- write
- entire
- found
- trouble
- use
- forever
- woman
- enjoy
- room
- damn
- war
- meaning
- longer
- jacket
- ticket
- twice
- sent
- wonder
- small
- amanda
- cannot
- able
- half
- ha
- saw
- bus
- ago
- hmm
- hi
- kidding
- giving
- gave
- move
- women
- ahead
- york
- guy
- suppose
- company
- incredible
- either
- minutes
- tonight
- shoes
- utterly
- wasn
- filled
- gets
- amazing
- beautiful
- hello
- birth
- prove
- choice
- friend
- expect
- says
- blue
- anywhere
- died
- weird
- umm
- blood
- d
- face
- body
- alive
- diagram
- goes
- read
- far
- race
- wind
- fly
- interested
- california
- coast
- news
- past
- charles
- floor
- idiotic
- indeed
- absolutely
- softball
- answer
- somehow
- having
- campus
- completely
- file
- everybody
- given
- fair
- front
- telling
- tried
- sign
- helping
- dollar
- used
- takes
- hair
- behind
- head
- also
- question
- pull
- brother
- nonsense
- kill
- pocket
- cold
- mine
- watching
- shall
- divorce
- driver
- m
- makes
- cried
- security
- suitcase
- seems
- control
- set
- letter
- realized
- paper
- weeks
- address
- sweet
- lose
- huge
- death
- ones
- living
- glad
- bed
- until
- thinks
- wedding
- pieces
- parents
- ready
- almost
- forgive
- kissed
- silver
- during
- forty
- lives
- grow
- arrive
- eyes
- putting
- quiet
- poor
- presents
- sting
- tired
- row
- anyhow
- window
- v
- thousand
- watch
- ashamed
- figure
- vacation
- application
- left
- certainly
- calls
- months
- student
- close
- helpful
- called
- welcome
- major
- match
- morning
- fit
- reach
- door
- wife
- faith
- noticed
- several
- killed
- accident
- rat
- flop
- hands
- ear
- dancing
- hairs
- bugging
- dinner
- bills
- worked
- bored
- conversation
- tunis
- overbearing
- grand
- nine
- amusing
- vile
- tempered
- obviously
- tomorrow
- taken
- eight
- venice
- worth
- boy
- realize
- midnight
- evil
- sixteen
- gotten
- paying
- bottle
- smart
- cindy
- excuse
- along
- seven
- children
- figured
- jobs
- joke
- charge
- memorial
- sitting
- hardly
- young
- story
- feels
- pronouncing
- insane
- forgotten
- fast
- inspire
- grub
- tough
- arguing
- air
- toss
- instance
- raining
- pair
- dry
- socks
- selfish
- included
- yours
- mystery
- mindedness
- urgency
- pure
- urge
- insulting
- ideas
- herself
- period
- missed
- backwards
- dance
- worms
- pop
- except
- perfect
- blow
- funny
- listening
- sadistic
- bully
- cruel
- 'true'
- second
- acting
- lucky
- handle
- loved
- hit
- shaking
- destroyed
- changed
- book
- eleven
- animals
- ice
- cream
- brings
- frustrating
- otherwise
- onto
- pregnant
- operator
- baltimore
- san
- diego
- contract
- brown
- friends
- pictures
- internet
- piece
- high
- anyone
- tickets
- inconvenience
- gift
- usually
- green
- city
- couple
- chuck
- growing
- pick
- throw
- yay
- walking
- grave
- considerate
- inspired
- looked
- mistake
- believes
- avoid
- sucker
- rock
- strangers
- missing
- hide
- geez
- imagination
- overseas
- command
- earth
- monument
- difference
- zipped
- kansas
- reservations
- ahh
- formed
- barefoot
- shower
- running
- garage
- knickerbocker
- locker
- wasting
- roses
- peaches
- rosy
- mention
- shh
- behave
- exquisitely
- beautifully
- rolling
- biting
- scratching
- panthers
- suddenly
- ought
- dreadfully
- pity
- eye
- world
- making
- bark
- roll
- hoops
- insufferable
- weak
- upstairs
- insist
- boorish
- conceited
- impossible
- torment
- brute
- perfectly
- wicked
- crawling
- top
- wish
- wants
- bank
- plan
- soon
- plenty
- bags
- congratulations
- play
- carry
- ignore
- sudden
- refrigerator
- loot
- fight
- lights
- swallows
- goose
- bumps
- keeps
- fighting
- massive
- celebration
- sex
- human
- ours
- light
- minded
- social
- needed
- anyway
- words
- problems
- claim
- reimburse
- checked
- airport
- meet
- e
- responsibility
- grunion
- knees
- thousands
- important
- shows
- goddamn
- strong
- law
- sara
- brent
- passport
- aren
- month
- romantic
- leaving
- random
- applied
- interesting
- regular
- taking
- harder
- hurt
- movie
- freaking
- record
- airlines
- responsible
- honestly
- grew
- proud
- hang
- mrs
- fellow
- terrible
- contradict
- infuriate
- throws
- afraid
- suffer
- bloody
- settled
- thrash
- may
- son
- faithful
- moments
- act
- sleep
- detroit
- planning
- yard
- particularly
- natural
- phenomenon
- highlight
- flopping
- laying
- eggs
- mating
- orgy
- magic
- unexplainable
- instincts
- seaweed
- instinctual
- firecracker
- spent
- clasped
- intimate
- special
- wishes
- seriously
- refreshments
- ooh
- pinpoint
- marge
- dishes
- fat
- ring
- later
- shivers
- spine
- sillier
- poise
- trumpets
- squeakers
- sockets
- allure
- contrary
- violently
- glass
- temperamental
- fiend
- loathe
- adder
- riotous
- mentioned
- intemperate
- tots
- downstairs
- mad
- loose
- lived
- yelling
- happening
- promise
- known
- exciting
- finish
- college
- atlanta
- searching
- fired
- drinking
- jesus
- lock
- plans
- hole
- santa
- kitchen
- invite
- believing
- ann
- landing
- eats
- panties
- sore
- throat
- unmistakable
- capistrano
- lemmings
- cliffs
- invitation
- map
- heaven
- carpet
- poodle
- suicide
- pact
- turns
- court
- dies
- mustn
- vampire
- identification
- places
- danger
- hand
- middle
- situation
- option
- willing
- paid
- horrible
- pain
- anybody
- paperwork
- difficult
- dream
- sakes
- matters
- toes
- become
- habit
- hold
- survive
- break
- babe
- shit
- contact
- land
- water
- transfer
- backersen
- desk
- wallet
- stolen
- credit
- cards
- clearly
- appreciate
- complicated
- uhuh
- bucks
- win
- theatre
- resume
- riding
- helps
- less
- planes
- means
- future
- ran
- red
- wrote
- loans
- spend
- dreaming
- proof
- shooting
- crack
- cracked
- dares
- invited
- breaks
- embarrassed
- wondering
- aw
- style
- granted
- embarrassing
- mixed
- su
- spawning
- stubbed
- toe
- bodies
- expectantly
- meant
- beginning
- traumatized
- freda
- sooner
- applies
- philosophers
- rots
- trivial
- torture
- stiff
- venom
- fangs
- wake
- bended
- voice
- build
- unbelievable
- hiring
- resumes
- eventually
- aggressive
- awhile
- especially
- further
- mass
- pointless
- claus
- neither
- mmm
- cannes
- figures
- burnt
- debate
- exception
- busy
- safe
- possible
- spring
- starting
- buy
- rest
- office
- complaint
- accepted
- ten
- area
- seats
- foam
- vibrations
- drives
- popped
- slightly
- exaggerated
- scientific
- proposed
- bathroom
- awful
- scene
- adders
- afford
- packet
- forward
- customer
- brand
- yellow
- fifteen
- brian
- asking
- percent
- girlfriend
- acceptance
- patient
- patience
- dishonest
- cheese
- restaurant
- t
- sixty
- direct
- holiday
- inn
- refund
- hmmm
- receiving
- sim
- browns
- unacceptable
- northwest
- dorky
- putt
- change
- filling
- z
- x
- simple
- mail
- request
- raise
- town
- hadn
- played
- pennies
- visa
- visit
- loves
- list
- environment
- frustrated
- ride
- imagine
- flew
- nash
- replace
- paris
- personal
- issue
- flights
- track
- angry
- headstone
- cemetery
- cancer
- poetry
- palm
- l
- dropped
- bunch
- p
- chair
- broke
- o
- allow
- nights
- talent
- ignoring
- center
- lovely
- sneaking
- whose
- es
- naturally
- stays
- wide
- bought
- arm
- exact
- curtsy
- wiggle
- superficial
- paint
- naked
- vendome
- rouser
- younger
- jealous
- fascinating
- duty
- photographer
- studio
- cad
- restraint
- ill
- knee
- applying
- questions
- picture
- fake
- apartment
- cash
- drink
- upset
- sending
- flying
- speak
- details
- wherever
- unfortunate
- education
- leaves
- basically
- hospital
- messed
- sounds
- pinch
- malibu
- drop
- team
- professional
- till
- ambiguous
- seeing
- ugh
- wet
- heading
- release
- fire
- inside
- pr
- includes
- rub
- ludicrous
- wriggle
- flippancy
- acid
- sweetness
- curling
- dressing
- gown
- broach
- enjoyable
- original
- '''em'
- early
- ok
- daughter
- age
- steps
- rejected
- starts
- competitive
- hired
- worse
- itself
- nowhere
- unfortunately
- process
- fault
- decision
- package
- easy
- transferred
- straight
- suckers
- none
- returning
- throwing
- cork
- softest
- breathe
- road
- catch
- threw
- canal
- comb
- towels
- sacred
- savor
- delight
- needn
- late
- web
- website
- rough
- daddy
- talked
- feeling
- talented
- interview
- food
- looks
- misplaced
- theft
- likely
- stuck
- tags
- cult
- everywhere
- menu
- choose
- press
- lady
- bill
- department
- online
- immediately
- miles
- notice
- vote
- heavens
- yell
- anna
- tables
- hasn
- stole
- losing
- unfair
- positive
- boston
- celebrate
- system
- turning
- newspapers
- pays
- dare
- jokes
- swine
- demand
- building
- finished
- staying
- cheap
- anyways
- okey
- lobster
- wonderful
- harvard
- engineering
- summer
- lawyer
- mr
- lax
- delta
- funeral
- report
- property
- whoever
- corporate
- miso
- soup
- holy
- olivia
- camera
- power
- sold
- testing
- greens
- explain
- agreement
- undecided
- access
- babies
- street
- vegas
- slot
- honeymoon
- husband
- penny
- slots
- wheel
- cat
- citizenship
- england
- fan
- spending
- craig
- services
- monster
- baloney
- saving
- necessarily
- carousel
- cameras
- airplane
- sentimental
- value
- incredibly
- shopping
- jet
- clothes
- apologize
- allowed
- amount
- candy
- redlands
- sprinklers
- whenever
- brain
- park
- holding
- memorized
- surgery
- audience
- joy
- scholarships
- commuting
- h
- ruined
- mm
- bet
- neighborhood
- sticking
- woo
- teach
- class
- confused
- clock
- foolish
- ocean
- distinctly
- whispered
- wishing
- white
- elliott
- strange
- quest
- ultimate
- truth
- shan
- word
- disagreeable
- wench
- birthday
- national
- thin
- rent
- colors
- citizen
- account
- '''til'
- hire
- short
- fuse
- america
- audition
- sponge
- language
- arriving
- reimbursement
- computer
- cover
- ass
- dealing
- quick
- freaks
- pitch
- hitting
- housing
- force
- scholarship
- dirty
- depends
- helicopter
- wild
- sport
- games
- streets
- although
- mi
- trust
- cracker
- curtsey
- bicker
- irons
- besides
- splendid
- born
- weekends
- letting
- tear
- apart
- touch
- flipped
- hot
- outside
- flowers
- candles
- approve
- surprised
- lead
- ends
- worthless
- apparently
- worker
- annoy
- belongings
- disappeared
- under
- case
- checking
- admit
- risk
- agreed
- yesterday
- country
- financial
- aid
- within
- automated
- systems
- specific
- rate
- star
- aisle
- afternoon
- maui
- machine
- waste
- available
- confirmed
- thinkin
- liked
- kicked
- intermittently
- burned
- desire
- fade
- passion
- laughable
- cunning
- mirrors
- painted
- wooden
- snake
- suspicious
- nosey
- silly
- wonders
- order
- standard
- site
- sense
- dangerous
- cute
- whether
- considering
- opinion
- f
- few
- guarantee
- possessions
- claims
- sue
- easier
- cared
- expected
- trip
- europe
- its
- circles
- large
- store
- macy
- rotary
- instead
- showed
- hundreds
- planned
- someplace
- sensitive
- popping
- opened
- backrub
- fantasy
- damned
- sheet
- cut
- purchase
- amy
- quit
- clapping
- onstage
- eighteen
- auditioning
- rejection
- prepared
- thirty
- master
- kelly
- natalie
- pants
- isabella
- verizon
- goodbye
- fucking
- challenge
- slept
- created
- checkbook
- argument
- uhh
- perhaps
- loath
- complete
- sad
- priorities
- between
- moving
- song
- temporary
- pulling
- smith
- receptionist
- extra
- lodging
- eh
- la
- cost
- boss
- peanuts
- doctor
- production
- downtown
- april
- contracts
- incompetent
- realtor
- fix
- payphone
- verify
- electrical
- outage
- symptoms
- nature
- pilot
- hook
- realizes
- bother
- trade
- event
- meadow
- faint
- blues
- bananas
- overnight
- station
- attention
- purchasing
- terms
- taser
- excellent
- counsel
- sorority
- golfing
- library
- dork
- taco
- branch
- separate
- sacrifices
- mothers
- kicking
- videotape
- stream
- sitters
- moved
- computers
- machines
- bride
- cruise
- likes
- tabs
- plays
- giant
- renamed
- brenda
- lumber
- janet
- state
- quarters
- costs
- escort
- reliable
- board
- posting
- trail
- following
- fantastic
- mighty
- recommending
- generally
- outline
- affords
- save
- carpool
- frustration
- refuse
- anger
- fourth
- lines
- fourteen
- mileage
- candid
- packed
- replaced
- expensive
- lawsuit
- cruising
- bruising
- president
- mistakenly
- behalf
- listed
- liable
- held
- sean
- badge
- employee
- impression
- cemeteries
- urban
- oasis
- wandering
- hers
- pathetic
- ground
- stones
- tumors
- heather
- built
- prospect
- garden
- section
- parties
- feet
- poems
- curly
- tree
- crown
- john
- dunn
- begin
- wheelchair
- reciting
- envelope
- grants
- mold
- minds
- mess
- rapper
- ho
- masters
- teacher
- dash
- popular
- seasoning
- messing
- ruin
- woke
- darkest
- beating
- bush
- porch
- fresh
- rooms
- sweetest
- pets
- cheeked
- brooch
- however
- jones
- voices
- berating
- christmas
- shame
- bunker
- guard
- spread
- companies
- shipping
- shock
- group
- dual
- unattached
- engagement
- sock
- dude
- lucked
- blush
- beige
- loaded
- craziest
- offered
- spoke
- english
- accent
- illegal
- jail
- caught
- hardcore
- tropical
- bahamas
- tahiti
- wealthy
- royalty
- removed
- attitude
- extremely
- hostile
- cutting
- sentence
- jumping
- produce
- field
- shake
- across
- soaked
- dying
- georgia
- educated
- boarding
- attendance
- seat
- offer
- publicize
- abuse
- insinuating
- smug
- mouth
- tossing
- hanky
- black
- wheels
- easily
- overhead
- compartment
- data
- collecting
- lip
- coffee
- smoking
- cigarettes
- union
- differently
- numb
- sickness
- boom
- mortality
- affecting
- slow
- books
- per
- diem
- victorian
- houses
- west
- sider
- commute
- practice
- neon
- softballs
- glow
- co
- ed
- nationally
- ranked
- ping
- pong
- denigrate
- rookie
- donuts
- recently
- pitcher
- hitter
- mostly
- shortstop
- ex
- trojans
- sports
- nicer
- monica
- player
- type
- helipad
- fell
- literally
- doubt
- cares
- mustache
- papers
- crying
- floorboards
- sorted
- everyday
- seas
- bringing
- sacrifice
- guilty
- opening
- return
- jumped
- distinctively
- direction
- tiny
- action
- passed
- cheeks
- darn
- urgh
- restrain
- self
- centered
- registration
- lunch
- documents
- identifications
- deadline
- carries
- official
- documentation
- government
- wireless
- crucial
- pulls
- kinda
- girly
- radiant
- ya
- shine
- invitations
- response
- mcdonald
- level
- member
- pavement
- indicators
- prejudice
- against
- applications
- hating
- physically
- amateur
- crawl
- dumber
- cases
- etiquette
- bug
- opinions
- magically
- irresponsible
- carrousel
- contents
- main
- liability
- provides
- shops
- reimbursed
- investigate
- provide
- uncommon
- johnny
- conscious
- stories
- africa
- image
- hurts
- goout
- gradual
- impact
- subside
- heals
- parts
- football
- recognizable
- accomplished
- prestige
- load
- worrying
- decide
- tour
- friendly
- ivy
- walls
- collegiate
- g
- choices
- math
- prestigious
- departments
- orientation
- graduate
- shiloh
- valued
- customers
- previous
- purchases
- scheduling
- highly
- discounted
- uses
- corporation
- hotels
- rated
- aisles
- switch
- fortunately
- allows
- spare
- shuttle
- appropriate
- traveling
- deals
- shuttles
- sleeps
- gee
- futile
- moralists
- unbearable
- flippant
- shibboleths
- rush
- madly
- piazza
- iron
- dri
- counter
- applica
- lonely
- disappear
- video
- definitive
- magazine
- boyfriend
- stage
- golly
- concert
- crew
- freak
- guaranteed
- nervous
- hah
- persistence
- factors
- types
- male
- female
- consideration
- cooking
- reconsidering
- uhm
- retirement
- foot
- persistent
- table
- skewed
- painting
- outer
- employment
- unlucky
- planet
- normal
- peoples
- reading
- difficulties
- loading
- mishap
- cart
- shipped
- tracking
- reim
- tight
- error
- continue
- 'false'
- compensate
- policy
- gifts
- nobodies
- tag
- originally
- shoe
- core
- memories
- kathy
- lasted
- gary
- closed
- surreal
- troops
- loving
- los
- angeles
- schools
- kinds
- secrets
- explore
- rip
- nuts
- champions
- leaning
- towards
- communications
- broad
- confined
- ropes
- recording
- depending
- leads
- bypass
- zero
- pleasant
- ebay
- bye
- steve
- hint
- asks
- tone
- pretend
- protection
- rid
- submit
- print
- regarding
- grievance
- sites
- protected
- processed
- careful
- secure
- unreliable
- trash
- kept
- spotting
- certain
- specifically
- pushing
- headed
- ears
- watched
- sends
- ceaseless
- wear
- often
- pleasure
- sonya
- promoted
- nurses
- mommy
- va
- videotaped
- cousin
- postpone
- performance
- swear
- cast
- spotlight
- microphone
- tripped
- surprise
- scored
- points
- members
- loser
- marrying
- weddings
- carats
- lousy
- chaperone
- drowsy
- deserve
- cry
- tears
- happiness
- marriage
- commercials
- refection
- financially
- studied
- passing
- russel
- crowe
- pooling
- funds
- owe
- learning
- role
- auditions
- denny
- tip
- teaching
- oof
- france
- steal
- keys
- laughing
- rosenkrantz
- thingy
- bopper
- limit
- whoa
- ways
- suffered
- disease
- handsome
- gifted
- parent
- ripped
- uveny
- tricia
- chemo
- baseball
- benny
- nat
- nation
- bread
- eat
- beer
- dorm
- sometime
- mattresses
- reserved
- grauman
- scale
- whooooo
- acti
- film
- art
- academy
- films
- fuck
- ethiopia
- cuddle
- profanity
- provider
- satellites
- average
- compensating
- unbeknownst
- satellite
- exaggerate
- advising
- addressed
- fax
- dumb
- fritz
- incoming
- million
- grown
- fella
- shootin
- travel
- sat
- instinct
- goosebumps
- arms
- danced
- intimately
- spart
- strumpets
- bristling
- diamonds
- taste
- portion
- side
- stairs
- condescending
- copy
- proceed
- remove
- missy
- behaving
- sweetie
- deploy
- specialist
- increase
- triple
- promotion
- retire
- quiets
- faster
- career
- lame
- drew
- barrymore
- nasty
- mouse
- cheesy
- jane
- tarzan
- engaged
- esmeralda
- hitched
- spontaneous
- character
- conga
- dim
- pulled
- chucky
- sarah
- guiding
- graduated
- apply
- colleges
- energy
- busing
- clerk
- excuses
- qualified
- chang
- investment
- banking
- deloitte
- touche
- temp
- degrading
- smarter
- astronaut
- biomedical
- internship
- plus
- breaking
- evicting
- typing
- shoot
- degree
- science
- club
- joking
- doomed
- maryland
- cooperate
- emergency
- pounds
- urn
- deduction
- sherlock
- holmes
- vessel
- burst
- caption
- therefore
- placed
- firing
- lobby
- fastest
- ibm
- misplace
- count
- hanging
- explanation
- follow
- footsteps
- overboard
- paralyzed
- coma
- fucked
- studying
- countries
- goal
- met
- greatest
- hopefully
- mmmm
- cinema
- chapter
- professionals
- sipping
- martinis
- sushi
- vat
- assistance
- starve
- south
- central
- firm
- police
- officer
- viacom
- digits
- speaking
- network
- charging
- connect
- outages
- hurricane
- katrina
- chose
- maam
- proven
- failing
- receive
- cuts
- using
- flip
- writing
- ms
- fall
- older
- game
- orange
- pink
- goodies
- battling
- sees
- flat
- stronger
- acted
- deserves
- hats
- shore
- pokes
- nah
- paul
- boats
- dammit
- enjoys
- bound
- harm
- pleasured
- lure
- devil
- rile
- topic
- initialed
- lets
- correctly
- spelled
- signed
- shitty
- timing
- susie
- tours
- emotionally
- bullshit
- enlist
- lie
- traditional
- church
- cabins
- flowery
- naturey
- midsummer
- excitement
- hoping
- attacked
- bears
- trim
- cooler
- dog
- tanish
- contrast
- cake
- buffet
- fried
- chicken
- mashed
- potatoes
- happier
- thrilled
- ecstatic
- rushed
- pressure
- interviews
- favors
- bite
- excessive
- unemployed
- cab
- gas
- possibly
- extreme
- trained
- presentable
- quote
- buck
- chugging
- engine
- realm
- minimum
- wage
- fry
- flipper
- bottom
- clear
- affect
- cle
- dressed
- shave
- legs
- presentation
- eighty
- success
- position
- training
- mcdonalds
- tv
- rainbow
- colored
- crap
- safely
- destination
- percoes
- equivalent
- amends
- courtesy
- inconveniencing
- near
- communicate
- conditions
- frequently
- current
- expecting
- pissed
- honor
- grandmother
- condition
- inevitable
- peace
- general
- mace
- present
- knife
- puny
- underwater
- basket
- weaving
- lying
- decided
- works
- worried
- occasion
- cruisers
- vibe
- greek
- lessons
- suck
- celebrating
- crush
- throughout
- test
- waters
- movies
- vermont
- cruiser
- abused
- frat
- boys
- dorms
- dell
- requests
- fixed
- dealt
- worries
- refunded
- situa
- relevant
- ordered
- orders
- others
- incorrectly
- tomatoes
- del
- cents
- attached
- cuz
- hoped
- opportunity
- rushing
- goods
- skipped
- breath
- kleenex
- alaska
- bearing
- hated
- holes
- calf
- witch
- whore
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: hubert_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
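Since this config decodes at the word level and the card notes the sentiment labels (Positive, Neutral, Negative) are predicted along with the transcript, here is a minimal Python inference sketch. The `espnet_model_zoo` downloader and `Speech2Text` API are standard ESPnet2 tooling, but treat the model-name argument and the I/O handling as assumptions rather than an official demo.
```python
# A minimal sketch, assuming the standard espnet_model_zoo + Speech2Text API.
import soundfile

from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    # download_and_unpack returns the config/checkpoint kwargs Speech2Text expects
    **d.download_and_unpack("espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert"),
    ctc_weight=0.3,  # matches model_conf above
)

speech, rate = soundfile.read("utterance.wav")  # 16 kHz mono, per frontend_conf fs: 16k
text, tokens, token_ids, hyp = speech2text(speech)[0]
# With token_type: word, the decoded string carries the transcript together with
# the predicted sentiment label (Positive/Neutral/Negative).
print(text)
```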
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["iemocap"]}
|
espnet/YushiUeda_iemocap_sentiment_asr_train_asr_conformer_hubert
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:iemocap",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-iemocap #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/YushiUeda\_iemocap\_sentiment\_asr\_train\_asr\_conformer\_hubert'
This model was trained by Yushi Ueda using iemocap recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Sat Feb 12 23:11:32 EST 2022'
* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.7a1'
* pytorch version: 'pytorch 1.9.0+cu102'
* Git hash: 'f6cde1c419c814a14ccd40abe557a780508cbcdf'
+ Commit date: 'Fri Feb 11 12:25:33 2022 -0500'
Using Conformer based encoder, Transformer based decoder, and self-supervised learning features with spectral augmentation and predicting transcript along with sentiment
-------------------------------------------------------------------------------------------------------------------------------------------------------------------------
* ASR config: conf/tuning/train\_asr\_conformer\_hubert.yaml
* token\_type: word
* Sentiment Labels: Positive, Neutral, Negative
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/YushiUeda\\_iemocap\\_sentiment\\_asr\\_train\\_asr\\_conformer\\_hubert'\n\n\nThis model was trained by Yushi Ueda using iemocap recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Sat Feb 12 23:11:32 EST 2022'\n* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.9.0+cu102'\n* Git hash: 'f6cde1c419c814a14ccd40abe557a780508cbcdf'\n\t+ Commit date: 'Fri Feb 11 12:25:33 2022 -0500'\n\n\nUsing Conformer based encoder, Transformer based decoder, and self-supervised learning features with spectral augmentation and predicting transcript along with sentiment\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n* ASR config: conf/tuning/train\\_asr\\_conformer\\_hubert.yaml\n* token\\_type: word\n* Sentiment Labels: Positive, Neutral, Negative\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-iemocap #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/YushiUeda\\_iemocap\\_sentiment\\_asr\\_train\\_asr\\_conformer\\_hubert'\n\n\nThis model was trained by Yushi Ueda using iemocap recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Sat Feb 12 23:11:32 EST 2022'\n* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.9.0+cu102'\n* Git hash: 'f6cde1c419c814a14ccd40abe557a780508cbcdf'\n\t+ Commit date: 'Fri Feb 11 12:25:33 2022 -0500'\n\n\nUsing Conformer based encoder, Transformer based decoder, and self-supervised learning features with spectral augmentation and predicting transcript along with sentiment\n-------------------------------------------------------------------------------------------------------------------------------------------------------------------------\n\n\n* ASR config: conf/tuning/train\\_asr\\_conformer\\_hubert.yaml\n* token\\_type: word\n* Sentiment Labels: Positive, Neutral, Negative\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
null |
espnet
|
## ESPnet2 DIAR model
### `espnet/YushiUeda_mini_librispeech_diar_train_diar_raw_valid.acc.best`
This model was trained by YushiUeda using mini_librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 650472b45a67612eaac09c7fbd61dc25f8ff2405
pip install -e .
cd egs2/mini_librispeech/diar1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/YushiUeda_mini_librispeech_diar_train_diar_raw_valid.acc.best
```
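For Python-level inference without the full recipe, something like the sketch below should work. The `DiarizeSpeech` class lives in `espnet2.bin.diar_inference`, but the file paths and the output shape here are assumptions based on this config (`num_spk: 2`, 8 kHz input).
```python
# A minimal sketch; paths point at the files unpacked by the run.sh command above
# and are illustrative.
import soundfile

from espnet2.bin.diar_inference import DiarizeSpeech

diarize = DiarizeSpeech(
    train_config="exp/diar_train_diar_raw/config.yaml",
    model_file="exp/diar_train_diar_raw/valid.acc.best.pth",
)

speech, rate = soundfile.read("mixture.wav")  # 8 kHz, per frontend_conf fs: 8k
# Expected output: per-frame speaker activity scores, one track per speaker
# (num_spk: 2 in this configuration).
spk_prediction = diarize(speech[None, :])
```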
<!-- Generated by scripts/utils/show_diar_result.sh -->
# RESULTS
## Environments
- date: `Tue Jan 4 16:43:34 EST 2022`
- python version: `3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]`
- espnet version: `espnet 0.10.5a1`
- pytorch version: `pytorch 1.9.0+cu102`
- Git hash: `0b2a6786b6f627f47defaee22911b3c2dc04af2a`
- Commit date: `Thu Dec 23 12:22:49 2021 -0500`
## diar_train_diar_raw
### DER
dev_clean_2_ns2_beta2_500
|threshold_median_collar|DER|
|---|---|
|result_th0.3_med11_collar0.0|32.28|
|result_th0.3_med1_collar0.0|32.64|
|result_th0.4_med11_collar0.0|30.43|
|result_th0.4_med1_collar0.0|31.15|
|result_th0.5_med11_collar0.0|29.45|
|result_th0.5_med1_collar0.0|30.53|
|result_th0.6_med11_collar0.0|29.52|
|result_th0.6_med1_collar0.0|30.95|
|result_th0.7_med11_collar0.0|30.92|
|result_th0.7_med1_collar0.0|32.69|
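The result names above encode the post-processing sweep: `th` is the threshold applied to the per-frame speaker posteriors and `med` is the median-filter window (the best DER here, 29.45, comes from `th0.5`/`med11`). A minimal NumPy/SciPy sketch of that step, assuming `spk_prediction` holds scores of shape `[frames, num_spk]`:
```python
# Binarize per-frame speaker posteriors with a threshold, then smooth each
# speaker track with a median filter (kernel size must be odd).
import numpy as np
from scipy.signal import medfilt

def posteriors_to_decisions(spk_prediction, threshold=0.5, median=11):
    decisions = (spk_prediction > threshold).astype(float)
    if median > 1:
        decisions = np.stack(
            [medfilt(decisions[:, spk], kernel_size=median) for spk in range(decisions.shape[1])],
            axis=1,
        )
    return decisions
```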
## DIAR config
<details><summary>expand</summary>
```
config: conf/train_diar.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/diar_train_diar_raw
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 33757
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 3
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 3
nbest_averaging_interval: 0
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 16
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/diar_stats_8k/train/speech_shape
- exp/diar_stats_8k/train/spk_labels_shape
valid_shape_file:
- exp/diar_stats_8k/valid/speech_shape
- exp/diar_stats_8k/valid/spk_labels_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 200000
chunk_shift_ratio: 0.5
num_cache_chunks: 64
train_data_path_and_name_and_type:
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/train_clean_5_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
valid_data_path_and_name_and_type:
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/wav.scp
- speech
- sound
- - dump/raw/simu/data/dev_clean_2_ns2_beta2_500/espnet_rttm
- spk_labels
- rttm
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.01
scheduler: noamlr
scheduler_conf:
warmup_steps: 1000
num_spk: 2
init: xavier_uniform
input_size: null
model_conf:
attractor_weight: 1.0
use_preprocessor: true
frontend: default
frontend_conf:
fs: 8k
hop_length: 128
specaug: null
specaug_conf: {}
normalize: global_mvn
normalize_conf:
stats_file: exp/diar_stats_8k/train/feats_stats.npz
encoder: transformer
encoder_conf:
input_layer: linear
num_blocks: 2
linear_units: 512
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
decoder: linear
decoder_conf: {}
label_aggregator: label_aggregator
label_aggregator_conf: {}
attractor: null
attractor_conf: {}
required:
- output_dir
version: 0.10.5a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "diarization"], "datasets": ["mini_librispeech"]}
|
espnet/YushiUeda_mini_librispeech_diar_train_diar_raw_valid.acc.best
| null |
[
"espnet",
"audio",
"diarization",
"dataset:mini_librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #diarization #dataset-mini_librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 DIAR model
------------------
### 'espnet/YushiUeda\_mini\_librispeech\_diar\_train\_diar\_raw\_valid.URL'
This model was trained by YushiUeda using mini\_librispeech recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Tue Jan 4 16:43:34 EST 2022'
* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.5a1'
* pytorch version: 'pytorch 1.9.0+cu102'
* Git hash: '0b2a6786b6f627f47defaee22911b3c2dc04af2a'
+ Commit date: 'Thu Dec 23 12:22:49 2021 -0500'
diar\_train\_diar\_raw
----------------------
### DER
dev\_clean\_2\_ns2\_beta2\_500
DIAR config
-----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/YushiUeda\\_mini\\_librispeech\\_diar\\_train\\_diar\\_raw\\_valid.URL'\n\n\nThis model was trained by YushiUeda using mini\\_librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Jan 4 16:43:34 EST 2022'\n* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.5a1'\n* pytorch version: 'pytorch 1.9.0+cu102'\n* Git hash: '0b2a6786b6f627f47defaee22911b3c2dc04af2a'\n\t+ Commit date: 'Thu Dec 23 12:22:49 2021 -0500'\n\n\ndiar\\_train\\_diar\\_raw\n----------------------",
"### DER\n\n\ndev\\_clean\\_2\\_ns2\\_beta2\\_500\n\n\n\nDIAR config\n-----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #diarization #dataset-mini_librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/YushiUeda\\_mini\\_librispeech\\_diar\\_train\\_diar\\_raw\\_valid.URL'\n\n\nThis model was trained by YushiUeda using mini\\_librispeech recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Jan 4 16:43:34 EST 2022'\n* python version: '3.7.11 (default, Jul 27 2021, 14:32:16) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.5a1'\n* pytorch version: 'pytorch 1.9.0+cu102'\n* Git hash: '0b2a6786b6f627f47defaee22911b3c2dc04af2a'\n\t+ Commit date: 'Thu Dec 23 12:22:49 2021 -0500'\n\n\ndiar\\_train\\_diar\\_raw\n----------------------",
"### DER\n\n\ndev\\_clean\\_2\\_ns2\\_beta2\\_500\n\n\n\nDIAR config\n-----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `Yushi Ueda/ksponspeech_asr_train_asr_conformer8_n_fft512_hop_length256_raw_kr_bpe2309_valid.acc.best`
♻️ Imported from https://zenodo.org/record/5154341/
This model was trained by Yushi Ueda using ksponspeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
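Until the official demo is filled in, a minimal sketch along these lines should work; `ModelDownloader` and `Speech2Text` are the documented espnet_model_zoo pattern, and the model name is the one in the heading above (the wav path is a placeholder).
```python
import soundfile

from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack(
        "Yushi Ueda/ksponspeech_asr_train_asr_conformer8_n_fft512_hop_length256_raw_kr_bpe2309_valid.acc.best"
    )
)

speech, rate = soundfile.read("korean_utterance.wav")  # placeholder input
text, *_ = speech2text(speech)[0]
print(text)
```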
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "kr", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["ksponspeech"]}
|
espnet/Yushi_Ueda_ksponspeech_asr_train_asr_conformer8_n_fft512_hop_length256-truncated-eb42e5
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"kr",
"dataset:ksponspeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"kr"
] |
TAGS
#espnet #audio #automatic-speech-recognition #kr #dataset-ksponspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR pretrained model
### 'Yushi Ueda/ksponspeech_asr_train_asr_conformer8_n_fft512_hop_length256_raw_kr_bpe2309_valid.URL'
♻️ Imported from URL
This model was trained by Yushi Ueda using ksponspeech/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR pretrained model",
"### 'Yushi Ueda/ksponspeech_asr_train_asr_conformer8_n_fft512_hop_length256_raw_kr_bpe2309_valid.URL'\n️ Imported from URL\n\nThis model was trained by Yushi Ueda using ksponspeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #kr #dataset-ksponspeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR pretrained model",
"### 'Yushi Ueda/ksponspeech_asr_train_asr_conformer8_n_fft512_hop_length256_raw_kr_bpe2309_valid.URL'\n️ Imported from URL\n\nThis model was trained by Yushi Ueda using ksponspeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
null |
espnet
|
## ESPnet2 DIAR pretrained model
### `Yushi Ueda/mini_librispeech_diar_train_diar_raw_max_epoch20_valid.acc.best`
♻️ Imported from https://zenodo.org/record/5264020/
This model was trained by Yushi Ueda using mini_librispeech/diar1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speaker-diarization"], "datasets": ["mini_librispeech"]}
|
espnet/Yushi_Ueda_mini_librispeech_diar_train_diar_raw_max_epoch20_valid.acc.best
| null |
[
"espnet",
"audio",
"speaker-diarization",
"en",
"dataset:mini_librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #speaker-diarization #en #dataset-mini_librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 DIAR pretrained model
### 'Yushi Ueda/mini_librispeech_diar_train_diar_raw_max_epoch20_valid.URL'
♻️ Imported from URL
This model was trained by Yushi Ueda using mini_librispeech/diar1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 DIAR pretrained model",
"### 'Yushi Ueda/mini_librispeech_diar_train_diar_raw_max_epoch20_valid.URL'\n️ Imported from URL\n\nThis model was trained by Yushi Ueda using mini_librispeech/diar1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #speaker-diarization #en #dataset-mini_librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 DIAR pretrained model",
"### 'Yushi Ueda/mini_librispeech_diar_train_diar_raw_max_epoch20_valid.URL'\n️ Imported from URL\n\nThis model was trained by Yushi Ueda using mini_librispeech/diar1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `akreal/espnet2_swbd_da_hubert_conformer`
This model was trained by Pavel Denisov using swbd_da recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 08c6efbc6299c972301236625f9abafe087c9f9c
pip install -e .
cd egs2/swbd_da/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/akreal_swbd_da_hubert_conformer
```
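Note that the `token_list` in the config below consists of dialogue-act labels rather than ordinary words, so decoding yields one act label per (context-augmented) segment, which is why Snt equals Wrd in the result tables. A minimal Python sketch, with the usual caveat that the downloader call and wav path are illustrative:
```python
import soundfile

from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack("espnet/akreal_swbd_da_hubert_conformer"),
    ctc_weight=0.0,  # matches model_conf below: attention-only decoding
)

speech, rate = soundfile.read("segment_with_context.wav")  # 16 kHz, per frontend_conf
label, *_ = speech2text(speech)[0]
print(label)  # e.g. "statement" or "backchannel"
```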
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Thu Jan 20 19:31:21 CET 2022`
- python version: `3.8.12 (default, Aug 30 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.10.1+cu113`
- Git hash: `08c6efbc6299c972301236625f9abafe087c9f9c`
- Commit date: `Tue Jan 4 13:40:33 2022 +0100`
## asr_train_asr_raw_en_word_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.loss.ave/test_context3|2379|2379|66.3|33.7|0.0|0.0|33.7|33.7|
|decode_asr_asr_model_valid.loss.ave/valid_context3|8116|8116|69.5|30.5|0.0|0.0|30.5|30.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.loss.ave/test_context3|2379|19440|76.1|17.7|6.2|8.1|32.0|33.7|
|decode_asr_asr_model_valid.loss.ave/valid_context3|8116|66353|79.5|16.1|4.4|8.0|28.5|30.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_hubert_context3.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_hubert_context3_raw_en_word_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 35
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
keep_nbest_models: 7
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param:
- frontend.upstream
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 4000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_context3_raw_en_word_sp/train/speech_shape
- exp/asr_stats_context3_raw_en_word_sp/train/text_shape.word
valid_shape_file:
- exp/asr_stats_context3_raw_en_word_sp/valid/speech_shape
- exp/asr_stats_context3_raw_en_word_sp/valid/text_shape.word
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_context3_sp/wav.scp
- speech
- sound
- - dump/raw/train_context3_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/valid_context3/wav.scp
- speech
- sound
- - dump/raw/valid_context3/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0001
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- statement
- backchannel
- opinion
- abandon
- agree
- yn_q
- apprec
- 'yes'
- uninterp
- close
- wh_q
- acknowledge
- 'no'
- yn_decl_q
- hedge
- backchannel_q
- sum
- quote
- affirm
- other
- directive
- repeat
- open_q
- completion
- rhet_q
- hold
- reject
- answer
- neg
- ans_dispref
- repeat_q
- open
- or
- commit
- maybe
- decl_q
- third_pty
- self_talk
- thank
- apology
- tag_q
- downplay
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.0
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: hubert_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.5a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["swbd_da"]}
|
espnet/akreal_swbd_da_hubert_conformer
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:swbd_da",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-swbd_da #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'akreal/espnet2\_swbd\_da\_hubert\_conformer'
This model was trained by Pavel Denisov using swbd\_da recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Thu Jan 20 19:31:21 CET 2022'
* python version: '3.8.12 (default, Aug 30 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]'
* espnet version: 'espnet 0.10.6a1'
* pytorch version: 'pytorch 1.10.1+cu113'
* Git hash: '08c6efbc6299c972301236625f9abafe087c9f9c'
+ Commit date: 'Tue Jan 4 13:40:33 2022 +0100'
asr\_train\_asr\_raw\_en\_word\_sp
----------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'akreal/espnet2\\_swbd\\_da\\_hubert\\_conformer'\n\n\nThis model was trained by Pavel Denisov using swbd\\_da recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Thu Jan 20 19:31:21 CET 2022'\n* python version: '3.8.12 (default, Aug 30 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.10.1+cu113'\n* Git hash: '08c6efbc6299c972301236625f9abafe087c9f9c'\n\t+ Commit date: 'Tue Jan 4 13:40:33 2022 +0100'\n\n\nasr\\_train\\_asr\\_raw\\_en\\_word\\_sp\n----------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-swbd_da #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'akreal/espnet2\\_swbd\\_da\\_hubert\\_conformer'\n\n\nThis model was trained by Pavel Denisov using swbd\\_da recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Thu Jan 20 19:31:21 CET 2022'\n* python version: '3.8.12 (default, Aug 30 2021, 00:00:00) [GCC 11.2.1 20210728 (Red Hat 11.2.1-1)]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.10.1+cu113'\n* Git hash: '08c6efbc6299c972301236625f9abafe087c9f9c'\n\t+ Commit date: 'Tue Jan 4 13:40:33 2022 +0100'\n\n\nasr\\_train\\_asr\\_raw\\_en\\_word\\_sp\n----------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
audio-to-audio
|
espnet
|
# ESPnet2 ENH pretrained model
## `anogkongda/librimix_enh_train_raw_valid.si_snr.ave`
♻️ Imported from <https://zenodo.org/record/4480771#.YN70WJozZH4>
This model was trained by anogkongda using librimix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
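A minimal Python sketch of separation with this model: `SeparateSpeech` from `espnet2.bin.enh_inference` together with `ModelDownloader` is the documented espnet_model_zoo pattern for ENH models, though the wav path and I/O details here are assumptions.
```python
import soundfile

from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.enh_inference import SeparateSpeech

d = ModelDownloader()
separate_speech = SeparateSpeech(
    **d.download_and_unpack("anogkongda/librimix_enh_train_raw_valid.si_snr.ave")
)

mixture, rate = soundfile.read("mixture.wav")  # placeholder two-speaker mixture
# Returns one waveform of shape (batch, samples) per separated source
sources = separate_speech(mixture[None, :], fs=rate)
for i, src in enumerate(sources):
    soundfile.write(f"source{i}.wav", src[0], rate)
```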
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Training config
See full config in [`config.yaml`](./config.yaml)
```yaml
config: conf/tuning/train_conformer_fastspeech2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_conformer_fastspeech2_raw_phn_jaconv_pyopenjtalk
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "audio-source-separation", "audio-to-audio"], "datasets": ["librimix"], "inference": false}
|
espnet/anogkongda-librimix_enh_train_raw_valid.si_snr.ave
| null |
[
"espnet",
"audio",
"audio-source-separation",
"audio-to-audio",
"en",
"dataset:librimix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #audio-source-separation #audio-to-audio #en #dataset-librimix #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
# ESPnet2 ENH pretrained model
## 'anogkongda/librimix_enh_train_raw_valid.si_snr.ave'
♻️ Imported from <URL
This model was trained by anogkongda using librimix recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
### Training config
See full config in 'URL'
|
[
"# ESPnet2 ENH pretrained model",
"## 'anogkongda/librimix_enh_train_raw_valid.si_snr.ave'\n\n️ Imported from <URL\nThis model was trained by anogkongda using librimix recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:",
"### Training config\n\nSee full config in 'URL'"
] |
[
"TAGS\n#espnet #audio #audio-source-separation #audio-to-audio #en #dataset-librimix #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"# ESPnet2 ENH pretrained model",
"## 'anogkongda/librimix_enh_train_raw_valid.si_snr.ave'\n\n️ Imported from <URL\nThis model was trained by anogkongda using librimix recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:",
"### Training config\n\nSee full config in 'URL'"
] |
audio-to-audio
|
espnet
|
## Example ESPnet2 ENH model
### `anogkongda/librimix_enh_train_raw_valid.si_snr.ave`
♻️ Imported from https://zenodo.org/record/4480771/
This model was trained by anogkongda using librimix/enh1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
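The checkpoint name records that model selection averaged the best validation SI-SNR epochs. For reference, a minimal NumPy sketch of scale-invariant SNR between an estimated and a reference signal:
```python
import numpy as np

def si_snr(estimate: np.ndarray, reference: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant SNR in dB between 1-D estimate and reference signals."""
    estimate = estimate - estimate.mean()
    reference = reference - reference.mean()
    # Project the estimate onto the reference to obtain the scaled target
    target = np.dot(estimate, reference) / (np.dot(reference, reference) + eps) * reference
    noise = estimate - target
    return 10 * np.log10((np.dot(target, target) + eps) / (np.dot(noise, noise) + eps))
```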
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speech-enhancement", "audio-to-audio"], "datasets": ["librimix"]}
|
espnet/anogkongda_librimix_enh_train_raw_valid.si_snr.ave
| null |
[
"espnet",
"audio",
"speech-enhancement",
"audio-to-audio",
"en",
"dataset:librimix",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #speech-enhancement #audio-to-audio #en #dataset-librimix #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ENH model
### 'anogkongda/librimix_enh_train_raw_valid.si_snr.ave'
♻️ Imported from URL
This model was trained by anogkongda using librimix/enh1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ENH model",
"### 'anogkongda/librimix_enh_train_raw_valid.si_snr.ave'\n️ Imported from URL\n\nThis model was trained by anogkongda using librimix/enh1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #speech-enhancement #audio-to-audio #en #dataset-librimix #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ENH model",
"### 'anogkongda/librimix_enh_train_raw_valid.si_snr.ave'\n️ Imported from URL\n\nThis model was trained by anogkongda using librimix/enh1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
null |
espnet
|
## ESPnet2 ST model
### `espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix`
This model was trained by Brian Yan using iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 77fce65312877a132bbae01917ad26b74f6e2e14
pip install -e .
cd egs2/iwslt22_dialect/st1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix
```
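For direct Python use, a minimal sketch along these lines should work; note that the ST inference class (`espnet2.bin.st_inference.Speech2Text`) and the I/O handling are assumptions, not part of this card's official demo.
```python
import soundfile

from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.st_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack(
        "espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix"
    )
)

speech, rate = soundfile.read("tunisian_utterance.wav")  # placeholder input
translation, *_ = speech2text(speech)[0]
print(translation)  # English translation of the Tunisian Arabic speech
```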
<!-- Generated by scripts/utils/show_st_results.sh -->
# RESULTS
## Environments
- date: `Tue Feb 8 13:29:21 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `77fce65312877a132bbae01917ad26b74f6e2e14`
- Commit date: `Tue Feb 8 10:48:10 2022 -0500`
## st_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe_tc1000_sp
### BLEU
|dataset|bleu_score|verbose_score|
|---|---|---|
|p3_st_model_valid.acc.ave|12.0|37.4/17.3/8.6/4.5 (BP = 0.952 ratio = 0.953 hyp_len = 40192 ref_len = 42181)|
## ST config
<details><summary>expand</summary>
```
config: conf/tuning/transformer_fisherlike_4gpu_bbins16m_fix.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/st_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe_tc1000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 36641
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 3
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/st_stats_raw_bpe1000_sp/train/speech_shape
- exp/st_stats_raw_bpe1000_sp/train/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/train/src_text_shape.bpe
valid_shape_file:
- exp/st_stats_raw_bpe1000_sp/valid/speech_shape
- exp/st_stats_raw_bpe1000_sp/valid/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/valid/src_text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /scratch/iwslt22dump//raw/train_sp/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22dump//raw/train_sp/text.tc.en
- text
- text
- - /scratch/iwslt22dump//raw/train_sp/text.tc.rm.ta
- src_text
- text
valid_data_path_and_name_and_type:
- - /scratch/iwslt22dump//raw/dev/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22dump//raw/dev/text.tc.en
- text
- text
- - /scratch/iwslt22dump//raw/dev/text.tc.rm.ta
- src_text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 12.5
scheduler: noamlr
scheduler_conf:
model_size: 256
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- s
- ▁
- apo
- '&'
- ;
- ▁i
- ▁you
- t
- ▁it
- ▁the
- ▁and
- ▁to
- ▁that
- ▁a
- n
- a
- ▁he
- ▁me
- m
- d
- ▁yes
- ▁she
- ▁no
- ▁in
- ▁what
- ▁for
- ▁we
- ing
- ll
- ▁they
- re
- ▁are
- ▁did
- ▁god
- ▁is
- e
- ed
- ▁so
- ▁her
- ▁do
- ▁have
- ▁of
- ▁with
- ▁go
- ▁know
- ▁not
- ▁was
- ▁on
- ▁don
- y
- ▁him
- ▁one
- ▁like
- ▁there
- '%'
- ▁pw
- ▁be
- ▁at
- ▁told
- ▁good
- ▁will
- ▁my
- ▁all
- ▁or
- c
- er
- p
- ▁how
- ▁ah
- r
- ▁but
- ▁them
- ▁see
- ▁get
- ▁can
- i
- ▁when
- ▁going
- ▁about
- ▁mean
- ▁this
- k
- ▁your
- ▁by
- ▁if
- u
- ▁come
- ▁up
- ▁tell
- g
- ▁said
- ▁then
- ▁now
- ▁yeah
- o
- ▁out
- al
- ra
- ▁because
- ▁time
- ▁well
- ▁would
- ▁p
- ▁from
- h
- ar
- f
- ▁swear
- ▁went
- b
- ▁really
- or
- ▁want
- ri
- ▁home
- ▁work
- ve
- ▁take
- ▁got
- ▁just
- l
- ▁uh
- ▁why
- en
- ▁even
- ▁am
- ▁who
- ▁make
- ▁day
- '-'
- in
- ▁something
- ▁some
- ou
- ▁us
- ▁okay
- ▁where
- ▁does
- ▁has
- ▁thank
- ▁c
- ▁his
- th
- ▁back
- ▁fine
- ▁today
- ly
- ▁b
- ▁oh
- ▁doing
- ▁everything
- ▁here
- le
- ▁thing
- ▁two
- ▁anyway
- li
- ▁had
- ▁still
- ▁say
- ro
- ▁after
- ce
- ▁hello
- ▁ma
- ▁call
- w
- ▁listen
- il
- ▁should
- ▁girl
- ▁f
- z
- ▁too
- ▁let
- ▁understand
- ▁may
- ▁much
- ▁think
- ch
- ir
- ha
- ▁other
- ▁tomorrow
- ▁were
- ▁people
- es
- ▁year
- di
- ba
- ▁right
- el
- ▁things
- ▁house
- v
- ▁actually
- un
- ▁an
- ▁give
- ▁only
- ▁better
- pe
- ▁need
- ▁buy
- ▁de
- ne
- ▁ha
- ur
- ion
- ▁made
- la
- ▁willing
- ▁nothing
- ▁called
- ▁night
- ▁yesterday
- se
- ▁came
- ▁lot
- ter
- ▁g
- po
- ▁find
- ry
- ▁car
- ▁over
- ic
- ▁stay
- ▁eat
- ent
- ▁always
- ▁very
- 'on'
- ▁put
- ▁ramadan
- ▁those
- ▁hear
- is
- ▁talk
- ▁three
- ▁anything
- ▁mo
- ▁little
- ▁been
- ▁already
- fi
- ation
- ke
- ▁first
- ▁look
- it
- ▁won
- ▁mom
- ▁way
- ▁before
- ▁ok
- ▁last
- fa
- ▁cook
- vi
- ▁hi
- ▁same
- ▁thought
- ▁also
- um
- ate
- ▁money
- ▁start
- ▁place
- us
- ▁morning
- ▁could
- ▁ask
- ▁bring
- ▁bit
- ▁lo
- ▁leave
- ▁man
- ▁left
- ine
- ▁days
- ge
- ▁la
- ▁week
- ▁friend
- ▁problem
- ▁sister
- ▁allah
- ▁feel
- ▁every
- ▁more
- fe
- ▁long
- ▁hundred
- ▁j
- ▁eh
- ho
- ca
- em
- ▁talking
- ▁exam
- ▁next
- ▁new
- ▁fun
- ▁took
- ▁alright
- co
- ▁w
- ▁um
- ▁eid
- ▁brother
- ▁our
- gh
- ow
- ▁o
- ▁four
- ni
- wa
- ▁else
- ▁finish
- bo
- ▁sleep
- ▁bless
- ▁dear
- ▁since
- ▁play
- ▁name
- hi
- ▁coming
- ▁many
- et
- ▁usual
- ▁con
- ▁maybe
- ▁off
- bi
- ▁than
- ▁any
- ▁mother
- ▁son
- om
- ▁their
- ▁keep
- ▁dinner
- ▁ten
- ▁half
- ▁help
- ▁bad
- and
- ▁pass
- ▁hot
- ▁guy
- ▁least
- ▁down
- ▁bought
- ▁dinars
- ▁working
- ▁around
- ▁normal
- ▁poor
- ▁stuff
- ▁hope
- ▁used
- ▁again
- ▁bro
- ul
- ▁phone
- ▁ex
- ▁done
- ▁six
- ▁na
- ▁month
- ▁tired
- ▁check
- ▁show
- ▁together
- oo
- ▁later
- ▁past
- ▁five
- ▁watch
- ya
- ▁coffee
- ment
- ut
- ▁plan
- ▁great
- ▁daughter
- j
- ▁another
- side
- ▁change
- ▁yet
- ting
- ▁until
- ▁honestly
- ▁whole
- ol
- ▁care
- ▁sure
- able
- id
- ▁big
- ▁spend
- ▁exactly
- ▁boy
- ▁course
- ▁end
- ▁please
- ▁started
- he
- up
- ▁found
- ▁saw
- ▁family
- ▁asked
- ▁enough
- ▁during
- ▁rest
- ▁which
- ▁gave
- ▁true
- ▁while
- ▁job
- ▁el
- ▁each
- ▁away
- ▁kids
- ▁goes
- less
- ▁twenty
- ▁eight
- ▁someone
- ▁cha
- ▁clothes
- ah
- ▁myself
- ▁nice
- ▁late
- ▁old
- ▁real
- age
- ant
- ▁fast
- ▁add
- ▁hard
- ▁these
- ful
- im
- ▁close
- ive
- ▁dad
- ▁pay
- ies
- ▁dude
- ▁alone
- ▁far
- ance
- ▁dis
- ▁seven
- ▁isn
- ▁pro
- our
- ▁thousand
- ▁break
- ▁hour
- ▁wait
- ▁brought
- ▁open
- ▁un
- ▁wedding
- ▁walk
- ▁father
- ▁ka
- ▁second
- x
- ▁saturday
- ▁salad
- ▁win
- ▁everyone
- ▁water
- ▁tunis
- ▁remember
- ity
- ▁wake
- ▁minute
- ▁school
- ▁sunday
- ▁own
- ▁shop
- ▁cold
- ▁meet
- ▁wear
- ever
- ▁send
- ▁early
- ▁gra
- tic
- ▁short
- ▁use
- ▁sometimes
- hou
- ▁love
- ▁prepare
- ▁sea
- ▁study
- ure
- ▁com
- qui
- ▁hand
- ▁both
- ja
- ▁summer
- ▁wrong
- ▁wanted
- che
- ▁miss
- ▁try
- ▁iftar
- ▁yourself
- q
- ▁live
- war
- ▁expensive
- ▁getting
- ▁waiting
- ▁once
- ▁kh
- ▁forgot
- ▁nine
- ▁anymore
- ▁soup
- ▁uncle
- ▁beach
- ▁saying
- ▁into
- ▁having
- ▁brik
- ▁room
- ▁food
- ▁visit
- ▁matter
- ▁thirty
- ▁taking
- ▁rain
- ▁aunt
- ▁never
- ▁pick
- ▁tunisia
- ▁health
- ▁head
- ▁cut
- ▁fasting
- ▁sick
- ▁friday
- ▁forget
- ▁monday
- ▁become
- ▁dress
- ated
- ▁most
- wi
- ▁hang
- ▁life
- ▁fish
- ▁happy
- ▁delicious
- ▁deal
- ▁finished
- ble
- ▁studying
- ▁weather
- ▁making
- ▁cost
- ▁bl
- ▁stayed
- ▁guess
- ▁teach
- ▁stop
- ▁near
- ▁watching
- ▁without
- ▁imagine
- ▁seriously
- fl
- ▁speak
- ▁idea
- ▁must
- ▁normally
- ▁turn
- ize
- ▁clean
- ▁tv
- ▁meat
- ▁woke
- ▁example
- ▁easy
- ▁sent
- ▁sell
- over
- ▁fifty
- ▁amazing
- ▁beautiful
- ▁whatever
- ▁enjoy
- ▁talked
- ▁believe
- ▁thinking
- ▁count
- ▁almost
- ▁longer
- ▁afternoon
- ▁hair
- ▁front
- ▁earlier
- ▁mind
- ▁kind
- ▁tea
- ▁best
- ▁rent
- ▁picture
- ▁cooked
- ▁price
- ight
- ▁soon
- ▁woman
- ▁otherwise
- ▁happened
- ▁story
- ▁luck
- ▁high
- ▁happen
- ▁arrive
- ▁paper
- ga
- ▁quickly
- ▁looking
- ub
- ▁number
- ▁staying
- ▁sit
- man
- ack
- ▁important
- ▁either
- ▁person
- ▁small
- ▁free
- ▁crazy
- ▁playing
- ▁kept
- ▁part
- ▁game
- law
- ▁till
- uck
- ▁ready
- ▁might
- ▁gone
- ▁full
- ▁fix
- ▁subject
- ▁laugh
- ▁doctor
- ▁welcome
- ▁eleven
- ▁sleeping
- ▁heat
- ▁probably
- ▁such
- ▁café
- ▁fat
- ▁sweet
- ▁married
- ▁drink
- ▁move
- ▁outside
- ▁especially
- ▁group
- ji
- ▁market
- ▁through
- ▁train
- ▁protect
- ▁turned
- ▁red
- ▁busy
- ▁light
- ▁noise
- ▁street
- ▁manage
- ▁piece
- ▁sitting
- gue
- ▁sake
- ▁party
- ish
- ▁young
- ▁case
- ▁cool
- huh
- ▁marwa
- ▁drive
- ▁pray
- clock
- ▁couscous
- ▁spent
- ▁felt
- ▁hopefully
- ▁everybody
- ▁living
- ▁pain
- line
- ▁between
- ▁match
- ▁prayer
- que
- ian
- ▁facebook
- ▁spi
- ▁eye
- ▁children
- ▁tonight
- ▁mohamed
- ▁understood
- ▁black
- ▁husband
- ▁rid
- ▁kitchen
- ▁face
- ▁swim
- ▁kid
- ▁invite
- ▁cup
- ▁grilled
- ▁wife
- ▁cousin
- ▁drop
- ▁wow
- ▁table
- ▁du
- ▁bored
- ▁neighborhood
- ▁agree
- ▁bread
- ▁hamma
- ▁straight
- ▁tuesday
- ▁anyone
- ▁lunch
- ade
- ▁himself
- ▁gather
- ▁wish
- ▁fifteen
- ▁wednesday
- ▁die
- ▁thursday
- ▁color
- ▁asleep
- ▁different
- ▁whether
- ▁ago
- ▁middle
- ▁class
- ▁cake
- shirt
- ▁fight
- ▁clear
- ▁test
- ▁plus
- ▁sousse
- ▁beginning
- ▁result
- ▁learn
- ▁crowded
- ▁slept
- ▁shoes
- ▁august
- ▁pretty
- ▁white
- ▁apparently
- ▁reach
- ▁mariem
- ▁return
- ▁road
- ▁million
- ▁stand
- ▁paid
- ▁word
- ious
- ▁few
- ▁breakfast
- ▁post
- ▁kilo
- ▁chicken
- ▁grade
- ▁read
- ▁accept
- ▁birthday
- ▁exhaust
- ▁point
- ▁july
- ▁patience
- ▁studies
- ▁trouble
- ▁along
- ▁worry
- ▁follow
- ▁hurt
- ▁afraid
- ▁trip
- ▁ahmed
- ▁remain
- ▁succeed
- ▁mercy
- ▁difficult
- ▁weekend
- ▁answer
- ▁cheap
- ▁repeat
- ▁auntie
- ▁sign
- ▁hold
- ▁under
- ▁olive
- ▁mahdi
- ▁sfax
- ▁annoy
- ▁dishes
- ▁message
- ▁business
- ▁french
- ▁serious
- ▁travel
- ▁office
- ▁wonder
- ▁student
- ▁internship
- ▁pepper
- ▁knew
- ▁kill
- ▁sauce
- ▁herself
- ▁hammamet
- ▁damn
- ▁mix
- ▁suit
- ▁medicine
- ▁remove
- ▁gonna
- ▁company
- ▁quarter
- ▁shopping
- ▁correct
- ▁throw
- ▁grow
- ▁voice
- ▁series
- gotten
- ▁taste
- ▁driving
- ▁hospital
- ▁sorry
- ▁aziz
- ▁milk
- ▁green
- ▁baccalaureate
- ▁running
- ▁lord
- ▁explain
- ▁angry
- ▁build
- ▁fruit
- ▁photo
- é
- ▁crying
- ▁baby
- ▁store
- ▁project
- ▁france
- ▁twelve
- ▁decide
- ▁swimming
- ▁world
- ▁preparing
- ▁special
- ▁session
- ▁behind
- ▁vegetable
- ▁strong
- ▁fatma
- ▁treat
- ▁cream
- ▁situation
- ▁settle
- ▁totally
- ▁stopped
- ▁book
- ▁honest
- ▁solution
- ▁vacation
- ▁cheese
- ▁ahead
- ▁sami
- ▁focus
- ▁scared
- ▁club
- ▁consider
- ▁final
- ▁naturally
- ▁barely
- ▁issue
- ▁floor
- ▁birth
- ▁almighty
- ▁engagement
- ▁blue
- ▁empty
- ▁soccer
- ▁prophet
- ▁ticket
- ▁indeed
- ▁write
- ▁present
- ▁patient
- ▁available
- ▁holiday
- ▁leaving
- ▁became
- ▁reason
- ▁apart
- ▁impossible
- ▁shame
- ▁worried
- ▁body
- ▁continue
- ▁program
- ▁stress
- ▁arabic
- ▁round
- ▁taxi
- ▁transport
- ▁third
- ▁certain
- ▁downstairs
- ▁neighbor
- ▁directly
- ▁giving
- ▁june
- ▁mini
- ▁upstairs
- ▁mistake
- ▁period
- ▁catch
- ▁buddy
- ▁success
- ▁tajine
- ▁excuse
- ▁organize
- ▁question
- ▁suffer
- ▁remind
- ▁university
- ▁downtown
- ▁sugar
- ▁twice
- ▁women
- ▁couple
- ▁everyday
- ▁condition
- ▁obvious
- ▁nobody
- ▁complete
- ▁stomach
- ▁account
- ▁september
- ▁choose
- ▁bottle
- ▁figure
- ▁instead
- ▁salary
- '0'
- '1'
- '3'
- '2'
- '5'
- '7'
- '4'
- '9'
- '8'
- /
- °
- '6'
- è
- $
- ï
- <sos/eos>
src_token_list:
- <blank>
- <unk>
- ّ
- ي
- ا
- ِ
- ل
- َ
- و
- ه
- ة
- م
- ر
- ك
- ▁ما
- ُ
- ب
- ش
- د
- ت
- ▁في
- َّ
- ▁ن
- ▁ي
- ▁ت
- ن
- ▁لا
- ح
- ▁ه
- س
- وا
- ▁م
- ف
- ▁إي
- ع
- ▁ب
- ها
- ط
- ى
- ق
- ▁الل
- ▁أ
- ج
- ▁والل
- ▁و
- ▁إيه
- ▁ا
- ▁يا
- ز
- ▁تو
- ▁بش
- ص
- ▁أه
- خ
- ات
- ▁إنت
- ▁أنا
- نا
- ▁شن
- ▁ق
- ▁ش
- ▁ك
- يت
- ين
- ▁ف
- ار
- ▁قال
- ▁باهي
- ▁ع
- ▁من
- ▁ل
- ▁مش
- ▁كان
- ▁حت
- ▁ول
- هم
- ▁ر
- ان
- ▁س
- ض
- ني
- ▁بال
- ▁على
- ▁متاع
- ▁كي
- ▁ال
- ▁ح
- ▁كل
- ▁آنا
- ▁الم
- ▁خ
- ▁الس
- ▁وال
- ون
- ور
- ▁أم
- ▁هك
- ▁آش
- ▁الد
- ▁عاد
- ▁ج
- ▁معناها
- ▁مع
- اش
- ▁الص
- ▁نهار
- ▁لل
- لها
- ▁تي
- ▁رب
- ▁خاطر
- ▁أكهو
- غ
- ▁شي
- الل
- ام
- تها
- ▁ون
- ▁آك
- ▁فهمت
- وم
- ▁موش
- مشي
- ▁ص
- ▁اليوم
- ▁مر
- ست
- ▁الب
- ▁لاباس
- تلي
- ▁الكل
- ▁عال
- ذ
- ▁فم
- ▁الك
- ▁حاجة
- ▁شوي
- اكا
- ▁ياخي
- ▁هاني
- ▁صح
- اس
- ▁آه
- ▁برشة
- ▁الن
- ▁وت
- ▁الج
- لك
- ▁راهو
- سم
- ▁الح
- مت
- ▁الت
- ▁بعد
- اج
- عد
- ▁انشا
- وش
- لت
- ▁وين
- ث
- ▁ولا
- ▁باش
- ▁فيها
- نت
- ▁إ
- ▁الأ
- ▁الف
- ▁إم
- ▁واحد
- ▁ألو
- ▁عندي
- ▁أك
- ▁خل
- ▁وي
- ▁تعمل
- أ
- ▁ريت
- ▁وأ
- ▁تعرف
- بت
- ▁الع
- ▁مشيت
- ▁وه
- ▁حاصيلو
- ▁بالل
- ▁نعمل
- ▁غ
- ▁تجي
- ▁يجي
- ▁كيفاش
- ▁عملت
- ظ
- اك
- ▁هاو
- ▁اش
- ▁قد
- ▁نق
- ▁د
- ▁زادا
- ▁فيه
- رة
- ▁بر
- ▁الش
- ▁ز
- ▁كيما
- ▁الا
- ند
- عم
- ▁نح
- ▁بنتي
- ▁نمشي
- ▁عليك
- ▁نعرفش
- ▁كهو
- ▁وم
- ▁ط
- تي
- ▁خير
- ▁آ
- مش
- ▁عليه
- له
- حت
- ▁إيا
- ▁أحنا
- ▁تع
- الا
- عب
- ▁ديما
- ▁تت
- ▁جو
- ▁مالا
- ▁أو
- ▁قلتلك
- ▁معنتها
- لنا
- ▁شكون
- ▁تحب
- بر
- ▁الر
- ▁وا
- ▁الق
- اء
- ▁عل
- ▁البارح
- ▁وخ
- ▁سافا
- ▁هوما
- ▁ولدي
- ▁
- ▁نعرف
- يف
- رت
- ▁وب
- ▁روح
- ▁علاش
- ▁هاذاك
- ▁رو
- وس
- ▁جا
- ▁كيف
- طر
- ▁غادي
- يكا
- عمل
- ▁نحب
- ▁عندك
- ▁وما
- ▁فر
- اني
- ▁قلتله
- ▁الط
- فر
- ▁دار
- ▁عليها
- ▁يعمل
- ▁نت
- ▁تح
- باح
- ▁ماهو
- ▁وكل
- ▁وع
- قت
- ▁فهمتك
- عر
- ▁وس
- ▁تر
- ▁سي
- يلة
- ▁قلت
- ▁رمضان
- صل
- ▁آما
- ▁الواحد
- ▁بيه
- ▁ثلاثة
- ▁فهمتني
- ▁ها
- بط
- ▁مازال
- قل
- ▁بالك
- ▁معناتها
- ▁ور
- ▁قلتلها
- ▁يس
- رب
- ▁ام
- ▁وبعد
- ▁الث
- ▁وإنت
- ▁بحذا
- ▁لازم
- ْ
- ▁بن
- قرا
- سك
- ▁يت
- خل
- ▁فه
- عت
- ▁هاك
- ▁تق
- ▁قبل
- ▁وك
- ▁نقول
- ▁الز
- حم
- ▁عادش
- حكي
- وها
- بة
- نس
- طل
- ▁علاه
- ذا
- ▁سا
- ▁طل
- الي
- ▁يق
- ▁دو
- حوا
- حد
- ▁نشوف
- نة
- ▁لي
- ▁تك
- ▁نا
- ▁هاذ
- ▁خويا
- ▁المر
- ▁وينك
- ▁البر
- ▁أتو
- ينا
- ▁حل
- ولي
- ▁ثم
- ▁عم
- ▁آي
- ▁قر
- از
- ▁وح
- كش
- بعة
- ▁كيفاه
- ▁نع
- ▁الحمدلله
- ▁ياسر
- ▁الخ
- ▁معاك
- ▁معاه
- ▁تقول
- دة
- ▁حكاية
- تش
- ▁حس
- ▁غدوا
- ▁بالحق
- روا
- وز
- ▁تخ
- ▁العيد
- رجع
- ▁بالي
- ▁جات
- ▁وج
- حة
- ▁وش
- ▁آخر
- ▁طا
- ▁مت
- لقا
- تك
- ▁مس
- ▁راني
- كون
- ▁صاحب
- ▁هاكا
- ▁قول
- ▁عر
- ▁عنده
- ▁يلزم
- ▁هاذا
- ▁يخ
- ▁وقتاش
- ▁وقت
- بع
- ▁العش
- ▁هاذي
- هاش
- ينة
- ▁هاذاكا
- عطي
- ▁تنج
- ▁باهية
- نيا
- فت
- ▁يحب
- ▁تف
- ▁أهلا
- وف
- ▁غدوة
- ▁بيك
- ▁بد
- عن
- ▁در
- ▁ننج
- هار
- ▁الحكاية
- مون
- وق
- ▁نورمال
- ▁عندها
- خر
- ▁بو
- ▁حب
- ▁آكا
- ▁وف
- ▁هاذيكا
- ▁ديجا
- ▁وق
- ▁طي
- لتل
- بعث
- ▁تص
- رك
- ▁مانيش
- ▁العادة
- ▁شوف
- ضر
- ▁يمشي
- ▁نعملوا
- ▁عرفت
- ▁زال
- ▁متع
- ▁عمل
- ▁بيها
- ▁نحكي
- اع
- ▁نج
- معة
- ▁والكل
- عناها
- ▁يعي
- ▁نجي
- ستن
- ▁هاذيك
- ▁عام
- ▁فلوس
- قة
- تين
- ▁بالقدا
- لهم
- ▁تخدم
- ▁ٱ
- ▁شيء
- ▁راهي
- ▁جاب
- ولاد
- ابل
- ▁ماك
- عة
- ▁نمشيوا
- وني
- شري
- بار
- انس
- ▁وقتها
- ▁جديد
- ▁يز
- ▁كر
- ▁حاسيلو
- ▁شق
- ▁اه
- ▁سايي
- ▁انشالل
- رج
- مني
- ▁بلا
- ▁صحيح
- ▁غير
- ▁يخدم
- مان
- وكا
- ▁عند
- ▁قاعدة
- ▁تس
- ربة
- ▁راس
- ▁حط
- ▁نكل
- تني
- ▁الو
- سيون
- ▁عندنا
- ▁لو
- ▁ست
- صف
- ▁ض
- ▁كامل
- ▁نخدم
- ▁يبدا
- ▁دونك
- ▁أمور
- رات
- ▁تونس
- بدا
- ▁تحكي
- ▁سو
- ▁جاي
- ▁وحدة
- ▁ساعة
- حنا
- ▁بكري
- ▁إل
- ▁وبر
- ▁كم
- ▁تبدا
- ارة
- ادي
- رق
- لوا
- ▁يمكن
- ▁خاط
- ▁وص
- جين
- ▁هاذاي
- ▁هز
- قد
- ▁قل
- ▁وكهو
- ▁نص
- ▁دي
- لقى
- ▁وأنا
- سين
- ▁يح
- ▁ماشي
- ▁شو
- ▁خذيت
- امات
- ▁كنت
- خرج
- ▁لقيت
- رتاح
- كس
- ▁حاجات
- ▁مريق
- ▁مل
- ليفون
- اوا
- ▁شفت
- ▁عاملة
- ▁تن
- ▁والا
- سأل
- ▁حد
- ▁قاللك
- ▁العباد
- ▁عالاخ
- ▁وآك
- ▁ماني
- ▁ناخذ
- ▁حم
- ▁الإ
- ▁ماضي
- ▁ث
- الة
- ▁أخرى
- رين
- ▁تشوف
- ▁نخرج
- ▁أربعة
- ▁ألف
- نيش
- ▁هاي
- آ
- ▁فيك
- رشة
- ولة
- فلة
- ▁بابا
- ▁أما
- ▁روحي
- ▁فيهم
- ▁رج
- ▁ليك
- ونس
- يرة
- ▁وأكهو
- ندي
- ▁صار
- شك
- ▁نرو
- ▁آكهو
- ▁تش
- ▁غاديكا
- ▁معاها
- ▁لب
- ▁أذاكا
- ▁آني
- ▁يوم
- عملوا
- ▁نقعد
- دوا
- ▁عد
- سمع
- متني
- ▁الخدمة
- ▁مازلت
- ▁قعدت
- ايا
- ▁برك
- قعد
- ▁خرجت
- ضح
- ▁قالل
- ▁يقول
- ▁وفي
- ▁حق
- ختي
- ▁يعني
- خدم
- ▁جيت
- ▁نرمال
- طف
- ▁عجب
- ▁تقعد
- ▁مشينا
- اية
- ▁خدمة
- لدي
- روف
- ▁الفطر
- ▁مشكل
- ▁سل
- ▁وآنا
- الط
- ▁بالس
- ▁هانا
- ▁أوه
- ▁أذيكا
- ▁وإ
- ▁عليهم
- ▁حالة
- جت
- قضي
- ▁لق
- ▁ونصف
- سعة
- عطيه
- عاو
- خانة
- ▁مخ
- ▁شبيك
- بيعة
- ▁أهوك
- يني
- ▁تعد
- ▁خال
- ▁قريب
- ▁راك
- ▁قالت
- ▁لتو
- ▁أكثر
- اعة
- ▁يظهرلي
- ▁ماشية
- سمعني
- ▁نسيت
- ▁ينج
- ▁الحمدلل
- هدي
- ▁وشن
- ▁تطي
- ▁هنا
- ▁نسمع
- ▁إنتوما
- ▁نحكيلك
- ▁قاعد
- ▁اسمعني
- خرين
- إ
- ماعة
- ▁بالر
- ▁دا
- ▁عمر
- ▁نشري
- ▁قهوة
- ▁تبارك
- ▁صب
- ▁مشات
- غر
- ▁شريت
- ▁عامل
- ▁زوج
- ثنين
- ▁برب
- ريق
- ▁نكم
- ▁لم
- بيب
- ▁مياة
- ▁مالل
- ▁قعد
- ▁سخون
- قس
- ▁وحده
- ▁اسمع
- ▁خمسة
- ▁غالي
- ▁الأو
- رلي
- ▁العظيم
- ▁ترو
- تهم
- كري
- ▁نجيب
- ▁جملة
- قول
- ▁قلتلي
- ▁إيجا
- ▁يقعد
- ▁إيام
- ▁يعطيك
- ▁نخل
- ▁دب
- يمة
- رهبة
- ▁نهز
- ▁محم
- ▁بين
- غار
- ▁نحنا
- ▁بون
- ▁الغ
- ▁شهر
- ▁بار
- رقة
- ▁نطي
- ئ
- ترو
- ▁ملا
- ▁الكرهبة
- ▁باه
- ▁عالإخ
- ▁عباد
- ▁بلاصة
- ▁مشى
- بيع
- ▁نفس
- ▁عملنا
- ▁واح
- ▁أحلاه
- ▁بحذاك
- ▁لأ
- ▁دخ
- باب
- ▁ودر
- ▁غالب
- ▁ناكل
- ▁مثلا
- ء
- ▁راقد
- ▁تفر
- ▁الوقت
- ▁تاخذ
- حذا
- نتر
- ▁نبدا
- ▁حال
- ▁مريم
- الم
- ▁جمعة
- رجول
- ▁معايا
- ▁تخرج
- ▁باس
- ▁ساعات
- ▁عندهم
- ▁نتفر
- مسة
- ▁الجمعة
- بعين
- ▁أكاهو
- ▁ميش
- مراة
- ▁خذا
- ▁ظ
- ▁سيدي
- ▁معاي
- ▁شبيه
- ▁حكا
- ▁سف
- ▁بعضنا
- ▁بالض
- ▁ليلة
- ▁زعما
- ▁الحق
- مضان
- ▁صعيب
- ▁قالتلك
- ً
- ملة
- ▁بق
- عرف
- لاطة
- ▁خرج
- ▁أخت
- ▁تقوللي
- ▁معانا
- ▁صغير
- ▁إسمه
- ▁بعض
- ▁العام
- ▁علينا
- ▁يتع
- ▁فاش
- ▁شع
- ▁معاهم
- ▁يسالش
- ▁لهنا
- ▁سمعت
- ▁البار
- ▁نتصو
- ▁الاخ
- ▁وكان
- وبة
- دمة
- ▁كون
- ▁مبعد
- ▁تسمع
- ▁بعيد
- ▁تاكل
- ▁نلقا
- لامة
- لاثة
- ▁ذ
- ▁تحس
- ▁الواح
- ▁لدار
- ▁فاتت
- ▁تاو
- ▁أحوالك
- ▁عاملين
- ▁كبيرة
- عجب
- ▁بنت
- ▁بيدي
- ▁حكيت
- ▁تحط
- ▁مسكينة
- ▁هاذوكم
- ▁نزيد
- لاث
- ▁عشرة
- ▁عيني
- ▁تعب
- ▁ياكل
- ▁وزيد
- ▁طول
- ▁حمدلله
- ▁وقتاه
- ▁معناه
- ▁وآش
- ▁ووه
- ▁وواحد
- ▁نشوفوا
- ▁عيد
- ▁بصراحة
- ▁بحذانا
- ▁قاعدين
- ▁راجل
- ▁وحدي
- ▁وعشرين
- ▁لين
- ▁خايب
- ▁قالتله
- ▁تهز
- عيد
- ▁كبير
- ▁يعرف
- ▁عارف
- ▁الفلوس
- ▁زايد
- ▁خدمت
- ▁هاذوما
- ▁سلاطة
- ▁فارغة
- ▁ساعتين
- ▁تبد
- ▁راو
- ▁مائة
- ▁بعضهم
- ▁ظاهرلي
- ▁الفازة
- كتب
- ▁القهوة
- سبوك
- ▁زاد
- ▁ضرب
- حكيلي
- ▁فوق
- ▁عاود
- ▁راي
- ▁ومبعد
- ▁حوايج
- ▁دخلت
- ▁يقوللك
- ▁زيد
- ▁زلت
- لفزة
- ▁وقال
- ▁يهب
- ▁يلزمني
- ▁الحمد
- ▁أذي
- طبيعت
- ▁دورة
- ▁عالأقل
- ▁آذاك
- ▁وبال
- ▁الجاي
- عطيني
- ▁ياخذ
- ▁احكيلي
- ▁نهبط
- ▁رقدت
- بلاصة
- ▁عزيز
- ▁صغار
- ▁أقسم
- ▁جيب
- ▁وصلت
- ▁أحوال
- ▁جيست
- ▁جماعة
- سئل
- ▁خوذ
- ▁يهز
- ▁الأخرى
- ▁آلاف
- ▁إسمع
- ▁الحقيقة
- ▁ناقص
- ▁حاط
- ▁موجود
- عباد
- ▁آذيك
- ▁خارج
- ▁الخير
- ▁البنات
- بقى
- ▁طرف
- ▁سينون
- ▁ماذاب
- ▁البحر
- ▁نرقد
- مدلله
- ▁إيجى
- ▁خالتي
- ▁فازة
- ▁بريك
- ▁شريبتك
- ▁تطلع
- ؤ
- ▁المشكلة
- ▁طري
- ▁مادام
- ▁طلبت
- ▁يلعب
- ▁نعاود
- ▁وحدك
- ▁ظاهر
- ٱ
- ژ
- ٍ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
asr_weight: 0.3
mt_weight: 0.0
mtlalpha: 1.0
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
src_token_type: bpe
bpemodel: data/token_list/tgt_bpe_unigram1000/bpe.model
src_bpemodel: data/token_list/src_bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/st_stats_raw_bpe1000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
extra_asr_decoder: transformer
extra_asr_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
extra_mt_decoder: transformer
extra_mt_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- src_token_list
- token_list
version: 0.10.6a1
distributed: true
```
</details>
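For reference, the `noamlr` scheduler in the config above warms the learning rate up for `warmup_steps` steps and then decays it with the inverse square root of the step count. The sketch below is an assumption based on the standard Noam formula from the Transformer paper, with the configured `lr: 12.5` acting as a scale factor; consult ESPnet's `NoamLR` for the authoritative implementation.

```python
# Sketch of the noamlr schedule implied by the config above
# (standard Noam formula; lr is assumed to act as a scale factor).
def noam_lr(step: int, lr: float = 12.5, model_size: int = 256,
            warmup_steps: int = 25000) -> float:
    """Effective learning rate at optimizer step `step` (step >= 1)."""
    return lr * model_size ** -0.5 * min(step ** -0.5,
                                         step * warmup_steps ** -1.5)

for step in (1, 5000, 25000, 100000):
    print(step, f"{noam_lr(step):.6f}")
```

The rate peaks at the end of warmup (step 25000 here) and decays thereafter.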
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speech-translation"], "datasets": ["iwslt22_dialect"]}
|
espnet/brianyan918_iwslt22_dialect_st_transformer_fisherlike_4gpu_bbins16m_fix
| null |
[
"espnet",
"audio",
"speech-translation",
"dataset:iwslt22_dialect",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #speech-translation #dataset-iwslt22_dialect #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ST model
----------------
### 'espnet/brianyan918\_iwslt22\_dialect\_st\_transformer\_fisherlike\_4gpu\_bbins16m\_fix'
This model was trained by Brian Yan using iwslt22\_dialect recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Tue Feb 8 13:29:21 EST 2022'
* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.7a1'
* pytorch version: 'pytorch 1.8.1'
* Git hash: '77fce65312877a132bbae01917ad26b74f6e2e14'
+ Commit date: 'Tue Feb 8 10:48:10 2022 -0500'
st\_transformer\_fisherlike\_4gpu\_bbins16m\_fix\_raw\_bpe\_tc1000\_sp
----------------------------------------------------------------------
### BLEU
dataset: p3\_st\_model\_valid.URL, bleu\_score: 12.0, verbose\_score: 37.4/17.3/8.6/4.5 (BP = 0.952 ratio = 0.953 hyp\_len = 40192 ref\_len = 42181)
ST config
---------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/brianyan918\\_iwslt22\\_dialect\\_st\\_transformer\\_fisherlike\\_4gpu\\_bbins16m\\_fix'\n\n\nThis model was trained by Brian Yan using iwslt22\\_dialect recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Feb 8 13:29:21 EST 2022'\n* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.8.1'\n* Git hash: '77fce65312877a132bbae01917ad26b74f6e2e14'\n\t+ Commit date: 'Tue Feb 8 10:48:10 2022 -0500'\n\n\nst\\_transformer\\_fisherlike\\_4gpu\\_bbins16m\\_fix\\_raw\\_bpe\\_tc1000\\_sp\n----------------------------------------------------------------------",
"### BLEU\n\n\ndataset: p3\\_st\\_model\\_valid.URL, bleu\\_score: 12.0, verbose\\_score: 37.4/17.3/8.6/4.5 (BP = 0.952 ratio = 0.953 hyp\\_len = 40192 ref\\_len = 42181)\n\n\nST config\n---------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #speech-translation #dataset-iwslt22_dialect #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/brianyan918\\_iwslt22\\_dialect\\_st\\_transformer\\_fisherlike\\_4gpu\\_bbins16m\\_fix'\n\n\nThis model was trained by Brian Yan using iwslt22\\_dialect recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Feb 8 13:29:21 EST 2022'\n* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.8.1'\n* Git hash: '77fce65312877a132bbae01917ad26b74f6e2e14'\n\t+ Commit date: 'Tue Feb 8 10:48:10 2022 -0500'\n\n\nst\\_transformer\\_fisherlike\\_4gpu\\_bbins16m\\_fix\\_raw\\_bpe\\_tc1000\\_sp\n----------------------------------------------------------------------",
"### BLEU\n\n\ndataset: p3\\_st\\_model\\_valid.URL, bleu\\_score: 12.0, verbose\\_score: 37.4/17.3/8.6/4.5 (BP = 0.952 ratio = 0.953 hyp\\_len = 40192 ref\\_len = 42181)\n\n\nST config\n---------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug`
This model was trained by Brian Yan using the iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 77fce65312877a132bbae01917ad26b74f6e2e14
pip install -e .
cd egs2/iwslt22_dialect/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug
```
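The same model can also be tried directly from Python. This is a minimal sketch following the usual `espnet_model_zoo` inference pattern; it assumes `espnet_model_zoo` and `soundfile` are installed, and `sample.wav` is a placeholder for a 16 kHz mono recording (not a file from this repo).

```python
# Minimal ASR inference sketch (espnet_model_zoo pattern; "sample.wav"
# is a placeholder 16 kHz mono recording, not a file from this repo).
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack(
        "espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug"
    )
)

speech, rate = soundfile.read("sample.wav")  # rate must match the 16 kHz training data
nbests = speech2text(speech)
text, tokens, token_ids, hypothesis = nbests[0]  # best hypothesis first
print(text)
```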
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Feb 2 05:32:30 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `99581e0f5af3ad68851d556645e7292771436df9`
- Commit date: `Sat Jan 29 11:32:38 2022 -0500`
## asr_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe1000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/test1|4204|27370|54.7|39.5|5.8|8.8|54.2|87.9|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/test1|4204|145852|84.1|7.1|8.8|11.5|27.4|87.9|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/test1|4204|64424|63.8|22.8|13.4|12.2|48.3|87.9|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe1000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 55101
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 80
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 25000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe1000_sp/train/speech_shape
- exp/asr_stats_raw_bpe1000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe1000_sp/valid/speech_shape
- exp/asr_stats_raw_bpe1000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /scratch/iwslt22asrdump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22asrdump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - /scratch/iwslt22asrdump/raw/dev/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22asrdump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- ّ
- ي
- ا
- ِ
- ل
- َ
- و
- ه
- ة
- م
- ر
- ك
- ▁ما
- ُ
- ب
- ش
- د
- ت
- ▁في
- َّ
- ▁ن
- ▁ي
- ▁ت
- ن
- ▁لا
- ح
- ▁ه
- س
- وا
- ▁م
- ف
- ▁إي
- ع
- ▁ب
- ها
- ط
- ى
- ق
- ▁الل
- ▁أ
- ج
- ▁والل
- ▁و
- ▁إيه
- ▁ا
- ▁يا
- ز
- ▁تو
- ▁بش
- ص
- ▁أه
- خ
- ات
- ▁إنت
- ▁أنا
- نا
- ▁شن
- ▁ق
- ▁ش
- ▁ك
- يت
- ين
- ▁ف
- ار
- ▁قال
- ▁باهي
- ▁ع
- ▁من
- ▁ل
- ▁مش
- ▁كان
- ▁حت
- ▁ول
- هم
- ▁ر
- ان
- ▁س
- ض
- ني
- ▁بال
- ▁على
- ▁متاع
- ▁كي
- ▁ال
- ▁ح
- ▁كل
- ▁آنا
- ▁الم
- ▁خ
- ▁الس
- ▁وال
- ون
- ور
- ▁أم
- ▁هك
- ▁آش
- ▁الد
- ▁عاد
- ▁ج
- ▁معناها
- ▁مع
- اش
- ▁الص
- ▁نهار
- ▁لل
- لها
- ▁تي
- ▁رب
- ▁خاطر
- ▁أكهو
- غ
- ▁شي
- الل
- ام
- تها
- ▁ون
- ▁آك
- ▁فهمت
- وم
- ▁موش
- مشي
- ▁ص
- ▁اليوم
- ▁مر
- ست
- ▁الب
- ▁لاباس
- تلي
- ▁الكل
- ▁عال
- ذ
- ▁فم
- ▁الك
- ▁حاجة
- ▁شوي
- اكا
- ▁ياخي
- ▁هاني
- ▁صح
- اس
- ▁آه
- ▁برشة
- ▁الن
- ▁وت
- ▁الج
- لك
- ▁راهو
- سم
- ▁الح
- مت
- ▁الت
- ▁بعد
- اج
- عد
- ▁انشا
- وش
- لت
- ▁وين
- ث
- ▁ولا
- ▁باش
- ▁فيها
- نت
- ▁إ
- ▁الأ
- ▁الف
- ▁إم
- ▁واحد
- ▁ألو
- ▁عندي
- ▁أك
- ▁خل
- ▁وي
- ▁تعمل
- أ
- ▁ريت
- ▁وأ
- ▁تعرف
- بت
- ▁الع
- ▁مشيت
- ▁وه
- ▁حاصيلو
- ▁بالل
- ▁نعمل
- ▁غ
- ▁تجي
- ▁يجي
- ▁كيفاش
- ▁عملت
- ظ
- اك
- ▁هاو
- ▁اش
- ▁قد
- ▁نق
- ▁د
- ▁زادا
- ▁فيه
- رة
- ▁بر
- ▁الش
- ▁ز
- ▁كيما
- ▁الا
- ند
- عم
- ▁نح
- ▁بنتي
- ▁نمشي
- ▁عليك
- ▁نعرفش
- ▁كهو
- ▁وم
- ▁ط
- تي
- ▁خير
- ▁آ
- مش
- ▁عليه
- له
- حت
- ▁إيا
- ▁أحنا
- ▁تع
- الا
- عب
- ▁ديما
- ▁تت
- ▁جو
- ▁مالا
- ▁أو
- ▁قلتلك
- ▁معنتها
- لنا
- ▁شكون
- ▁تحب
- بر
- ▁الر
- ▁وا
- ▁الق
- اء
- ▁عل
- ▁البارح
- ▁وخ
- ▁سافا
- ▁هوما
- ▁ولدي
- ▁
- ▁نعرف
- يف
- رت
- ▁وب
- ▁روح
- ▁علاش
- ▁هاذاك
- ▁رو
- وس
- ▁جا
- ▁كيف
- طر
- ▁غادي
- يكا
- عمل
- ▁نحب
- ▁عندك
- ▁وما
- ▁فر
- اني
- ▁قلتله
- ▁الط
- فر
- ▁دار
- ▁عليها
- ▁يعمل
- ▁نت
- ▁تح
- باح
- ▁ماهو
- ▁وكل
- ▁وع
- قت
- ▁فهمتك
- عر
- ▁وس
- ▁تر
- ▁سي
- يلة
- ▁قلت
- ▁رمضان
- صل
- ▁آما
- ▁الواحد
- ▁بيه
- ▁ثلاثة
- ▁فهمتني
- ▁ها
- بط
- ▁مازال
- قل
- ▁بالك
- ▁معناتها
- ▁ور
- ▁قلتلها
- ▁يس
- رب
- ▁ام
- ▁وبعد
- ▁الث
- ▁وإنت
- ▁بحذا
- ▁لازم
- ْ
- ▁بن
- قرا
- سك
- ▁يت
- خل
- ▁فه
- عت
- ▁هاك
- ▁تق
- ▁قبل
- ▁وك
- ▁نقول
- ▁الز
- حم
- ▁عادش
- حكي
- وها
- بة
- نس
- طل
- ▁علاه
- ذا
- ▁سا
- ▁طل
- الي
- ▁يق
- ▁دو
- حوا
- حد
- ▁نشوف
- نة
- ▁لي
- ▁تك
- ▁نا
- ▁هاذ
- ▁خويا
- ▁المر
- ▁وينك
- ▁البر
- ▁أتو
- ينا
- ▁حل
- ولي
- ▁ثم
- ▁عم
- ▁آي
- ▁قر
- از
- ▁وح
- كش
- بعة
- ▁كيفاه
- ▁نع
- ▁الحمدلله
- ▁ياسر
- ▁الخ
- ▁معاك
- ▁معاه
- ▁تقول
- دة
- ▁حكاية
- تش
- ▁حس
- ▁غدوا
- ▁بالحق
- روا
- وز
- ▁تخ
- ▁العيد
- رجع
- ▁بالي
- ▁جات
- ▁وج
- حة
- ▁وش
- ▁آخر
- ▁طا
- ▁مت
- لقا
- تك
- ▁مس
- ▁راني
- كون
- ▁صاحب
- ▁هاكا
- ▁قول
- ▁عر
- ▁عنده
- ▁يلزم
- ▁هاذا
- ▁يخ
- ▁وقتاش
- ▁وقت
- بع
- ▁العش
- ▁هاذي
- هاش
- ينة
- ▁هاذاكا
- عطي
- ▁تنج
- ▁باهية
- نيا
- فت
- ▁يحب
- ▁تف
- ▁أهلا
- وف
- ▁غدوة
- ▁بيك
- ▁بد
- عن
- ▁در
- ▁ننج
- هار
- ▁الحكاية
- مون
- وق
- ▁نورمال
- ▁عندها
- خر
- ▁بو
- ▁حب
- ▁آكا
- ▁وف
- ▁هاذيكا
- ▁ديجا
- ▁وق
- ▁طي
- لتل
- بعث
- ▁تص
- رك
- ▁مانيش
- ▁العادة
- ▁شوف
- ضر
- ▁يمشي
- ▁نعملوا
- ▁عرفت
- ▁زال
- ▁متع
- ▁عمل
- ▁بيها
- ▁نحكي
- اع
- ▁نج
- معة
- ▁والكل
- عناها
- ▁يعي
- ▁نجي
- ستن
- ▁هاذيك
- ▁عام
- ▁فلوس
- قة
- تين
- ▁بالقدا
- لهم
- ▁تخدم
- ▁ٱ
- ▁شيء
- ▁راهي
- ▁جاب
- ولاد
- ابل
- ▁ماك
- عة
- ▁نمشيوا
- وني
- شري
- بار
- انس
- ▁وقتها
- ▁جديد
- ▁يز
- ▁كر
- ▁حاسيلو
- ▁شق
- ▁اه
- ▁سايي
- ▁انشالل
- رج
- مني
- ▁بلا
- ▁صحيح
- ▁غير
- ▁يخدم
- مان
- وكا
- ▁عند
- ▁قاعدة
- ▁تس
- ربة
- ▁راس
- ▁حط
- ▁نكل
- تني
- ▁الو
- سيون
- ▁عندنا
- ▁لو
- ▁ست
- صف
- ▁ض
- ▁كامل
- ▁نخدم
- ▁يبدا
- ▁دونك
- ▁أمور
- رات
- ▁تونس
- بدا
- ▁تحكي
- ▁سو
- ▁جاي
- ▁وحدة
- ▁ساعة
- حنا
- ▁بكري
- ▁إل
- ▁وبر
- ▁كم
- ▁تبدا
- ارة
- ادي
- رق
- لوا
- ▁يمكن
- ▁خاط
- ▁وص
- جين
- ▁هاذاي
- ▁هز
- قد
- ▁قل
- ▁وكهو
- ▁نص
- ▁دي
- لقى
- ▁وأنا
- سين
- ▁يح
- ▁ماشي
- ▁شو
- ▁خذيت
- امات
- ▁كنت
- خرج
- ▁لقيت
- رتاح
- كس
- ▁حاجات
- ▁مريق
- ▁مل
- ليفون
- اوا
- ▁شفت
- ▁عاملة
- ▁تن
- ▁والا
- سأل
- ▁حد
- ▁قاللك
- ▁العباد
- ▁عالاخ
- ▁وآك
- ▁ماني
- ▁ناخذ
- ▁حم
- ▁الإ
- ▁ماضي
- ▁ث
- الة
- ▁أخرى
- رين
- ▁تشوف
- ▁نخرج
- ▁أربعة
- ▁ألف
- نيش
- ▁هاي
- آ
- ▁فيك
- رشة
- ولة
- فلة
- ▁بابا
- ▁أما
- ▁روحي
- ▁فيهم
- ▁رج
- ▁ليك
- ونس
- يرة
- ▁وأكهو
- ندي
- ▁صار
- شك
- ▁نرو
- ▁آكهو
- ▁تش
- ▁غاديكا
- ▁معاها
- ▁لب
- ▁أذاكا
- ▁آني
- ▁يوم
- عملوا
- ▁نقعد
- دوا
- ▁عد
- سمع
- متني
- ▁الخدمة
- ▁مازلت
- ▁قعدت
- ايا
- ▁برك
- قعد
- ▁خرجت
- ضح
- ▁قالل
- ▁يقول
- ▁وفي
- ▁حق
- ختي
- ▁يعني
- خدم
- ▁جيت
- ▁نرمال
- طف
- ▁عجب
- ▁تقعد
- ▁مشينا
- اية
- ▁خدمة
- لدي
- روف
- ▁الفطر
- ▁مشكل
- ▁سل
- ▁وآنا
- الط
- ▁بالس
- ▁هانا
- ▁أوه
- ▁أذيكا
- ▁وإ
- ▁عليهم
- ▁حالة
- جت
- قضي
- ▁لق
- ▁ونصف
- سعة
- عطيه
- عاو
- خانة
- ▁مخ
- ▁شبيك
- بيعة
- ▁أهوك
- يني
- ▁تعد
- ▁خال
- ▁قريب
- ▁راك
- ▁قالت
- ▁لتو
- ▁أكثر
- اعة
- ▁يظهرلي
- ▁ماشية
- سمعني
- ▁نسيت
- ▁ينج
- ▁الحمدلل
- هدي
- ▁وشن
- ▁تطي
- ▁هنا
- ▁نسمع
- ▁إنتوما
- ▁نحكيلك
- ▁قاعد
- ▁اسمعني
- خرين
- إ
- ماعة
- ▁بالر
- ▁دا
- ▁عمر
- ▁نشري
- ▁قهوة
- ▁تبارك
- ▁صب
- ▁مشات
- غر
- ▁شريت
- ▁عامل
- ▁زوج
- ثنين
- ▁برب
- ريق
- ▁نكم
- ▁لم
- بيب
- ▁مياة
- ▁مالل
- ▁قعد
- ▁سخون
- قس
- ▁وحده
- ▁اسمع
- ▁خمسة
- ▁غالي
- ▁الأو
- رلي
- ▁العظيم
- ▁ترو
- تهم
- كري
- ▁نجيب
- ▁جملة
- قول
- ▁قلتلي
- ▁إيجا
- ▁يقعد
- ▁إيام
- ▁يعطيك
- ▁نخل
- ▁دب
- يمة
- رهبة
- ▁نهز
- ▁محم
- ▁بين
- غار
- ▁نحنا
- ▁بون
- ▁الغ
- ▁شهر
- ▁بار
- رقة
- ▁نطي
- ئ
- ترو
- ▁ملا
- ▁الكرهبة
- ▁باه
- ▁عالإخ
- ▁عباد
- ▁بلاصة
- ▁مشى
- بيع
- ▁نفس
- ▁عملنا
- ▁واح
- ▁أحلاه
- ▁بحذاك
- ▁لأ
- ▁دخ
- باب
- ▁ودر
- ▁غالب
- ▁ناكل
- ▁مثلا
- ء
- ▁راقد
- ▁تفر
- ▁الوقت
- ▁تاخذ
- حذا
- نتر
- ▁نبدا
- ▁حال
- ▁مريم
- الم
- ▁جمعة
- رجول
- ▁معايا
- ▁تخرج
- ▁باس
- ▁ساعات
- ▁عندهم
- ▁نتفر
- مسة
- ▁الجمعة
- بعين
- ▁أكاهو
- ▁ميش
- مراة
- ▁خذا
- ▁ظ
- ▁سيدي
- ▁معاي
- ▁شبيه
- ▁حكا
- ▁سف
- ▁بعضنا
- ▁بالض
- ▁ليلة
- ▁زعما
- ▁الحق
- مضان
- ▁صعيب
- ▁قالتلك
- ً
- ملة
- ▁بق
- عرف
- لاطة
- ▁خرج
- ▁أخت
- ▁تقوللي
- ▁معانا
- ▁صغير
- ▁إسمه
- ▁بعض
- ▁العام
- ▁علينا
- ▁يتع
- ▁فاش
- ▁شع
- ▁معاهم
- ▁يسالش
- ▁لهنا
- ▁سمعت
- ▁البار
- ▁نتصو
- ▁الاخ
- ▁وكان
- وبة
- دمة
- ▁كون
- ▁مبعد
- ▁تسمع
- ▁بعيد
- ▁تاكل
- ▁نلقا
- لامة
- لاثة
- ▁ذ
- ▁تحس
- ▁الواح
- ▁لدار
- ▁فاتت
- ▁تاو
- ▁أحوالك
- ▁عاملين
- ▁كبيرة
- عجب
- ▁بنت
- ▁بيدي
- ▁حكيت
- ▁تحط
- ▁مسكينة
- ▁هاذوكم
- ▁نزيد
- لاث
- ▁عشرة
- ▁عيني
- ▁تعب
- ▁ياكل
- ▁وزيد
- ▁طول
- ▁حمدلله
- ▁وقتاه
- ▁معناه
- ▁وآش
- ▁ووه
- ▁وواحد
- ▁نشوفوا
- ▁عيد
- ▁بصراحة
- ▁بحذانا
- ▁قاعدين
- ▁راجل
- ▁وحدي
- ▁وعشرين
- ▁لين
- ▁خايب
- ▁قالتله
- ▁تهز
- عيد
- ▁كبير
- ▁يعرف
- ▁عارف
- ▁الفلوس
- ▁زايد
- ▁خدمت
- ▁هاذوما
- ▁سلاطة
- ▁فارغة
- ▁ساعتين
- ▁تبد
- ▁راو
- ▁مائة
- ▁بعضهم
- ▁ظاهرلي
- ▁الفازة
- كتب
- ▁القهوة
- سبوك
- ▁زاد
- ▁ضرب
- حكيلي
- ▁فوق
- ▁عاود
- ▁راي
- ▁ومبعد
- ▁حوايج
- ▁دخلت
- ▁يقوللك
- ▁زيد
- ▁زلت
- لفزة
- ▁وقال
- ▁يهب
- ▁يلزمني
- ▁الحمد
- ▁أذي
- طبيعت
- ▁دورة
- ▁عالأقل
- ▁آذاك
- ▁وبال
- ▁الجاي
- عطيني
- ▁ياخذ
- ▁احكيلي
- ▁نهبط
- ▁رقدت
- بلاصة
- ▁عزيز
- ▁صغار
- ▁أقسم
- ▁جيب
- ▁وصلت
- ▁أحوال
- ▁جيست
- ▁جماعة
- سئل
- ▁خوذ
- ▁يهز
- ▁الأخرى
- ▁آلاف
- ▁إسمع
- ▁الحقيقة
- ▁ناقص
- ▁حاط
- ▁موجود
- عباد
- ▁آذيك
- ▁خارج
- ▁الخير
- ▁البنات
- بقى
- ▁طرف
- ▁سينون
- ▁ماذاب
- ▁البحر
- ▁نرقد
- مدلله
- ▁إيجى
- ▁خالتي
- ▁فازة
- ▁بريك
- ▁شريبتك
- ▁تطلع
- ؤ
- ▁المشكلة
- ▁طري
- ▁مادام
- ▁طلبت
- ▁يلعب
- ▁نعاود
- ▁وحدك
- ▁ظاهر
- ٱ
- ژ
- ٍ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 256
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_bpe1000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: true
```
</details>
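One detail worth noting in the `newspecaug` configuration above: time masks are sized relative to utterance length (`time_mask_width_ratio_range: [0.0, 0.05]`, `num_time_mask: 5`) rather than in absolute frames. The sketch below illustrates that masking budget under the assumption that each mask width is drawn uniformly from the configured ratio range; it is not ESPnet's exact implementation.

```python
import random

def sample_time_masks(num_frames: int, num_masks: int = 5,
                      ratio_range: tuple = (0.0, 0.05)):
    """Illustrative ratio-based time masking: each mask covers at most
    ratio_range[1] of the utterance, so 5 masks cap out near 25%."""
    masks = []
    for _ in range(num_masks):
        width = int(random.uniform(*ratio_range) * num_frames)
        start = random.randint(0, max(num_frames - width, 0))
        masks.append((start, width))
    return masks

print(sample_time_masks(1000))  # e.g. up to 5 masks of <= 50 frames each
```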
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["iwslt22_dialect"]}
|
espnet/brianyan918_iwslt22_dialect_train_asr_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:iwslt22_dialect",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #automatic-speech-recognition #dataset-iwslt22_dialect #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/brianyan918\_iwslt22\_dialect\_train\_asr\_conformer\_ctc0.3\_lr2e-3\_warmup15k\_newspecaug'
This model was trained by Brian Yan using iwslt22\_dialect recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Wed Feb 2 05:32:30 EST 2022'
* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.6a1'
* pytorch version: 'pytorch 1.8.1'
* Git hash: '99581e0f5af3ad68851d556645e7292771436df9'
+ Commit date: 'Sat Jan 29 11:32:38 2022 -0500'
asr\_train\_asr\_conformer\_ctc0.3\_lr2e-3\_warmup15k\_newspecaug\_raw\_bpe1000\_sp
-----------------------------------------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/brianyan918\\_iwslt22\\_dialect\\_train\\_asr\\_conformer\\_ctc0.3\\_lr2e-3\\_warmup15k\\_newspecaug'\n\n\nThis model was trained by Brian Yan using iwslt22\\_dialect recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Wed Feb 2 05:32:30 EST 2022'\n* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.8.1'\n* Git hash: '99581e0f5af3ad68851d556645e7292771436df9'\n\t+ Commit date: 'Sat Jan 29 11:32:38 2022 -0500'\n\n\nasr\\_train\\_asr\\_conformer\\_ctc0.3\\_lr2e-3\\_warmup15k\\_newspecaug\\_raw\\_bpe1000\\_sp\n-----------------------------------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #dataset-iwslt22_dialect #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/brianyan918\\_iwslt22\\_dialect\\_train\\_asr\\_conformer\\_ctc0.3\\_lr2e-3\\_warmup15k\\_newspecaug'\n\n\nThis model was trained by Brian Yan using iwslt22\\_dialect recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Wed Feb 2 05:32:30 EST 2022'\n* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.8.1'\n* Git hash: '99581e0f5af3ad68851d556645e7292771436df9'\n\t+ Commit date: 'Sat Jan 29 11:32:38 2022 -0500'\n\n\nasr\\_train\\_asr\\_conformer\\_ctc0.3\\_lr2e-3\\_warmup15k\\_newspecaug\\_raw\\_bpe1000\\_sp\n-----------------------------------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
null |
espnet
|
## ESPnet2 ST model
### `espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug`
This model was trained by Brian Yan using the iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 77fce65312877a132bbae01917ad26b74f6e2e14
pip install -e .
cd egs2/iwslt22_dialect/st1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug
```
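For quick experimentation outside the recipe, inference can also be run from Python. This is a minimal sketch assuming the ST inference entry point (`espnet2.bin.st_inference.Speech2Text`) accepts the unpacked model the same way the ASR one does, and that `espnet_model_zoo` and `soundfile` are installed; `sample.wav` is a placeholder 16 kHz Tunisian Arabic recording.

```python
# Minimal ST inference sketch (assumes espnet2.bin.st_inference mirrors
# the ASR interface; "sample.wav" is a placeholder recording).
import soundfile
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.st_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack(
        "espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug"
    )
)

speech, rate = soundfile.read("sample.wav")
nbests = speech2text(speech)
translation, tokens, token_ids, hypothesis = nbests[0]
print(translation)  # English translation of the Tunisian Arabic input
```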
<!-- Generated by scripts/utils/show_st_results.sh -->
# RESULTS
## Environments
- date: `Tue Feb 8 12:54:12 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `77fce65312877a132bbae01917ad26b74f6e2e14`
- Commit date: `Tue Feb 8 10:48:10 2022 -0500`
## st_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe_tc1000_sp
### BLEU
|dataset|bleu_score|verbose_score|
|---|---|---|
|pen2_st_model_valid.acc.ave|13.9|44.0/21.8/11.4/6.2 (BP = 0.859 ratio = 0.868 hyp_len = 36614 ref_len = 42181)|
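As a sanity check, the headline BLEU can be recovered from the verbose score: BLEU is the brevity penalty times the geometric mean of the four n-gram precisions.

```python
import math

# Recover the headline BLEU from the verbose score above.
bp = 0.859
precisions = [44.0, 21.8, 11.4, 6.2]  # 1- to 4-gram precisions, in %
bleu = bp * math.prod(precisions) ** 0.25  # geometric mean of precisions
print(round(bleu, 1))  # -> 13.9
```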
## ST config
<details><summary>expand</summary>
```
config: conf/tuning/train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/st_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug_raw_bpe_tc1000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 80
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: true
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 25000000
valid_batch_bins: null
train_shape_file:
- exp/st_stats_raw_bpe1000_sp/train/speech_shape
- exp/st_stats_raw_bpe1000_sp/train/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/train/src_text_shape.bpe
valid_shape_file:
- exp/st_stats_raw_bpe1000_sp/valid/speech_shape
- exp/st_stats_raw_bpe1000_sp/valid/text_shape.bpe
- exp/st_stats_raw_bpe1000_sp/valid/src_text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_sp/text.tc.en
- text
- text
- - dump/raw/train_sp/text.tc.rm.ta
- src_text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text.tc.en
- text
- text
- - dump/raw/dev/text.tc.rm.ta
- src_text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- s
- ▁
- apo
- '&'
- ;
- ▁i
- ▁you
- t
- ▁it
- ▁the
- ▁and
- ▁to
- ▁that
- ▁a
- n
- a
- ▁he
- ▁me
- m
- d
- ▁yes
- ▁she
- ▁no
- ▁in
- ▁what
- ▁for
- ▁we
- ing
- ll
- ▁they
- re
- ▁are
- ▁did
- ▁god
- ▁is
- e
- ed
- ▁so
- ▁her
- ▁do
- ▁have
- ▁of
- ▁with
- ▁go
- ▁know
- ▁not
- ▁was
- ▁on
- ▁don
- y
- ▁him
- ▁one
- ▁like
- ▁there
- '%'
- ▁pw
- ▁be
- ▁at
- ▁told
- ▁good
- ▁will
- ▁my
- ▁all
- ▁or
- c
- er
- p
- ▁how
- ▁ah
- r
- ▁but
- ▁them
- ▁see
- ▁get
- ▁can
- i
- ▁when
- ▁going
- ▁about
- ▁mean
- ▁this
- k
- ▁your
- ▁by
- ▁if
- u
- ▁come
- ▁up
- ▁tell
- g
- ▁said
- ▁then
- ▁now
- ▁yeah
- o
- ▁out
- al
- ra
- ▁because
- ▁time
- ▁well
- ▁would
- ▁p
- ▁from
- h
- ar
- f
- ▁swear
- ▁went
- b
- ▁really
- or
- ▁want
- ri
- ▁home
- ▁work
- ve
- ▁take
- ▁got
- ▁just
- l
- ▁uh
- ▁why
- en
- ▁even
- ▁am
- ▁who
- ▁make
- ▁day
- '-'
- in
- ▁something
- ▁some
- ou
- ▁us
- ▁okay
- ▁where
- ▁does
- ▁has
- ▁thank
- ▁c
- ▁his
- th
- ▁back
- ▁fine
- ▁today
- ly
- ▁b
- ▁oh
- ▁doing
- ▁everything
- ▁here
- le
- ▁thing
- ▁two
- ▁anyway
- li
- ▁had
- ▁still
- ▁say
- ro
- ▁after
- ce
- ▁hello
- ▁ma
- ▁call
- w
- ▁listen
- il
- ▁should
- ▁girl
- ▁f
- z
- ▁too
- ▁let
- ▁understand
- ▁may
- ▁much
- ▁think
- ch
- ir
- ha
- ▁other
- ▁tomorrow
- ▁were
- ▁people
- es
- ▁year
- di
- ba
- ▁right
- el
- ▁things
- ▁house
- v
- ▁actually
- un
- ▁an
- ▁give
- ▁only
- ▁better
- pe
- ▁need
- ▁buy
- ▁de
- ne
- ▁ha
- ur
- ion
- ▁made
- la
- ▁willing
- ▁nothing
- ▁called
- ▁night
- ▁yesterday
- se
- ▁came
- ▁lot
- ter
- ▁g
- po
- ▁find
- ry
- ▁car
- ▁over
- ic
- ▁stay
- ▁eat
- ent
- ▁always
- ▁very
- 'on'
- ▁put
- ▁ramadan
- ▁those
- ▁hear
- is
- ▁talk
- ▁three
- ▁anything
- ▁mo
- ▁little
- ▁been
- ▁already
- fi
- ation
- ke
- ▁first
- ▁look
- it
- ▁won
- ▁mom
- ▁way
- ▁before
- ▁ok
- ▁last
- fa
- ▁cook
- vi
- ▁hi
- ▁same
- ▁thought
- ▁also
- um
- ate
- ▁money
- ▁start
- ▁place
- us
- ▁morning
- ▁could
- ▁ask
- ▁bring
- ▁bit
- ▁lo
- ▁leave
- ▁man
- ▁left
- ine
- ▁days
- ge
- ▁la
- ▁week
- ▁friend
- ▁problem
- ▁sister
- ▁allah
- ▁feel
- ▁every
- ▁more
- fe
- ▁long
- ▁hundred
- ▁j
- ▁eh
- ho
- ca
- em
- ▁talking
- ▁exam
- ▁next
- ▁new
- ▁fun
- ▁took
- ▁alright
- co
- ▁w
- ▁um
- ▁eid
- ▁brother
- ▁our
- gh
- ow
- ▁o
- ▁four
- ni
- wa
- ▁else
- ▁finish
- bo
- ▁sleep
- ▁bless
- ▁dear
- ▁since
- ▁play
- ▁name
- hi
- ▁coming
- ▁many
- et
- ▁usual
- ▁con
- ▁maybe
- ▁off
- bi
- ▁than
- ▁any
- ▁mother
- ▁son
- om
- ▁their
- ▁keep
- ▁dinner
- ▁ten
- ▁half
- ▁help
- ▁bad
- and
- ▁pass
- ▁hot
- ▁guy
- ▁least
- ▁down
- ▁bought
- ▁dinars
- ▁working
- ▁around
- ▁normal
- ▁poor
- ▁stuff
- ▁hope
- ▁used
- ▁again
- ▁bro
- ul
- ▁phone
- ▁ex
- ▁done
- ▁six
- ▁na
- ▁month
- ▁tired
- ▁check
- ▁show
- ▁together
- oo
- ▁later
- ▁past
- ▁five
- ▁watch
- ya
- ▁coffee
- ment
- ut
- ▁plan
- ▁great
- ▁daughter
- j
- ▁another
- side
- ▁change
- ▁yet
- ting
- ▁until
- ▁honestly
- ▁whole
- ol
- ▁care
- ▁sure
- able
- id
- ▁big
- ▁spend
- ▁exactly
- ▁boy
- ▁course
- ▁end
- ▁please
- ▁started
- he
- up
- ▁found
- ▁saw
- ▁family
- ▁asked
- ▁enough
- ▁during
- ▁rest
- ▁which
- ▁gave
- ▁true
- ▁while
- ▁job
- ▁el
- ▁each
- ▁away
- ▁kids
- ▁goes
- less
- ▁twenty
- ▁eight
- ▁someone
- ▁cha
- ▁clothes
- ah
- ▁myself
- ▁nice
- ▁late
- ▁old
- ▁real
- age
- ant
- ▁fast
- ▁add
- ▁hard
- ▁these
- ful
- im
- ▁close
- ive
- ▁dad
- ▁pay
- ies
- ▁dude
- ▁alone
- ▁far
- ance
- ▁dis
- ▁seven
- ▁isn
- ▁pro
- our
- ▁thousand
- ▁break
- ▁hour
- ▁wait
- ▁brought
- ▁open
- ▁un
- ▁wedding
- ▁walk
- ▁father
- ▁ka
- ▁second
- x
- ▁saturday
- ▁salad
- ▁win
- ▁everyone
- ▁water
- ▁tunis
- ▁remember
- ity
- ▁wake
- ▁minute
- ▁school
- ▁sunday
- ▁own
- ▁shop
- ▁cold
- ▁meet
- ▁wear
- ever
- ▁send
- ▁early
- ▁gra
- tic
- ▁short
- ▁use
- ▁sometimes
- hou
- ▁love
- ▁prepare
- ▁sea
- ▁study
- ure
- ▁com
- qui
- ▁hand
- ▁both
- ja
- ▁summer
- ▁wrong
- ▁wanted
- che
- ▁miss
- ▁try
- ▁iftar
- ▁yourself
- q
- ▁live
- war
- ▁expensive
- ▁getting
- ▁waiting
- ▁once
- ▁kh
- ▁forgot
- ▁nine
- ▁anymore
- ▁soup
- ▁uncle
- ▁beach
- ▁saying
- ▁into
- ▁having
- ▁brik
- ▁room
- ▁food
- ▁visit
- ▁matter
- ▁thirty
- ▁taking
- ▁rain
- ▁aunt
- ▁never
- ▁pick
- ▁tunisia
- ▁health
- ▁head
- ▁cut
- ▁fasting
- ▁sick
- ▁friday
- ▁forget
- ▁monday
- ▁become
- ▁dress
- ated
- ▁most
- wi
- ▁hang
- ▁life
- ▁fish
- ▁happy
- ▁delicious
- ▁deal
- ▁finished
- ble
- ▁studying
- ▁weather
- ▁making
- ▁cost
- ▁bl
- ▁stayed
- ▁guess
- ▁teach
- ▁stop
- ▁near
- ▁watching
- ▁without
- ▁imagine
- ▁seriously
- fl
- ▁speak
- ▁idea
- ▁must
- ▁normally
- ▁turn
- ize
- ▁clean
- ▁tv
- ▁meat
- ▁woke
- ▁example
- ▁easy
- ▁sent
- ▁sell
- over
- ▁fifty
- ▁amazing
- ▁beautiful
- ▁whatever
- ▁enjoy
- ▁talked
- ▁believe
- ▁thinking
- ▁count
- ▁almost
- ▁longer
- ▁afternoon
- ▁hair
- ▁front
- ▁earlier
- ▁mind
- ▁kind
- ▁tea
- ▁best
- ▁rent
- ▁picture
- ▁cooked
- ▁price
- ight
- ▁soon
- ▁woman
- ▁otherwise
- ▁happened
- ▁story
- ▁luck
- ▁high
- ▁happen
- ▁arrive
- ▁paper
- ga
- ▁quickly
- ▁looking
- ub
- ▁number
- ▁staying
- ▁sit
- man
- ack
- ▁important
- ▁either
- ▁person
- ▁small
- ▁free
- ▁crazy
- ▁playing
- ▁kept
- ▁part
- ▁game
- law
- ▁till
- uck
- ▁ready
- ▁might
- ▁gone
- ▁full
- ▁fix
- ▁subject
- ▁laugh
- ▁doctor
- ▁welcome
- ▁eleven
- ▁sleeping
- ▁heat
- ▁probably
- ▁such
- ▁café
- ▁fat
- ▁sweet
- ▁married
- ▁drink
- ▁move
- ▁outside
- ▁especially
- ▁group
- ji
- ▁market
- ▁through
- ▁train
- ▁protect
- ▁turned
- ▁red
- ▁busy
- ▁light
- ▁noise
- ▁street
- ▁manage
- ▁piece
- ▁sitting
- gue
- ▁sake
- ▁party
- ish
- ▁young
- ▁case
- ▁cool
- huh
- ▁marwa
- ▁drive
- ▁pray
- clock
- ▁couscous
- ▁spent
- ▁felt
- ▁hopefully
- ▁everybody
- ▁living
- ▁pain
- line
- ▁between
- ▁match
- ▁prayer
- que
- ian
- ▁facebook
- ▁spi
- ▁eye
- ▁children
- ▁tonight
- ▁mohamed
- ▁understood
- ▁black
- ▁husband
- ▁rid
- ▁kitchen
- ▁face
- ▁swim
- ▁kid
- ▁invite
- ▁cup
- ▁grilled
- ▁wife
- ▁cousin
- ▁drop
- ▁wow
- ▁table
- ▁du
- ▁bored
- ▁neighborhood
- ▁agree
- ▁bread
- ▁hamma
- ▁straight
- ▁tuesday
- ▁anyone
- ▁lunch
- ade
- ▁himself
- ▁gather
- ▁wish
- ▁fifteen
- ▁wednesday
- ▁die
- ▁thursday
- ▁color
- ▁asleep
- ▁different
- ▁whether
- ▁ago
- ▁middle
- ▁class
- ▁cake
- shirt
- ▁fight
- ▁clear
- ▁test
- ▁plus
- ▁sousse
- ▁beginning
- ▁result
- ▁learn
- ▁crowded
- ▁slept
- ▁shoes
- ▁august
- ▁pretty
- ▁white
- ▁apparently
- ▁reach
- ▁mariem
- ▁return
- ▁road
- ▁million
- ▁stand
- ▁paid
- ▁word
- ious
- ▁few
- ▁breakfast
- ▁post
- ▁kilo
- ▁chicken
- ▁grade
- ▁read
- ▁accept
- ▁birthday
- ▁exhaust
- ▁point
- ▁july
- ▁patience
- ▁studies
- ▁trouble
- ▁along
- ▁worry
- ▁follow
- ▁hurt
- ▁afraid
- ▁trip
- ▁ahmed
- ▁remain
- ▁succeed
- ▁mercy
- ▁difficult
- ▁weekend
- ▁answer
- ▁cheap
- ▁repeat
- ▁auntie
- ▁sign
- ▁hold
- ▁under
- ▁olive
- ▁mahdi
- ▁sfax
- ▁annoy
- ▁dishes
- ▁message
- ▁business
- ▁french
- ▁serious
- ▁travel
- ▁office
- ▁wonder
- ▁student
- ▁internship
- ▁pepper
- ▁knew
- ▁kill
- ▁sauce
- ▁herself
- ▁hammamet
- ▁damn
- ▁mix
- ▁suit
- ▁medicine
- ▁remove
- ▁gonna
- ▁company
- ▁quarter
- ▁shopping
- ▁correct
- ▁throw
- ▁grow
- ▁voice
- ▁series
- gotten
- ▁taste
- ▁driving
- ▁hospital
- ▁sorry
- ▁aziz
- ▁milk
- ▁green
- ▁baccalaureate
- ▁running
- ▁lord
- ▁explain
- ▁angry
- ▁build
- ▁fruit
- ▁photo
- é
- ▁crying
- ▁baby
- ▁store
- ▁project
- ▁france
- ▁twelve
- ▁decide
- ▁swimming
- ▁world
- ▁preparing
- ▁special
- ▁session
- ▁behind
- ▁vegetable
- ▁strong
- ▁fatma
- ▁treat
- ▁cream
- ▁situation
- ▁settle
- ▁totally
- ▁stopped
- ▁book
- ▁honest
- ▁solution
- ▁vacation
- ▁cheese
- ▁ahead
- ▁sami
- ▁focus
- ▁scared
- ▁club
- ▁consider
- ▁final
- ▁naturally
- ▁barely
- ▁issue
- ▁floor
- ▁birth
- ▁almighty
- ▁engagement
- ▁blue
- ▁empty
- ▁soccer
- ▁prophet
- ▁ticket
- ▁indeed
- ▁write
- ▁present
- ▁patient
- ▁available
- ▁holiday
- ▁leaving
- ▁became
- ▁reason
- ▁apart
- ▁impossible
- ▁shame
- ▁worried
- ▁body
- ▁continue
- ▁program
- ▁stress
- ▁arabic
- ▁round
- ▁taxi
- ▁transport
- ▁third
- ▁certain
- ▁downstairs
- ▁neighbor
- ▁directly
- ▁giving
- ▁june
- ▁mini
- ▁upstairs
- ▁mistake
- ▁period
- ▁catch
- ▁buddy
- ▁success
- ▁tajine
- ▁excuse
- ▁organize
- ▁question
- ▁suffer
- ▁remind
- ▁university
- ▁downtown
- ▁sugar
- ▁twice
- ▁women
- ▁couple
- ▁everyday
- ▁condition
- ▁obvious
- ▁nobody
- ▁complete
- ▁stomach
- ▁account
- ▁september
- ▁choose
- ▁bottle
- ▁figure
- ▁instead
- ▁salary
- '0'
- '1'
- '3'
- '2'
- '5'
- '7'
- '4'
- '9'
- '8'
- /
- °
- '6'
- è
- $
- ï
- <sos/eos>
src_token_list:
- <blank>
- <unk>
- ّ
- ي
- ا
- ِ
- ل
- َ
- و
- ه
- ة
- م
- ر
- ك
- ▁ما
- ُ
- ب
- ش
- د
- ت
- ▁في
- َّ
- ▁ن
- ▁ي
- ▁ت
- ن
- ▁لا
- ح
- ▁ه
- س
- وا
- ▁م
- ف
- ▁إي
- ع
- ▁ب
- ها
- ط
- ى
- ق
- ▁الل
- ▁أ
- ج
- ▁والل
- ▁و
- ▁إيه
- ▁ا
- ▁يا
- ز
- ▁تو
- ▁بش
- ص
- ▁أه
- خ
- ات
- ▁إنت
- ▁أنا
- نا
- ▁شن
- ▁ق
- ▁ش
- ▁ك
- يت
- ين
- ▁ف
- ار
- ▁قال
- ▁باهي
- ▁ع
- ▁من
- ▁ل
- ▁مش
- ▁كان
- ▁حت
- ▁ول
- هم
- ▁ر
- ان
- ▁س
- ض
- ني
- ▁بال
- ▁على
- ▁متاع
- ▁كي
- ▁ال
- ▁ح
- ▁كل
- ▁آنا
- ▁الم
- ▁خ
- ▁الس
- ▁وال
- ون
- ور
- ▁أم
- ▁هك
- ▁آش
- ▁الد
- ▁عاد
- ▁ج
- ▁معناها
- ▁مع
- اش
- ▁الص
- ▁نهار
- ▁لل
- لها
- ▁تي
- ▁رب
- ▁خاطر
- ▁أكهو
- غ
- ▁شي
- الل
- ام
- تها
- ▁ون
- ▁آك
- ▁فهمت
- وم
- ▁موش
- مشي
- ▁ص
- ▁اليوم
- ▁مر
- ست
- ▁الب
- ▁لاباس
- تلي
- ▁الكل
- ▁عال
- ذ
- ▁فم
- ▁الك
- ▁حاجة
- ▁شوي
- اكا
- ▁ياخي
- ▁هاني
- ▁صح
- اس
- ▁آه
- ▁برشة
- ▁الن
- ▁وت
- ▁الج
- لك
- ▁راهو
- سم
- ▁الح
- مت
- ▁الت
- ▁بعد
- اج
- عد
- ▁انشا
- وش
- لت
- ▁وين
- ث
- ▁ولا
- ▁باش
- ▁فيها
- نت
- ▁إ
- ▁الأ
- ▁الف
- ▁إم
- ▁واحد
- ▁ألو
- ▁عندي
- ▁أك
- ▁خل
- ▁وي
- ▁تعمل
- أ
- ▁ريت
- ▁وأ
- ▁تعرف
- بت
- ▁الع
- ▁مشيت
- ▁وه
- ▁حاصيلو
- ▁بالل
- ▁نعمل
- ▁غ
- ▁تجي
- ▁يجي
- ▁كيفاش
- ▁عملت
- ظ
- اك
- ▁هاو
- ▁اش
- ▁قد
- ▁نق
- ▁د
- ▁زادا
- ▁فيه
- رة
- ▁بر
- ▁الش
- ▁ز
- ▁كيما
- ▁الا
- ند
- عم
- ▁نح
- ▁بنتي
- ▁نمشي
- ▁عليك
- ▁نعرفش
- ▁كهو
- ▁وم
- ▁ط
- تي
- ▁خير
- ▁آ
- مش
- ▁عليه
- له
- حت
- ▁إيا
- ▁أحنا
- ▁تع
- الا
- عب
- ▁ديما
- ▁تت
- ▁جو
- ▁مالا
- ▁أو
- ▁قلتلك
- ▁معنتها
- لنا
- ▁شكون
- ▁تحب
- بر
- ▁الر
- ▁وا
- ▁الق
- اء
- ▁عل
- ▁البارح
- ▁وخ
- ▁سافا
- ▁هوما
- ▁ولدي
- ▁
- ▁نعرف
- يف
- رت
- ▁وب
- ▁روح
- ▁علاش
- ▁هاذاك
- ▁رو
- وس
- ▁جا
- ▁كيف
- طر
- ▁غادي
- يكا
- عمل
- ▁نحب
- ▁عندك
- ▁وما
- ▁فر
- اني
- ▁قلتله
- ▁الط
- فر
- ▁دار
- ▁عليها
- ▁يعمل
- ▁نت
- ▁تح
- باح
- ▁ماهو
- ▁وكل
- ▁وع
- قت
- ▁فهمتك
- عر
- ▁وس
- ▁تر
- ▁سي
- يلة
- ▁قلت
- ▁رمضان
- صل
- ▁آما
- ▁الواحد
- ▁بيه
- ▁ثلاثة
- ▁فهمتني
- ▁ها
- بط
- ▁مازال
- قل
- ▁بالك
- ▁معناتها
- ▁ور
- ▁قلتلها
- ▁يس
- رب
- ▁ام
- ▁وبعد
- ▁الث
- ▁وإنت
- ▁بحذا
- ▁لازم
- ْ
- ▁بن
- قرا
- سك
- ▁يت
- خل
- ▁فه
- عت
- ▁هاك
- ▁تق
- ▁قبل
- ▁وك
- ▁نقول
- ▁الز
- حم
- ▁عادش
- حكي
- وها
- بة
- نس
- طل
- ▁علاه
- ذا
- ▁سا
- ▁طل
- الي
- ▁يق
- ▁دو
- حوا
- حد
- ▁نشوف
- نة
- ▁لي
- ▁تك
- ▁نا
- ▁هاذ
- ▁خويا
- ▁المر
- ▁وينك
- ▁البر
- ▁أتو
- ينا
- ▁حل
- ولي
- ▁ثم
- ▁عم
- ▁آي
- ▁قر
- از
- ▁وح
- كش
- بعة
- ▁كيفاه
- ▁نع
- ▁الحمدلله
- ▁ياسر
- ▁الخ
- ▁معاك
- ▁معاه
- ▁تقول
- دة
- ▁حكاية
- تش
- ▁حس
- ▁غدوا
- ▁بالحق
- روا
- وز
- ▁تخ
- ▁العيد
- رجع
- ▁بالي
- ▁جات
- ▁وج
- حة
- ▁وش
- ▁آخر
- ▁طا
- ▁مت
- لقا
- تك
- ▁مس
- ▁راني
- كون
- ▁صاحب
- ▁هاكا
- ▁قول
- ▁عر
- ▁عنده
- ▁يلزم
- ▁هاذا
- ▁يخ
- ▁وقتاش
- ▁وقت
- بع
- ▁العش
- ▁هاذي
- هاش
- ينة
- ▁هاذاكا
- عطي
- ▁تنج
- ▁باهية
- نيا
- فت
- ▁يحب
- ▁تف
- ▁أهلا
- وف
- ▁غدوة
- ▁بيك
- ▁بد
- عن
- ▁در
- ▁ننج
- هار
- ▁الحكاية
- مون
- وق
- ▁نورمال
- ▁عندها
- خر
- ▁بو
- ▁حب
- ▁آكا
- ▁وف
- ▁هاذيكا
- ▁ديجا
- ▁وق
- ▁طي
- لتل
- بعث
- ▁تص
- رك
- ▁مانيش
- ▁العادة
- ▁شوف
- ضر
- ▁يمشي
- ▁نعملوا
- ▁عرفت
- ▁زال
- ▁متع
- ▁عمل
- ▁بيها
- ▁نحكي
- اع
- ▁نج
- معة
- ▁والكل
- عناها
- ▁يعي
- ▁نجي
- ستن
- ▁هاذيك
- ▁عام
- ▁فلوس
- قة
- تين
- ▁بالقدا
- لهم
- ▁تخدم
- ▁ٱ
- ▁شيء
- ▁راهي
- ▁جاب
- ولاد
- ابل
- ▁ماك
- عة
- ▁نمشيوا
- وني
- شري
- بار
- انس
- ▁وقتها
- ▁جديد
- ▁يز
- ▁كر
- ▁حاسيلو
- ▁شق
- ▁اه
- ▁سايي
- ▁انشالل
- رج
- مني
- ▁بلا
- ▁صحيح
- ▁غير
- ▁يخدم
- مان
- وكا
- ▁عند
- ▁قاعدة
- ▁تس
- ربة
- ▁راس
- ▁حط
- ▁نكل
- تني
- ▁الو
- سيون
- ▁عندنا
- ▁لو
- ▁ست
- صف
- ▁ض
- ▁كامل
- ▁نخدم
- ▁يبدا
- ▁دونك
- ▁أمور
- رات
- ▁تونس
- بدا
- ▁تحكي
- ▁سو
- ▁جاي
- ▁وحدة
- ▁ساعة
- حنا
- ▁بكري
- ▁إل
- ▁وبر
- ▁كم
- ▁تبدا
- ارة
- ادي
- رق
- لوا
- ▁يمكن
- ▁خاط
- ▁وص
- جين
- ▁هاذاي
- ▁هز
- قد
- ▁قل
- ▁وكهو
- ▁نص
- ▁دي
- لقى
- ▁وأنا
- سين
- ▁يح
- ▁ماشي
- ▁شو
- ▁خذيت
- امات
- ▁كنت
- خرج
- ▁لقيت
- رتاح
- كس
- ▁حاجات
- ▁مريق
- ▁مل
- ليفون
- اوا
- ▁شفت
- ▁عاملة
- ▁تن
- ▁والا
- سأل
- ▁حد
- ▁قاللك
- ▁العباد
- ▁عالاخ
- ▁وآك
- ▁ماني
- ▁ناخذ
- ▁حم
- ▁الإ
- ▁ماضي
- ▁ث
- الة
- ▁أخرى
- رين
- ▁تشوف
- ▁نخرج
- ▁أربعة
- ▁ألف
- نيش
- ▁هاي
- آ
- ▁فيك
- رشة
- ولة
- فلة
- ▁بابا
- ▁أما
- ▁روحي
- ▁فيهم
- ▁رج
- ▁ليك
- ونس
- يرة
- ▁وأكهو
- ندي
- ▁صار
- شك
- ▁نرو
- ▁آكهو
- ▁تش
- ▁غاديكا
- ▁معاها
- ▁لب
- ▁أذاكا
- ▁آني
- ▁يوم
- عملوا
- ▁نقعد
- دوا
- ▁عد
- سمع
- متني
- ▁الخدمة
- ▁مازلت
- ▁قعدت
- ايا
- ▁برك
- قعد
- ▁خرجت
- ضح
- ▁قالل
- ▁يقول
- ▁وفي
- ▁حق
- ختي
- ▁يعني
- خدم
- ▁جيت
- ▁نرمال
- طف
- ▁عجب
- ▁تقعد
- ▁مشينا
- اية
- ▁خدمة
- لدي
- روف
- ▁الفطر
- ▁مشكل
- ▁سل
- ▁وآنا
- الط
- ▁بالس
- ▁هانا
- ▁أوه
- ▁أذيكا
- ▁وإ
- ▁عليهم
- ▁حالة
- جت
- قضي
- ▁لق
- ▁ونصف
- سعة
- عطيه
- عاو
- خانة
- ▁مخ
- ▁شبيك
- بيعة
- ▁أهوك
- يني
- ▁تعد
- ▁خال
- ▁قريب
- ▁راك
- ▁قالت
- ▁لتو
- ▁أكثر
- اعة
- ▁يظهرلي
- ▁ماشية
- سمعني
- ▁نسيت
- ▁ينج
- ▁الحمدلل
- هدي
- ▁وشن
- ▁تطي
- ▁هنا
- ▁نسمع
- ▁إنتوما
- ▁نحكيلك
- ▁قاعد
- ▁اسمعني
- خرين
- إ
- ماعة
- ▁بالر
- ▁دا
- ▁عمر
- ▁نشري
- ▁قهوة
- ▁تبارك
- ▁صب
- ▁مشات
- غر
- ▁شريت
- ▁عامل
- ▁زوج
- ثنين
- ▁برب
- ريق
- ▁نكم
- ▁لم
- بيب
- ▁مياة
- ▁مالل
- ▁قعد
- ▁سخون
- قس
- ▁وحده
- ▁اسمع
- ▁خمسة
- ▁غالي
- ▁الأو
- رلي
- ▁العظيم
- ▁ترو
- تهم
- كري
- ▁نجيب
- ▁جملة
- قول
- ▁قلتلي
- ▁إيجا
- ▁يقعد
- ▁إيام
- ▁يعطيك
- ▁نخل
- ▁دب
- يمة
- رهبة
- ▁نهز
- ▁محم
- ▁بين
- غار
- ▁نحنا
- ▁بون
- ▁الغ
- ▁شهر
- ▁بار
- رقة
- ▁نطي
- ئ
- ترو
- ▁ملا
- ▁الكرهبة
- ▁باه
- ▁عالإخ
- ▁عباد
- ▁بلاصة
- ▁مشى
- بيع
- ▁نفس
- ▁عملنا
- ▁واح
- ▁أحلاه
- ▁بحذاك
- ▁لأ
- ▁دخ
- باب
- ▁ودر
- ▁غالب
- ▁ناكل
- ▁مثلا
- ء
- ▁راقد
- ▁تفر
- ▁الوقت
- ▁تاخذ
- حذا
- نتر
- ▁نبدا
- ▁حال
- ▁مريم
- الم
- ▁جمعة
- رجول
- ▁معايا
- ▁تخرج
- ▁باس
- ▁ساعات
- ▁عندهم
- ▁نتفر
- مسة
- ▁الجمعة
- بعين
- ▁أكاهو
- ▁ميش
- مراة
- ▁خذا
- ▁ظ
- ▁سيدي
- ▁معاي
- ▁شبيه
- ▁حكا
- ▁سف
- ▁بعضنا
- ▁بالض
- ▁ليلة
- ▁زعما
- ▁الحق
- مضان
- ▁صعيب
- ▁قالتلك
- ً
- ملة
- ▁بق
- عرف
- لاطة
- ▁خرج
- ▁أخت
- ▁تقوللي
- ▁معانا
- ▁صغير
- ▁إسمه
- ▁بعض
- ▁العام
- ▁علينا
- ▁يتع
- ▁فاش
- ▁شع
- ▁معاهم
- ▁يسالش
- ▁لهنا
- ▁سمعت
- ▁البار
- ▁نتصو
- ▁الاخ
- ▁وكان
- وبة
- دمة
- ▁كون
- ▁مبعد
- ▁تسمع
- ▁بعيد
- ▁تاكل
- ▁نلقا
- لامة
- لاثة
- ▁ذ
- ▁تحس
- ▁الواح
- ▁لدار
- ▁فاتت
- ▁تاو
- ▁أحوالك
- ▁عاملين
- ▁كبيرة
- عجب
- ▁بنت
- ▁بيدي
- ▁حكيت
- ▁تحط
- ▁مسكينة
- ▁هاذوكم
- ▁نزيد
- لاث
- ▁عشرة
- ▁عيني
- ▁تعب
- ▁ياكل
- ▁وزيد
- ▁طول
- ▁حمدلله
- ▁وقتاه
- ▁معناه
- ▁وآش
- ▁ووه
- ▁وواحد
- ▁نشوفوا
- ▁عيد
- ▁بصراحة
- ▁بحذانا
- ▁قاعدين
- ▁راجل
- ▁وحدي
- ▁وعشرين
- ▁لين
- ▁خايب
- ▁قالتله
- ▁تهز
- عيد
- ▁كبير
- ▁يعرف
- ▁عارف
- ▁الفلوس
- ▁زايد
- ▁خدمت
- ▁هاذوما
- ▁سلاطة
- ▁فارغة
- ▁ساعتين
- ▁تبد
- ▁راو
- ▁مائة
- ▁بعضهم
- ▁ظاهرلي
- ▁الفازة
- كتب
- ▁القهوة
- سبوك
- ▁زاد
- ▁ضرب
- حكيلي
- ▁فوق
- ▁عاود
- ▁راي
- ▁ومبعد
- ▁حوايج
- ▁دخلت
- ▁يقوللك
- ▁زيد
- ▁زلت
- لفزة
- ▁وقال
- ▁يهب
- ▁يلزمني
- ▁الحمد
- ▁أذي
- طبيعت
- ▁دورة
- ▁عالأقل
- ▁آذاك
- ▁وبال
- ▁الجاي
- عطيني
- ▁ياخذ
- ▁احكيلي
- ▁نهبط
- ▁رقدت
- بلاصة
- ▁عزيز
- ▁صغار
- ▁أقسم
- ▁جيب
- ▁وصلت
- ▁أحوال
- ▁جيست
- ▁جماعة
- سئل
- ▁خوذ
- ▁يهز
- ▁الأخرى
- ▁آلاف
- ▁إسمع
- ▁الحقيقة
- ▁ناقص
- ▁حاط
- ▁موجود
- عباد
- ▁آذيك
- ▁خارج
- ▁الخير
- ▁البنات
- بقى
- ▁طرف
- ▁سينون
- ▁ماذاب
- ▁البحر
- ▁نرقد
- مدلله
- ▁إيجى
- ▁خالتي
- ▁فازة
- ▁بريك
- ▁شريبتك
- ▁تطلع
- ؤ
- ▁المشكلة
- ▁طري
- ▁مادام
- ▁طلبت
- ▁يلعب
- ▁نعاود
- ▁وحدك
- ▁ظاهر
- ٱ
- ژ
- ٍ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
asr_weight: 0.3
mt_weight: 0.0
mtlalpha: 1.0
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
src_token_type: bpe
bpemodel: data/token_list/tgt_bpe_unigram1000/bpe.model
src_bpemodel: data/token_list/src_bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 256
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/st_stats_raw_bpe1000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
extra_asr_decoder: transformer
extra_asr_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
extra_mt_decoder: transformer
extra_mt_decoder_conf:
input_layer: embed
num_blocks: 2
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- src_token_list
- token_list
version: 0.10.6a1
distributed: false
```
</details>
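For quick experimentation outside the recipe, the sketch below loads the model through espnet2's speech-translation inference API. This is a minimal sketch assuming the `espnet2.bin.st_inference` interface and the Hugging Face repo id of this card; `example.wav` is a hypothetical 16 kHz mono recording (matching `fs: 16k` in the frontend config above).

```python
# A hedged inference sketch; not part of the original recipe.
import soundfile as sf
from espnet2.bin.st_inference import Speech2Text

# model_tag assumed to resolve via espnet_model_zoo / Hugging Face
speech2text = Speech2Text.from_pretrained(
    "espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug"
)

speech, rate = sf.read("example.wav")  # hypothetical 16 kHz mono input
nbests = speech2text(speech)
text, tokens, token_ids, hyp = nbests[0]
print(text)  # 1-best translation hypothesis
```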
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "speech-translation"], "datasets": ["iwslt22_dialect"]}
|
espnet/brianyan918_iwslt22_dialect_train_st_conformer_ctc0.3_lr2e-3_warmup15k_newspecaug
| null |
[
"espnet",
"audio",
"speech-translation",
"dataset:iwslt22_dialect",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #speech-translation #dataset-iwslt22_dialect #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ST model
----------------
### 'espnet/brianyan918\_iwslt22\_dialect\_train\_st\_conformer\_ctc0.3\_lr2e-3\_warmup15k\_newspecaug'
This model was trained by Brian Yan using iwslt22\_dialect recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Tue Feb 8 12:54:12 EST 2022'
* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.7a1'
* pytorch version: 'pytorch 1.8.1'
* Git hash: '77fce65312877a132bbae01917ad26b74f6e2e14'
+ Commit date: 'Tue Feb 8 10:48:10 2022 -0500'
st\_train\_st\_conformer\_ctc0.3\_lr2e-3\_warmup15k\_newspecaug\_raw\_bpe\_tc1000\_sp
-------------------------------------------------------------------------------------
### BLEU
dataset: pen2\_st\_model\_valid.URL, bleu\_score: 13.9, verbose\_score: 44.0/21.8/11.4/6.2 (BP = 0.859 ratio = 0.868 hyp\_len = 36614 ref\_len = 42181)
ST config
---------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/brianyan918\\_iwslt22\\_dialect\\_train\\_st\\_conformer\\_ctc0.3\\_lr2e-3\\_warmup15k\\_newspecaug'\n\n\nThis model was trained by Brian Yan using iwslt22\\_dialect recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Feb 8 12:54:12 EST 2022'\n* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.8.1'\n* Git hash: '77fce65312877a132bbae01917ad26b74f6e2e14'\n\t+ Commit date: 'Tue Feb 8 10:48:10 2022 -0500'\n\n\nst\\_train\\_st\\_conformer\\_ctc0.3\\_lr2e-3\\_warmup15k\\_newspecaug\\_raw\\_bpe\\_tc1000\\_sp\n-------------------------------------------------------------------------------------",
"### BLEU\n\n\ndataset: pen2\\_st\\_model\\_valid.URL, bleu\\_score: 13.9, verbose\\_score: 44.0/21.8/11.4/6.2 (BP = 0.859 ratio = 0.868 hyp\\_len = 36614 ref\\_len = 42181)\n\n\nST config\n---------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #speech-translation #dataset-iwslt22_dialect #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/brianyan918\\_iwslt22\\_dialect\\_train\\_st\\_conformer\\_ctc0.3\\_lr2e-3\\_warmup15k\\_newspecaug'\n\n\nThis model was trained by Brian Yan using iwslt22\\_dialect recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Tue Feb 8 12:54:12 EST 2022'\n* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.7a1'\n* pytorch version: 'pytorch 1.8.1'\n* Git hash: '77fce65312877a132bbae01917ad26b74f6e2e14'\n\t+ Commit date: 'Tue Feb 8 10:48:10 2022 -0500'\n\n\nst\\_train\\_st\\_conformer\\_ctc0.3\\_lr2e-3\\_warmup15k\\_newspecaug\\_raw\\_bpe\\_tc1000\\_sp\n-------------------------------------------------------------------------------------",
"### BLEU\n\n\ndataset: pen2\\_st\\_model\\_valid.URL, bleu\\_score: 13.9, verbose\\_score: 44.0/21.8/11.4/6.2 (BP = 0.859 ratio = 0.868 hyp\\_len = 36614 ref\\_len = 42181)\n\n\nST config\n---------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/brianyan918_iwslt22_dialect_transformer_fisherlike`
This model was trained by Brian Yan using iwslt22_dialect recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 77fce65312877a132bbae01917ad26b74f6e2e14
pip install -e .
cd egs2/iwslt22_dialect/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/brianyan918_iwslt22_dialect_transformer_fisherlike
```
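Decoding from Python follows the standard `espnet_model_zoo` pattern; the snippet below is a minimal sketch under that assumption (`utt1.wav` is a hypothetical 16 kHz mono file, not part of this recipe).

```python
# A minimal decoding sketch using the espnet_model_zoo downloader.
import soundfile as sf
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.asr_inference import Speech2Text

d = ModelDownloader()
speech2text = Speech2Text(
    **d.download_and_unpack("espnet/brianyan918_iwslt22_dialect_transformer_fisherlike")
)

speech, rate = sf.read("utt1.wav")  # hypothetical 16 kHz mono utterance
text, tokens, token_ids, hyp = speech2text(speech)[0]
print(text)
```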
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Jan 31 10:15:38 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.8.1`
- Git hash: `99581e0f5af3ad68851d556645e7292771436df9`
- Commit date: `Sat Jan 29 11:32:38 2022 -0500`
## asr_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe1000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/test1|4204|27370|53.4|41.1|5.5|9.5|56.1|88.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/test1|4204|145852|83.8|7.5|8.7|12.2|28.4|88.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/test1|4204|64424|62.9|23.9|13.3|13.4|50.5|88.2|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/transformer_fisherlike_4gpu_bbins16m_fix.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_transformer_fisherlike_4gpu_bbins16m_fix_raw_bpe1000_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 60761
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 3
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe1000_sp/train/speech_shape
- exp/asr_stats_raw_bpe1000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe1000_sp/valid/speech_shape
- exp/asr_stats_raw_bpe1000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /scratch/iwslt22asrdump/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22asrdump/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - /scratch/iwslt22asrdump/raw/dev/wav.scp
- speech
- kaldi_ark
- - /scratch/iwslt22asrdump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 5.0
scheduler: noamlr
scheduler_conf:
model_size: 256
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ّ
- ي
- ا
- ِ
- ل
- َ
- و
- ه
- ة
- م
- ر
- ك
- ▁ما
- ُ
- ب
- ش
- د
- ت
- ▁في
- َّ
- ▁ن
- ▁ي
- ▁ت
- ن
- ▁لا
- ح
- ▁ه
- س
- وا
- ▁م
- ف
- ▁إي
- ع
- ▁ب
- ها
- ط
- ى
- ق
- ▁الل
- ▁أ
- ج
- ▁والل
- ▁و
- ▁إيه
- ▁ا
- ▁يا
- ز
- ▁تو
- ▁بش
- ص
- ▁أه
- خ
- ات
- ▁إنت
- ▁أنا
- نا
- ▁شن
- ▁ق
- ▁ش
- ▁ك
- يت
- ين
- ▁ف
- ار
- ▁قال
- ▁باهي
- ▁ع
- ▁من
- ▁ل
- ▁مش
- ▁كان
- ▁حت
- ▁ول
- هم
- ▁ر
- ان
- ▁س
- ض
- ني
- ▁بال
- ▁على
- ▁متاع
- ▁كي
- ▁ال
- ▁ح
- ▁كل
- ▁آنا
- ▁الم
- ▁خ
- ▁الس
- ▁وال
- ون
- ور
- ▁أم
- ▁هك
- ▁آش
- ▁الد
- ▁عاد
- ▁ج
- ▁معناها
- ▁مع
- اش
- ▁الص
- ▁نهار
- ▁لل
- لها
- ▁تي
- ▁رب
- ▁خاطر
- ▁أكهو
- غ
- ▁شي
- الل
- ام
- تها
- ▁ون
- ▁آك
- ▁فهمت
- وم
- ▁موش
- مشي
- ▁ص
- ▁اليوم
- ▁مر
- ست
- ▁الب
- ▁لاباس
- تلي
- ▁الكل
- ▁عال
- ذ
- ▁فم
- ▁الك
- ▁حاجة
- ▁شوي
- اكا
- ▁ياخي
- ▁هاني
- ▁صح
- اس
- ▁آه
- ▁برشة
- ▁الن
- ▁وت
- ▁الج
- لك
- ▁راهو
- سم
- ▁الح
- مت
- ▁الت
- ▁بعد
- اج
- عد
- ▁انشا
- وش
- لت
- ▁وين
- ث
- ▁ولا
- ▁باش
- ▁فيها
- نت
- ▁إ
- ▁الأ
- ▁الف
- ▁إم
- ▁واحد
- ▁ألو
- ▁عندي
- ▁أك
- ▁خل
- ▁وي
- ▁تعمل
- أ
- ▁ريت
- ▁وأ
- ▁تعرف
- بت
- ▁الع
- ▁مشيت
- ▁وه
- ▁حاصيلو
- ▁بالل
- ▁نعمل
- ▁غ
- ▁تجي
- ▁يجي
- ▁كيفاش
- ▁عملت
- ظ
- اك
- ▁هاو
- ▁اش
- ▁قد
- ▁نق
- ▁د
- ▁زادا
- ▁فيه
- رة
- ▁بر
- ▁الش
- ▁ز
- ▁كيما
- ▁الا
- ند
- عم
- ▁نح
- ▁بنتي
- ▁نمشي
- ▁عليك
- ▁نعرفش
- ▁كهو
- ▁وم
- ▁ط
- تي
- ▁خير
- ▁آ
- مش
- ▁عليه
- له
- حت
- ▁إيا
- ▁أحنا
- ▁تع
- الا
- عب
- ▁ديما
- ▁تت
- ▁جو
- ▁مالا
- ▁أو
- ▁قلتلك
- ▁معنتها
- لنا
- ▁شكون
- ▁تحب
- بر
- ▁الر
- ▁وا
- ▁الق
- اء
- ▁عل
- ▁البارح
- ▁وخ
- ▁سافا
- ▁هوما
- ▁ولدي
- ▁
- ▁نعرف
- يف
- رت
- ▁وب
- ▁روح
- ▁علاش
- ▁هاذاك
- ▁رو
- وس
- ▁جا
- ▁كيف
- طر
- ▁غادي
- يكا
- عمل
- ▁نحب
- ▁عندك
- ▁وما
- ▁فر
- اني
- ▁قلتله
- ▁الط
- فر
- ▁دار
- ▁عليها
- ▁يعمل
- ▁نت
- ▁تح
- باح
- ▁ماهو
- ▁وكل
- ▁وع
- قت
- ▁فهمتك
- عر
- ▁وس
- ▁تر
- ▁سي
- يلة
- ▁قلت
- ▁رمضان
- صل
- ▁آما
- ▁الواحد
- ▁بيه
- ▁ثلاثة
- ▁فهمتني
- ▁ها
- بط
- ▁مازال
- قل
- ▁بالك
- ▁معناتها
- ▁ور
- ▁قلتلها
- ▁يس
- رب
- ▁ام
- ▁وبعد
- ▁الث
- ▁وإنت
- ▁بحذا
- ▁لازم
- ْ
- ▁بن
- قرا
- سك
- ▁يت
- خل
- ▁فه
- عت
- ▁هاك
- ▁تق
- ▁قبل
- ▁وك
- ▁نقول
- ▁الز
- حم
- ▁عادش
- حكي
- وها
- بة
- نس
- طل
- ▁علاه
- ذا
- ▁سا
- ▁طل
- الي
- ▁يق
- ▁دو
- حوا
- حد
- ▁نشوف
- نة
- ▁لي
- ▁تك
- ▁نا
- ▁هاذ
- ▁خويا
- ▁المر
- ▁وينك
- ▁البر
- ▁أتو
- ينا
- ▁حل
- ولي
- ▁ثم
- ▁عم
- ▁آي
- ▁قر
- از
- ▁وح
- كش
- بعة
- ▁كيفاه
- ▁نع
- ▁الحمدلله
- ▁ياسر
- ▁الخ
- ▁معاك
- ▁معاه
- ▁تقول
- دة
- ▁حكاية
- تش
- ▁حس
- ▁غدوا
- ▁بالحق
- روا
- وز
- ▁تخ
- ▁العيد
- رجع
- ▁بالي
- ▁جات
- ▁وج
- حة
- ▁وش
- ▁آخر
- ▁طا
- ▁مت
- لقا
- تك
- ▁مس
- ▁راني
- كون
- ▁صاحب
- ▁هاكا
- ▁قول
- ▁عر
- ▁عنده
- ▁يلزم
- ▁هاذا
- ▁يخ
- ▁وقتاش
- ▁وقت
- بع
- ▁العش
- ▁هاذي
- هاش
- ينة
- ▁هاذاكا
- عطي
- ▁تنج
- ▁باهية
- نيا
- فت
- ▁يحب
- ▁تف
- ▁أهلا
- وف
- ▁غدوة
- ▁بيك
- ▁بد
- عن
- ▁در
- ▁ننج
- هار
- ▁الحكاية
- مون
- وق
- ▁نورمال
- ▁عندها
- خر
- ▁بو
- ▁حب
- ▁آكا
- ▁وف
- ▁هاذيكا
- ▁ديجا
- ▁وق
- ▁طي
- لتل
- بعث
- ▁تص
- رك
- ▁مانيش
- ▁العادة
- ▁شوف
- ضر
- ▁يمشي
- ▁نعملوا
- ▁عرفت
- ▁زال
- ▁متع
- ▁عمل
- ▁بيها
- ▁نحكي
- اع
- ▁نج
- معة
- ▁والكل
- عناها
- ▁يعي
- ▁نجي
- ستن
- ▁هاذيك
- ▁عام
- ▁فلوس
- قة
- تين
- ▁بالقدا
- لهم
- ▁تخدم
- ▁ٱ
- ▁شيء
- ▁راهي
- ▁جاب
- ولاد
- ابل
- ▁ماك
- عة
- ▁نمشيوا
- وني
- شري
- بار
- انس
- ▁وقتها
- ▁جديد
- ▁يز
- ▁كر
- ▁حاسيلو
- ▁شق
- ▁اه
- ▁سايي
- ▁انشالل
- رج
- مني
- ▁بلا
- ▁صحيح
- ▁غير
- ▁يخدم
- مان
- وكا
- ▁عند
- ▁قاعدة
- ▁تس
- ربة
- ▁راس
- ▁حط
- ▁نكل
- تني
- ▁الو
- سيون
- ▁عندنا
- ▁لو
- ▁ست
- صف
- ▁ض
- ▁كامل
- ▁نخدم
- ▁يبدا
- ▁دونك
- ▁أمور
- رات
- ▁تونس
- بدا
- ▁تحكي
- ▁سو
- ▁جاي
- ▁وحدة
- ▁ساعة
- حنا
- ▁بكري
- ▁إل
- ▁وبر
- ▁كم
- ▁تبدا
- ارة
- ادي
- رق
- لوا
- ▁يمكن
- ▁خاط
- ▁وص
- جين
- ▁هاذاي
- ▁هز
- قد
- ▁قل
- ▁وكهو
- ▁نص
- ▁دي
- لقى
- ▁وأنا
- سين
- ▁يح
- ▁ماشي
- ▁شو
- ▁خذيت
- امات
- ▁كنت
- خرج
- ▁لقيت
- رتاح
- كس
- ▁حاجات
- ▁مريق
- ▁مل
- ليفون
- اوا
- ▁شفت
- ▁عاملة
- ▁تن
- ▁والا
- سأل
- ▁حد
- ▁قاللك
- ▁العباد
- ▁عالاخ
- ▁وآك
- ▁ماني
- ▁ناخذ
- ▁حم
- ▁الإ
- ▁ماضي
- ▁ث
- الة
- ▁أخرى
- رين
- ▁تشوف
- ▁نخرج
- ▁أربعة
- ▁ألف
- نيش
- ▁هاي
- آ
- ▁فيك
- رشة
- ولة
- فلة
- ▁بابا
- ▁أما
- ▁روحي
- ▁فيهم
- ▁رج
- ▁ليك
- ونس
- يرة
- ▁وأكهو
- ندي
- ▁صار
- شك
- ▁نرو
- ▁آكهو
- ▁تش
- ▁غاديكا
- ▁معاها
- ▁لب
- ▁أذاكا
- ▁آني
- ▁يوم
- عملوا
- ▁نقعد
- دوا
- ▁عد
- سمع
- متني
- ▁الخدمة
- ▁مازلت
- ▁قعدت
- ايا
- ▁برك
- قعد
- ▁خرجت
- ضح
- ▁قالل
- ▁يقول
- ▁وفي
- ▁حق
- ختي
- ▁يعني
- خدم
- ▁جيت
- ▁نرمال
- طف
- ▁عجب
- ▁تقعد
- ▁مشينا
- اية
- ▁خدمة
- لدي
- روف
- ▁الفطر
- ▁مشكل
- ▁سل
- ▁وآنا
- الط
- ▁بالس
- ▁هانا
- ▁أوه
- ▁أذيكا
- ▁وإ
- ▁عليهم
- ▁حالة
- جت
- قضي
- ▁لق
- ▁ونصف
- سعة
- عطيه
- عاو
- خانة
- ▁مخ
- ▁شبيك
- بيعة
- ▁أهوك
- يني
- ▁تعد
- ▁خال
- ▁قريب
- ▁راك
- ▁قالت
- ▁لتو
- ▁أكثر
- اعة
- ▁يظهرلي
- ▁ماشية
- سمعني
- ▁نسيت
- ▁ينج
- ▁الحمدلل
- هدي
- ▁وشن
- ▁تطي
- ▁هنا
- ▁نسمع
- ▁إنتوما
- ▁نحكيلك
- ▁قاعد
- ▁اسمعني
- خرين
- إ
- ماعة
- ▁بالر
- ▁دا
- ▁عمر
- ▁نشري
- ▁قهوة
- ▁تبارك
- ▁صب
- ▁مشات
- غر
- ▁شريت
- ▁عامل
- ▁زوج
- ثنين
- ▁برب
- ريق
- ▁نكم
- ▁لم
- بيب
- ▁مياة
- ▁مالل
- ▁قعد
- ▁سخون
- قس
- ▁وحده
- ▁اسمع
- ▁خمسة
- ▁غالي
- ▁الأو
- رلي
- ▁العظيم
- ▁ترو
- تهم
- كري
- ▁نجيب
- ▁جملة
- قول
- ▁قلتلي
- ▁إيجا
- ▁يقعد
- ▁إيام
- ▁يعطيك
- ▁نخل
- ▁دب
- يمة
- رهبة
- ▁نهز
- ▁محم
- ▁بين
- غار
- ▁نحنا
- ▁بون
- ▁الغ
- ▁شهر
- ▁بار
- رقة
- ▁نطي
- ئ
- ترو
- ▁ملا
- ▁الكرهبة
- ▁باه
- ▁عالإخ
- ▁عباد
- ▁بلاصة
- ▁مشى
- بيع
- ▁نفس
- ▁عملنا
- ▁واح
- ▁أحلاه
- ▁بحذاك
- ▁لأ
- ▁دخ
- باب
- ▁ودر
- ▁غالب
- ▁ناكل
- ▁مثلا
- ء
- ▁راقد
- ▁تفر
- ▁الوقت
- ▁تاخذ
- حذا
- نتر
- ▁نبدا
- ▁حال
- ▁مريم
- الم
- ▁جمعة
- رجول
- ▁معايا
- ▁تخرج
- ▁باس
- ▁ساعات
- ▁عندهم
- ▁نتفر
- مسة
- ▁الجمعة
- بعين
- ▁أكاهو
- ▁ميش
- مراة
- ▁خذا
- ▁ظ
- ▁سيدي
- ▁معاي
- ▁شبيه
- ▁حكا
- ▁سف
- ▁بعضنا
- ▁بالض
- ▁ليلة
- ▁زعما
- ▁الحق
- مضان
- ▁صعيب
- ▁قالتلك
- ً
- ملة
- ▁بق
- عرف
- لاطة
- ▁خرج
- ▁أخت
- ▁تقوللي
- ▁معانا
- ▁صغير
- ▁إسمه
- ▁بعض
- ▁العام
- ▁علينا
- ▁يتع
- ▁فاش
- ▁شع
- ▁معاهم
- ▁يسالش
- ▁لهنا
- ▁سمعت
- ▁البار
- ▁نتصو
- ▁الاخ
- ▁وكان
- وبة
- دمة
- ▁كون
- ▁مبعد
- ▁تسمع
- ▁بعيد
- ▁تاكل
- ▁نلقا
- لامة
- لاثة
- ▁ذ
- ▁تحس
- ▁الواح
- ▁لدار
- ▁فاتت
- ▁تاو
- ▁أحوالك
- ▁عاملين
- ▁كبيرة
- عجب
- ▁بنت
- ▁بيدي
- ▁حكيت
- ▁تحط
- ▁مسكينة
- ▁هاذوكم
- ▁نزيد
- لاث
- ▁عشرة
- ▁عيني
- ▁تعب
- ▁ياكل
- ▁وزيد
- ▁طول
- ▁حمدلله
- ▁وقتاه
- ▁معناه
- ▁وآش
- ▁ووه
- ▁وواحد
- ▁نشوفوا
- ▁عيد
- ▁بصراحة
- ▁بحذانا
- ▁قاعدين
- ▁راجل
- ▁وحدي
- ▁وعشرين
- ▁لين
- ▁خايب
- ▁قالتله
- ▁تهز
- عيد
- ▁كبير
- ▁يعرف
- ▁عارف
- ▁الفلوس
- ▁زايد
- ▁خدمت
- ▁هاذوما
- ▁سلاطة
- ▁فارغة
- ▁ساعتين
- ▁تبد
- ▁راو
- ▁مائة
- ▁بعضهم
- ▁ظاهرلي
- ▁الفازة
- كتب
- ▁القهوة
- سبوك
- ▁زاد
- ▁ضرب
- حكيلي
- ▁فوق
- ▁عاود
- ▁راي
- ▁ومبعد
- ▁حوايج
- ▁دخلت
- ▁يقوللك
- ▁زيد
- ▁زلت
- لفزة
- ▁وقال
- ▁يهب
- ▁يلزمني
- ▁الحمد
- ▁أذي
- طبيعت
- ▁دورة
- ▁عالأقل
- ▁آذاك
- ▁وبال
- ▁الجاي
- عطيني
- ▁ياخذ
- ▁احكيلي
- ▁نهبط
- ▁رقدت
- بلاصة
- ▁عزيز
- ▁صغار
- ▁أقسم
- ▁جيب
- ▁وصلت
- ▁أحوال
- ▁جيست
- ▁جماعة
- سئل
- ▁خوذ
- ▁يهز
- ▁الأخرى
- ▁آلاف
- ▁إسمع
- ▁الحقيقة
- ▁ناقص
- ▁حاط
- ▁موجود
- عباد
- ▁آذيك
- ▁خارج
- ▁الخير
- ▁البنات
- بقى
- ▁طرف
- ▁سينون
- ▁ماذاب
- ▁البحر
- ▁نرقد
- مدلله
- ▁إيجى
- ▁خالتي
- ▁فازة
- ▁بريك
- ▁شريبتك
- ▁تطلع
- ؤ
- ▁المشكلة
- ▁طري
- ▁مادام
- ▁طلبت
- ▁يلعب
- ▁نعاود
- ▁وحدك
- ▁ظاهر
- ٱ
- ژ
- ٍ
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_bpe1000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.6a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["iwslt22_dialect"]}
|
espnet/brianyan918_iwslt22_dialect_transformer_fisherlike
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:iwslt22_dialect",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #automatic-speech-recognition #dataset-iwslt22_dialect #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/brianyan918\_iwslt22\_dialect\_transformer\_fisherlike'
This model was trained by Brian Yan using iwslt22\_dialect recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Mon Jan 31 10:15:38 EST 2022'
* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.6a1'
* pytorch version: 'pytorch 1.8.1'
* Git hash: '99581e0f5af3ad68851d556645e7292771436df9'
+ Commit date: 'Sat Jan 29 11:32:38 2022 -0500'
asr\_transformer\_fisherlike\_4gpu\_bbins16m\_fix\_raw\_bpe1000\_sp
-------------------------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/brianyan918\\_iwslt22\\_dialect\\_transformer\\_fisherlike'\n\n\nThis model was trained by Brian Yan using iwslt22\\_dialect recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Mon Jan 31 10:15:38 EST 2022'\n* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.8.1'\n* Git hash: '99581e0f5af3ad68851d556645e7292771436df9'\n\t+ Commit date: 'Sat Jan 29 11:32:38 2022 -0500'\n\n\nasr\\_transformer\\_fisherlike\\_4gpu\\_bbins16m\\_fix\\_raw\\_bpe1000\\_sp\n-------------------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #dataset-iwslt22_dialect #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/brianyan918\\_iwslt22\\_dialect\\_transformer\\_fisherlike'\n\n\nThis model was trained by Brian Yan using iwslt22\\_dialect recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Mon Jan 31 10:15:38 EST 2022'\n* python version: '3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.6a1'\n* pytorch version: 'pytorch 1.8.1'\n* Git hash: '99581e0f5af3ad68851d556645e7292771436df9'\n\t+ Commit date: 'Sat Jan 29 11:32:38 2022 -0500'\n\n\nasr\\_transformer\\_fisherlike\\_4gpu\\_bbins16m\\_fix\\_raw\\_bpe1000\\_sp\n-------------------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `byan/librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_accum_grad3_optim_conflr0.001_sp`
♻️ Imported from https://huggingface.co/
This model was trained by byan using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
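In the meantime, a hedged sketch of the likely usage, assuming the model name resolves through `espnet_model_zoo` (the exact tag may differ from the name shown above):

```python
# Not the author's official example; a sketch under the assumptions above.
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "byan/librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_accum_grad3_optim_conflr0.001_sp"
)
speech, rate = sf.read("sample.wav")  # hypothetical 16 kHz LibriSpeech-style audio
print(speech2text(speech)[0][0])  # 1-best transcript
```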
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/byan_librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_ac-truncated-68a97b
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR pretrained model
### 'byan/librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_accum_grad3_optim_conflr0.001_sp'
️ Imported from URL
This model was trained by byan using librispeech/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR pretrained model",
"### 'byan/librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_accum_grad3_optim_conflr0.001_sp'\n️ Imported from URL\n\nThis model was trained by byan using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR pretrained model",
"### 'byan/librispeech_asr_train_asr_conformer_raw_bpe_batch_bins30000000_accum_grad3_optim_conflr0.001_sp'\n️ Imported from URL\n\nThis model was trained by byan using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
audio-to-audio
|
espnet
|
# ESPnet2 ENH pretrained model
## `Chenda Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave, fs=8k, lang=en`
♻️ Imported from <https://zenodo.org/record/4498562#.YOAOApozZH4>.
This model was trained by Chenda Li using wsj0_2mix recipe in [espnet](https://github.com/espnet/espnet/).
### Python API
```text
See https://github.com/espnet/espnet_model_zoo
```
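A minimal separation sketch, assuming the `espnet2.bin.enh_inference` API and the model-zoo name from the heading; `mix.wav` is a hypothetical two-speaker mixture at 8 kHz (the model was trained with fs=8k):

```python
# A hedged separation sketch; not code from the original card.
import soundfile as sf
from espnet_model_zoo.downloader import ModelDownloader
from espnet2.bin.enh_inference import SeparateSpeech

d = ModelDownloader()
separate_speech = SeparateSpeech(
    **d.download_and_unpack("Chenda Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave")
)

mixwav, fs = sf.read("mix.wav")  # hypothetical 8 kHz two-speaker mixture
waves = separate_speech(mixwav[None, :], fs=fs)  # one (1, n_samples) array per speaker
for i, wav in enumerate(waves):
    sf.write(f"speaker{i + 1}.wav", wav.squeeze(0), fs)
```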
### Evaluate in the recipe
```python
# coming soon
```
### Results
```bash
# RESULTS
## Environments
- date: `Thu Feb 4 01:16:18 CST 2021`
- python version: `3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0]`
- espnet version: `espnet 0.9.7`
- pytorch version: `pytorch 1.5.0`
- Git hash: `a3334220b0352931677946d178fade3313cf82bb`
- Commit date: `Fri Jan 29 23:35:47 2021 +0800`
## enh_train_enh_conv_tasnet_raw
config: ./conf/tuning/train_enh_conv_tasnet.yaml
|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_cv_min_8k|0.949205|17.3785|16.8028|26.9785|
|enhanced_tt_min_8k|0.95349|16.6221|15.9494|25.9032|
```
### Training config
See full config in [`config.yaml`](./exp/enh_train_enh_conv_tasnet_raw/config.yaml)
```yaml
config: ./conf/tuning/train_enh_conv_tasnet.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_conv_tasnet_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "audio-source-separation", "audio-to-audio"], "datasets": ["wsj0_2mix"], "inference": false}
|
espnet/chenda-li-wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave
| null |
[
"espnet",
"audio",
"audio-source-separation",
"audio-to-audio",
"en",
"dataset:wsj0_2mix",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#espnet #audio #audio-source-separation #audio-to-audio #en #dataset-wsj0_2mix #license-cc-by-4.0 #region-us
|
# ESPnet2 ENH pretrained model
## 'Chenda Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave, fs=8k, lang=en'
️ Imported from <URL
This model was trained by Chenda Li using wsj0_2mix recipe in espnet.
### Python API
### Evaluate in the recipe
### Results
### Training config
See full config in 'URL'
|
[
"# ESPnet2 ENH pretrained model",
"## 'Chenda Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave, fs=8k, lang=en'\n\n️ Imported from <URL\n\nThis model was trained by Chenda Li using wsj0_2mix recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
[
"TAGS\n#espnet #audio #audio-source-separation #audio-to-audio #en #dataset-wsj0_2mix #license-cc-by-4.0 #region-us \n",
"# ESPnet2 ENH pretrained model",
"## 'Chenda Li/wsj0_2mix_enh_train_enh_conv_tasnet_raw_valid.si_snr.ave, fs=8k, lang=en'\n\n️ Imported from <URL\n\nThis model was trained by Chenda Li using wsj0_2mix recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
audio-to-audio
|
espnet
|
# ESPnet2 ENH pretrained model
## `Chenda Li/wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave, fs=8k, lang=en`
♻️ Imported from <https://zenodo.org/record/4498554#.YOAOEpozZH4>.
This model was trained by Chenda Li using wsj0_2mix recipe in [espnet](https://github.com/espnet/espnet/).
### Python API
```text
See https://github.com/espnet/espnet_model_zoo
```
### Evaluate in the recipe
```python
# coming soon
```
### Results
```bash
# RESULTS
## Environments
- date: `Thu Feb 4 01:08:19 CST 2021`
- python version: `3.7.6 (default, Jan 8 2020, 19:59:22) [GCC 7.3.0]`
- espnet version: `espnet 0.9.7`
- pytorch version: `pytorch 1.5.0`
- Git hash: `a3334220b0352931677946d178fade3313cf82bb`
- Commit date: `Fri Jan 29 23:35:47 2021 +0800`
## enh_train_enh_rnn_tf_raw
config: conf/tuning/train_enh_rnn_tf.yaml
|dataset|STOI|SAR|SDR|SIR|
|---|---|---|---|---|
|enhanced_cv_min_8k|0.891065|11.556|10.3982|18.0655|
|enhanced_tt_min_8k|0.896373|11.4086|10.2433|18.0496|
```
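The checkpoint is selected by averaged validation SI-SNR (`valid.si_snr.ave` in the model name). For reference, scale-invariant SNR can be computed as in the sketch below (a standard definition, not code from this recipe):

```python
import numpy as np

def si_snr(est: np.ndarray, ref: np.ndarray, eps: float = 1e-8) -> float:
    """Scale-invariant signal-to-noise ratio in dB for 1-D signals."""
    est = est - est.mean()
    ref = ref - ref.mean()
    # Project the estimate onto the reference to get the target component.
    target = np.dot(est, ref) / (np.dot(ref, ref) + eps) * ref
    noise = est - target
    return 10.0 * np.log10(np.dot(target, target) / (np.dot(noise, noise) + eps))
```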
### Training config
See full config in [`config.yaml`](./exp/enh_train_enh_rnn_tf_raw/config.yaml)
```yaml
config: conf/tuning/train_enh_rnn_tf.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/enh_train_enh_rnn_tf_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "audio-source-separation", "audio-to-audio"], "datasets": ["wsj0_2mix"], "inference": false}
|
espnet/chenda-li-wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave
| null |
[
"espnet",
"audio",
"audio-source-separation",
"audio-to-audio",
"en",
"dataset:wsj0_2mix",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[] |
[
"en"
] |
TAGS
#espnet #audio #audio-source-separation #audio-to-audio #en #dataset-wsj0_2mix #license-cc-by-4.0 #region-us
|
# ESPnet2 ENH pretrained model
## 'Chenda Li/wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave, fs=8k, lang=en'
️ Imported from <URL
This model was trained by Chenda Li using wsj0_2mix recipe in espnet.
### Python API
### Evaluate in the recipe
### Results
### Training config
See full config in 'URL'
|
[
"# ESPnet2 ENH pretrained model",
"## 'Chenda Li/wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave, fs=8k, lang=en'\n\n️ Imported from <URL\n\nThis model was trained by Chenda Li using wsj0_2mix recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
[
"TAGS\n#espnet #audio #audio-source-separation #audio-to-audio #en #dataset-wsj0_2mix #license-cc-by-4.0 #region-us \n",
"# ESPnet2 ENH pretrained model",
"## 'Chenda Li/wsj0_2mix_enh_train_enh_rnn_tf_raw_valid.si_snr.ave, fs=8k, lang=en'\n\n️ Imported from <URL\n\nThis model was trained by Chenda Li using wsj0_2mix recipe in espnet.",
"### Python API",
"### Evaluate in the recipe",
"### Results",
"### Training config\n\nSee full config in 'URL'"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/ftshijt_espnet2_asr_puebla_nahuatl_transfer`
This model was trained by ftshijt using puebla_nahuatl recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/puebla_nahuatl/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_puebla_nahuatl_transfer
```
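Note that this model uses an s3prl HuBERT frontend (`frontend: s3prl`, `upstream: hubert_large_ll60k` in the config below), so `s3prl` must be installed alongside espnet for inference. A hedged Python sketch, assuming espnet2's `from_pretrained` helper:

```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/ftshijt_espnet2_asr_puebla_nahuatl_transfer"
)
speech, rate = sf.read("nahuatl_utt.wav")  # hypothetical 16 kHz mono recording
print(speech2text(speech)[0][0])
```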
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Nov 7 18:16:55 EST 2021`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.4a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: ``
- Commit date: ``
## asr_train_asr_transformer_hubert_raw_bpe500_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|10576|90532|77.0|17.0|6.0|3.6|26.6|74.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|10576|590273|92.2|2.1|5.7|3.0|10.8|74.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|10576|242435|86.0|7.3|6.8|3.5|17.5|74.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transformer_hubert.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_hubert_raw_bpe500_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 15
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 32
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe500_sp/train/speech_shape
- exp/asr_stats_raw_bpe500_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe500_sp/valid/speech_shape
- exp/asr_stats_raw_bpe500_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /tmp/jiatong-150390.uytFFbyG/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - /tmp/jiatong-150390.uytFFbyG/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - /tmp/jiatong-150390.uytFFbyG/raw/dev/wav.scp
- speech
- kaldi_ark
- - /tmp/jiatong-150390.uytFFbyG/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ':'
- N
- ▁A
- ▁WA
- ▁KE
- ▁YO
- ▁NE
- ▁SE
- H
- MO
- WA
- ''''
- ▁NO
- ▁I
- ▁N
- S
- ▁KI
- K
- ▁
- MAH
- KA
- TA
- L
- ▁POS
- PA
- ▁KA
- ▁TA
- ▁MO
- T
- ▁YEHWA
- I
- MEH
- ▁YA
- ▁DE
- MA
- A
- ▁TE
- TI
- TSI
- NI
- CHI
- ▁PERO
- KI
- LI
- TO
- WI
- ▁PARA
- KO
- E
- ▁O
- ▁IKA
- TE
- O
- W
- ▁NEH
- ▁NOCHI
- CH
- ▁TI
- ▁TIK
- LO
- ▁SAH
- ▁MAH
- NA
- LA
- ▁OMPA
- ▁IHKÓ
- YA
- ▁NI
- ▁PORQUE
- ▁MA
- YO
- ▁TEIN
- LIA
- ▁E
- MPA
- ▁NIKA
- X
- YAH
- ▁KWALTSI
- SA
- TSA
- ▁MOCHI
- ▁NIK
- ▁WE
- ▁TO
- TSÍ
- ▁SEMI
- ▁KITA
- WAK
- KWI
- MI
- ▁MM
- ▁XO
- ▁SEKI
- JÓ
- AH
- ▁KOMO
- R
- NE
- ▁OK
- ▁KWALI
- ▁CHI
- ▁YEH
- ▁NELI
- SE
- PO
- WAH
- PI
- ME
- KWA
- ▁PA
- ▁ONKAK
- KE
- ▁YE
- ▁T
- LTIK
- ▁TEHWA
- TAH
- ▁TIKI
- ▁QUE
- ▁NIKI
- PE
- ▁IWKI
- XI
- TOK
- ▁TAMAN
- ▁KO
- TSO
- LE
- RA
- SI
- WÍ
- MAN
- ▁TIMO
- 'NO'
- SO
- ▁MIAK
- U
- ▁TEH
- ▁KICHI
- ▁XA
- WE
- ▁KOW
- KEH
- NÍ
- LIK
- ▁ITECH
- TIH
- ▁PE
- ▁KIPIA
- ▁CUANDO
- ▁KWALTIA
- ▁HASTA
- LOWA
- ▁ENTÓ
- ▁NA
- XO
- RO
- TIA
- ▁NIKITA
- CHIHCHI
- ▁SEPA
- ▁MAHYÁ
- ▁PAHTI
- ▁K
- LIAH
- ▁SAYOH
- MATI
- ▁PI
- TS
- ▁MÁS
- XMATI
- KAH
- ▁XI
- M
- ▁ESTE
- HKO
- KOWIT
- MIKI
- CHO
- ▁TAK
- Á
- ▁KILIAH
- CHIO
- ▁KIHTOWA
- ▁KITE
- NEKI
- ▁ME
- XA
- ▁TEL
- B
- ▁KOWIT
- ▁ATA
- TIK
- ▁EKINTSI
- ▁IMA
- ▁KWA
- ▁OSO
- ▁NEHJÓ
- ▁ITEYO
- Y
- SKEH
- ▁ISTA
- ▁NIKILIA
- LIH
- ▁TIKWI
- ▁PANÉ
- KOWA
- ▁OX
- TEKI
- ▁SA
- NTE
- ▁KIKWI
- TSITSI
- NOH
- AHSI
- ▁IXO
- WIA
- LTSI
- ▁KIMA
- C
- ▁WEHWEI
- ▁TEPITSI
- ▁IHK
- ▁XIWIT
- YI
- LIS
- ▁CA
- XMATTOK
- SÁ
- ▁MOTA
- RE
- ▁TIKIHTO
- ▁MI
- ▁X
- D
- ▁SAN
- WIH
- ▁WEHKA
- KWE
- CHA
- ▁SI
- KTIK
- ▁YETOK
- ▁MOKA
- NEMI
- LILIA
- ▁¿
- TIW
- ▁KIHTOWAH
- LTI
- Ó
- MASÁ
- ▁POR
- ▁TIKITA
- KETSA
- ▁IWA
- METS
- YOH
- ▁TAKWA
- HKEH
- ▁KIKWIH
- ▁KIKWA
- NIA
- ▁ACHI
- ▁KIKWAH
- ▁KACHI
- ▁PO
- ▁IGUAL
- NAL
- ▁PILI
- ▁NIMAN
- YE
- ▁NIKMATI
- WIAH
- ▁KIPA
- ▁M
- J
- ▁KWI
- ▁WI
- WAYA
- Z
- ▁KITEKI
- G
- ▁'
- ▁IHKO
- CE
- ▁TONI
- ▁TSIKITSI
- P
- DO
- TOKEH
- NIK
- ▁TIKILIAH
- ▁KOWTAH
- ▁TAI
- ▁TATA
- TIAH
- CA
- PIL
- CHOWA
- ▁KIMATI
- ▁TAMA
- XKA
- XIWIT
- TOS
- KILIT
- ILWI
- SKI
- YEH
- DA
- WAYO
- ▁TAPA
- ▁NIMO
- CHIT
- ▁NIMITS
- ▁KINA
- PAHTI
- RI
- ▁BUENO
- ▁ESKI
- WAYAH
- PANO
- KOW
- WEYAK
- LPAN
- LTIA
- ▁KITO
- CO
- ▁TINE
- KIH
- JO
- ▁KATKA
- ▁TIKTA
- PAHTIA
- ▁XIWTSI
- ▁CHIKA
- ▁KANAH
- ▁KOYO
- MPI
- ▁IXIWYO
- IHTIK
- ▁KWE
- ▁XIW
- WILIA
- XTIK
- ▁VE
- ▁TIKMATI
- ▁KOKOLIS
- LKWI
- ▁AHKO
- MEKAT
- ▁TIKMA
- ▁NIMITSILIA
- ▁MITS
- XTA
- ▁CO
- ▁KOMA
- ▁KOMOHKÓ
- F
- ▁OKSEKI
- ▁TEISÁ
- ▁ESO
- ▁IKOWYO
- ▁ES
- TOHTO
- XTI
- ▁TSI
- ▁TIKO
- PIHPI
- ▁OKSÉ
- ▁WEHKAPAN
- KALAKI
- ▁WEL
- ▁MIGUEL
- TEKITI
- ▁TOKNI
- ROWA
- ▁MOSKALTIA
- Í
- XOKO
- ▁TIKCHI
- ▁EHE
- ▁KWO
- LPI
- HTOK
- TSTI
- TÍ
- ▁TEIHSÁ
- KILO
- ▁PUES
- SKIA
- HTIW
- LILIAH
- ▁IHWA
- ▁KOSTIK
- ▁TIKIHTOWAH
- ▁CHA
- ▁COMO
- ▁KIMANA
- CU
- TAMAN
- WITS
- ▁KOKO
- ILPIA
- ▁NIMONO
- ▁WELI
- ▁NIKWI
- WTOK
- ▁KINEKI
- KOKOH
- ▁P
- LTIAH
- XKO
- ▁ONKAYA
- TAPOWI
- MATTOK
- ▁MISMO
- ▁NIKIHTO
- ▁NIKMATTOK
- MESKIA
- ▁SOH
- KWOWIT
- XTIA
- WELITA
- ▁DESPUÉS
- ▁IXWA
- ZA
- TSAPOT
- SKAL
- ▁SIEMPRE
- TINEMI
- Ñ
- ▁ESKIA
- NELOWA
- ▁TZINACAPAN
- ▁DI
- XIWYO
- ▁AHA
- ▁AHWIA
- É
- ▁KIKWIAH
- MATTOKEH
- ▁ACHTO
- XTILIA
- TAPAL
- ▁KIHTO
- TEHTE
- ▁PORIN
- ▁TSOPE
- ▁KAHFE
- GU
- ▁NIMITSTAHTANI
- ▁TAHTA
- ▁KOWTATI
- ISWAT
- ▁TIKPIA
- ▁KOMEKAT
- TIOWIH
- ▁TIMONOHNO
- ▁TIEMPO
- WEHKA
- QUI
- ▁TIHTI
- ▁XOXOKTIK
- ▁TAXKAL
- EHE
- ▁AJÁ
- NANAKAT
- NIWKI
- ▁CI
- ▁ITSMOL
- ▁NIKPIA
- TEKPA
- ▁BO
- ▁TASOHKA
- Ú
- ¡
- '8'
- '9'
- '0'
- '1'
- '2'
- ¿
- Ò
- '4'
- À
- '7'
- '5'
- '3'
- ́
- V
- ̈
- Ï
- '6'
- Q
- Ì
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: s3prl
frontend_conf:
frontend_conf:
upstream: hubert_large_ll60k
download_dir: ./hub
multilayer_feature: true
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: linear
preencoder_conf:
input_size: 1024
output_size: 80
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["puebla_nahuatl"]}
|
espnet/ftshijt_espnet2_asr_puebla_nahuatl_transfer
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:puebla_nahuatl",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #automatic-speech-recognition #dataset-puebla_nahuatl #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/ftshijt\_espnet2\_asr\_puebla\_nahuatl\_transfer'
This model was trained by ftshijt using puebla\_nahuatl recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Sun Nov 7 18:16:55 EST 2021'
* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.4a1'
* pytorch version: 'pytorch 1.9.0'
* Git hash: ''
+ Commit date: ''
asr\_train\_asr\_transformer\_hubert\_raw\_bpe500\_sp
-----------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/ftshijt\\_espnet2\\_asr\\_puebla\\_nahuatl\\_transfer'\n\n\nThis model was trained by ftshijt using puebla\\_nahuatl recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Sun Nov 7 18:16:55 EST 2021'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.4a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: ''\n\t+ Commit date: ''\n\n\nasr\\_train\\_asr\\_transformer\\_hubert\\_raw\\_bpe500\\_sp\n-----------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #dataset-puebla_nahuatl #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/ftshijt\\_espnet2\\_asr\\_puebla\\_nahuatl\\_transfer'\n\n\nThis model was trained by ftshijt using puebla\\_nahuatl recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Sun Nov 7 18:16:55 EST 2021'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.4a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: ''\n\t+ Commit date: ''\n\n\nasr\\_train\\_asr\\_transformer\\_hubert\\_raw\\_bpe500\\_sp\n-----------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/ftshijt_espnet2_asr_totonac_transformer`
This model was trained by ftshijt using totonac recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/totonac/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_totonac_transformer
```
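The reported results decode with a language model (`decode_asr_lm_...` below). A hedged sketch of decoding from Python with explicit search weights; the numeric values are illustrative assumptions, not the recipe's decode config:

```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/ftshijt_espnet2_asr_totonac_transformer",
    beam_size=10,    # illustrative
    ctc_weight=0.3,  # matches the training ctc_weight; decode-time value assumed
    lm_weight=0.3,   # illustrative LM weight
)
speech, rate = sf.read("totonac_utt.wav")  # hypothetical 16 kHz input
print(speech2text(speech)[0][0])
```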
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Sun Nov 7 09:22:09 EST 2021`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.4a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: ``
- Commit date: ``
## asr_train_asr_transformer_specaug_raw_bpe250_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/dev|530|3547|59.8|32.9|7.3|6.5|46.7|87.4|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/test|704|5018|55.5|35.7|8.8|6.1|50.6|92.0|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/dev|530|22510|88.1|4.4|7.4|3.9|15.8|87.4|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/test|704|32990|86.9|4.3|8.8|4.0|17.1|92.0|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/dev|530|9360|70.3|15.8|13.8|4.3|34.0|87.4|
|decode_asr_lm_lm_train_bpe250_valid.loss.ave_asr_model_valid.acc.best/test|704|13835|70.5|16.0|13.6|4.4|33.9|92.0|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transformer_specaug.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_specaug_raw_bpe250_sp
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 15
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 32
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe250_sp/train/speech_shape
- exp/asr_stats_raw_bpe250_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe250_sp/valid/speech_shape
- exp/asr_stats_raw_bpe250_sp/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /tmp/jiatong-7359.okvPvI3Z/raw/train_sp/wav.scp
- speech
- kaldi_ark
- - /tmp/jiatong-7359.okvPvI3Z/raw/train_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - /tmp/jiatong-7359.okvPvI3Z/raw/dev/wav.scp
- speech
- kaldi_ark
- - /tmp/jiatong-7359.okvPvI3Z/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
warmup_steps: 4000
token_list:
- <blank>
- <unk>
- ':'
- ▁N
- NI
- N
- ▁IYMA
- ▁NA
- NA
- ▁WA
- WA
- ▁
- ''''
- KA
- ▁MA
- MA
- T
- ▁XA
- TA
- NCHU
- WI
- ▁LI
- ▁NI
- PA
- YI
- ▁PUS
- K
- ▁PI
- ▁X
- S
- ▁TA
- YA
- ▁LA
- Q
- QA
- TI
- ▁KA
- QO
- W
- ▁KAH
- ▁PALA
- H
- X
- XA
- ▁KI
- A
- LH
- I
- LA
- ▁CHA
- ▁A
- ▁XLI
- ▁LHI
- U
- ▁K
- KANI
- KU
- Y
- ▁LU
- Á
- ▁CHU
- O
- KI
- ▁KIWI
- NTLA
- ▁TLA
- M
- ▁TAWA
- ▁TI
- ▁S
- WANI
- CHA
- LHI
- LI
- ▁TU
- ▁PALHA
- Í
- ▁CHANÁ
- ▁KILHWAMPA
- KÁN
- ▁WAYMA
- E
- SA
- ▁E
- ▁LHU
- LHA
- PU
- ▁LHA
- ▁PA
- ▁LAK
- ▁ANTA
- ▁KITI
- NCHÚ
- SI
- TLA
- PI
- ▁KINI
- CHI
- ▁PEROH
- ▁PU
- QÓ
- QALHCHIWINA
- TU
- ▁TLHA
- ▁WI
- NÁ
- ▁KAN
- ▁NAYI
- CH
- 'NO'
- ▁U
- TSA
- MÁ
- NQO
- ▁ANA
- ▁LIKWA
- ▁XTA
- J
- ▁QALH
- TO
- TÁ
- ▁USA
- ▁PORQUE
- ▁MI
- L
- ▁TAWÁ
- XI
- LHAQAPASA
- P
- CHIWI
- WÁ
- NTI
- ▁JKA
- Ú
- NTLHA
- R
- TSI
- C
- STA
- ▁LH
- LHU
- MPI
- ▁I
- ▁NILH
- ▁KATSI
- ▁LHAK
- MAKLHAKASKI
- ▁WANIKÁN
- ▁WIXI
- ▁TSI
- KÚ
- NÍ
- ▁PAKS
- NU
- TLHA
- YÁ
- KUCHAN
- XAQATLI
- ▁MAX
- ▁LAQAPASA
- ▁LAQ
- QALH
- KATSI
- Ó
- LAQAPASA
- ▁J
- ▁QAMA
- NTU
- MI
- KIWI
- ▁KIN
- ▁XANAT
- ▁CHI
- JA
- ▁IY
- ▁TSU
- MAKLAKAS
- ▁MAQA
- LÁ
- ▁KATSIYA
- ▁TLANKA
- ▁STAK
- ▁XLA
- ▁LHIKWA
- ▁SQA
- ▁P
- TAHNA
- ▁TLAQ
- ▁JKATSI
- MAKLAKASKINKA
- YÁW
- WATIYA
- CHÁ
- ▁IPORQUEI
- ▁AKXNI
- TSU
- ▁TSINÓ
- ▁STAKA
- ▁AKXNÍ
- LAKATA
- KATSÍ
- ▁XALHAK
- TLAWAYA
- SPUT
- ▁XATAWA
- QALHCHIWI
- PÁ
- JU
- ▁XAXANAT
- ▁PÉREZ
- ▁AKTSU
- ▁JKI
- NTÚ
- ▁KATSIYÁ
- ▁IESTEI
- LAQAPASÁ
- ▁MASKI
- ▁LAQSQATÁ
- ▁TLHANKA
- ▁WANIKANI
- ▁LÓPEZ
- MAKLAKASKINKÁN
- ▁ANTÁ
- ▁TACHIWÍ
- ▁SEBAST
- ▁CANO
- ▁XKUTNI
- ▁UKXILH
- TANKAH
- LAKASKINQO
- LAKAPASTAK
- ▁XCHACHAT
- TAKAWANÍ
- ▁TLÁ
- ▁TSINOH
- KAXTLAWA
- ▁NÚÑEZ
- ▁XLAKASKINKA
- ▁WÁTIYA
- ONCE
- Z
- É
- D
- Ñ
- V
- F
- G
- '1'
- B
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram250/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_bpe250_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 256
attention_heads: 4
attention_dropout_rate: 0.0
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["totonac"]}
|
espnet/ftshijt_espnet2_asr_totonac_transformer
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:totonac",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #automatic-speech-recognition #dataset-totonac #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/ftshijt\_espnet2\_asr\_totonac\_transformer'
This model was trained by ftshijt using totonac recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Sun Nov 7 09:22:09 EST 2021'
* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.4a1'
* pytorch version: 'pytorch 1.9.0'
* Git hash: ''
+ Commit date: ''
asr\_train\_asr\_transformer\_specaug\_raw\_bpe250\_sp
------------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/ftshijt\\_espnet2\\_asr\\_totonac\\_transformer'\n\n\nThis model was trained by ftshijt using totonac recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Sun Nov 7 09:22:09 EST 2021'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.4a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: ''\n\t+ Commit date: ''\n\n\nasr\\_train\\_asr\\_transformer\\_specaug\\_raw\\_bpe250\\_sp\n------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #dataset-totonac #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/ftshijt\\_espnet2\\_asr\\_totonac\\_transformer'\n\n\nThis model was trained by ftshijt using totonac recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Sun Nov 7 09:22:09 EST 2021'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.4a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: ''\n\t+ Commit date: ''\n\n\nasr\\_train\\_asr\\_transformer\\_specaug\\_raw\\_bpe250\\_sp\n------------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR model
### `espnet/ftshijt_espnet2_asr_yolo_mixtec_transformer`
This model was trained by ftshijt using yolo_mixtec recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
pip install -e .
cd egs2/yolo_mixtec/asr1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/ftshijt_espnet2_asr_yolo_mixtec_transformer
```
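A hedged batch-decoding sketch (assumed espnet2 API; file names are placeholders). Decoded strings carry the corpus's tone digits, as in the token list below:

```python
import soundfile as sf
from espnet2.bin.asr_inference import Speech2Text

speech2text = Speech2Text.from_pretrained(
    "espnet/ftshijt_espnet2_asr_yolo_mixtec_transformer"
)
for path in ["utt1.wav", "utt2.wav"]:  # hypothetical 16 kHz recordings
    speech, rate = sf.read(path)
    print(path, speech2text(speech)[0][0])
```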
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Wed Nov 10 02:59:39 EST 2021`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.4a1`
- pytorch version: `pytorch 1.9.0`
- Git hash: ``
- Commit date: ``
## asr_train_asr_transformer_specaug_raw_bpe500
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|4985|81348|84.1|11.8|4.1|2.5|18.3|82.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|4985|626187|93.4|2.2|4.4|2.4|9.0|82.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_lm_lm_train_bpe500_valid.loss.ave_asr_model_valid.acc.best/test|4985|325684|90.7|5.2|4.1|2.2|11.5|82.5|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_transformer_specaug.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_transformer_specaug_raw_bpe500
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 15
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5
grad_clip_type: 2.0
grad_noise: false
accum_grad: 2
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 32
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_bpe500/train/speech_shape
- exp/asr_stats_raw_bpe500/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_bpe500/valid/speech_shape
- exp/asr_stats_raw_bpe500/valid/text_shape.bpe
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - /tmp/st-jiatong-54826.tbQP9L0N/raw/train/wav.scp
- speech
- kaldi_ark
- - /tmp/st-jiatong-54826.tbQP9L0N/raw/train/text
- text
- text
valid_data_path_and_name_and_type:
- - /tmp/st-jiatong-54826.tbQP9L0N/raw/dev/wav.scp
- speech
- kaldi_ark
- - /tmp/st-jiatong-54826.tbQP9L0N/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- '4'
- '3'
- '1'
- '2'
- A
- ▁NDI
- '''4'
- '''1'
- U
- ▁BA
- O
- ▁I
- E
- 4=
- ▁KU
- ▁TAN
- ▁KA
- '''3'
- NI
- ▁YA
- RA
- 3=
- 2=
- IN
- NA
- ▁TA
- AN
- ▁KAN
- ▁NI
- ▁NDA
- ▁NA
- ▁JI
- KAN
- CHI
- (3)=
- I
- UN
- 1-
- ▁SA
- (4)=
- ▁JA
- XI
- ▁KO
- ▁TI
- TA
- KU
- BI
- ▁YU
- ▁KWA
- KA
- XA
- 1=
- ▁YO
- RI
- NDO
- ▁XA
- TU
- ▁TU
- ▁ÑA
- ▁KI
- ▁XI
- YO
- NDU
- NDA
- ▁CHI
- (2)=
- ▁BI
- ▁NU
- KI
- (1)=
- YU
- 3-
- ▁MI
- 'ON'
- ▁A
- BA
- 4-
- KO
- ▁NDU
- ▁ÑU
- ▁NDO
- NU
- ÑU
- '143'
- ▁SI
- ▁SO
- 13-
- NDI
- ▁AN
- ▁SU
- TIN
- SA
- ▁BE
- TO
- RUN
- KWA
- KWI
- ▁NDE
- ▁KWI
- XIN
- ▁U
- SI
- SO
- ▁TUN
- EN
- ▁KWE
- YA
- (4)=2
- NDE
- TI
- TUN
- ▁TIN
- MA
- ▁SE
- ▁XU
- SU
- ▁LU
- ▁KE
- ▁
- MI
- ▁RAN
- (3)=2
- 14-
- ▁MA
- KUN
- LU
- N
- ▁O
- KE
- NGA
- ▁IS
- ▁JU
- '='
- ▁LA
- ÑA
- JA
- CHUN
- R
- TAN
- PU
- ▁TIEM
- LI
- LA
- CHIU
- ▁PA
- M
- ▁REY
- ▁BAN
- JI
- L
- SUN
- ▁SEÑOR
- ▁JO
- ▁TIO
- KWE
- CHU
- S
- ▁YE
- KIN
- XU
- BE
- ▁CUENTA
- ▁SAN
- RRU
- ▁¿
- CHA
- ▁TO
- RRA
- LO
- TE
- ▁AMIGU
- PA
- XAN
- ▁C
- C
- ▁CHA
- ▁TE
- ▁HIJO
- ▁MB
- ▁PI
- G
- ▁ÁNIMA
- ▁CHE
- ▁P
- B
- NDIO
- SE
- ▁SANTU
- MU
- ▁PADRE
- D
- JU
- Z
- ▁TORO
- ▁PO
- LE
- ▁LI
- RO
- ▁LO
- ▁MESA
- CA
- ▁CHIU
- DO
- ▁BU
- ▁BUTA
- JO
- T
- TRU
- RU
- ▁MBO
- ▁JUAN
- ▁MM
- ▁CA
- ▁M
- ▁MAS
- ▁DE
- V
- ▁MAÑA
- ▁UTA
- DA
- ▁MULA
- ▁YOLOXÓCHITL
- ▁CONSEJU
- ▁Y
- ▁LE
- ÓN
- ▁MISA
- TIU
- ▁CANDELA
- ▁PATRÓN
- ▁PADRINU
- ▁MARCU
- ▁V
- ▁G
- Í
- ▁XE
- ▁MU
- ▁XO
- NGUI
- ▁CO
- ▁HOMBRE
- ▁PESU
- ▁PE
- ▁D
- ▁MACHITI
- CO
- REN
- ▁RANCHU
- ▁MIS
- ▁MACHU
- J
- ▁PAN
- CHO
- H
- ▁CHU
- Y
- ▁TON
- GA
- X
- ▁VI
- ▁FE
- ▁TARRAYA
- ▁SANTÍSIMA
- ▁N
- ▁MAYÓ
- ▁CARRU
- ▁F
- ▁PAPÁ
- ▁PALOMA
- ▁MARÍA
- ▁PEDRU
- ▁CAFÉ
- ▁COMISARIO
- ▁PANELA
- ▁PELÓN
- É
- ▁POZO
- ▁CABRÓN
- ▁GUACHU
- ▁S
- RES
- ▁COSTUMBRE
- ▁SEÑA
- QUI
- ▁ORO
- CH
- ▁MAR
- SIN
- SAN
- ▁COSTA
- ▁MAMÁ
- ▁CINCUENTA
- ▁CHO
- ▁PEDR
- ▁JUNTA
- MÚ
- ▁TIENDA
- ▁JOSÉ
- NC
- ▁ES
- ▁SUERTE
- ▁FAMILIA
- ▁ZAPATU
- NTE
- ▁PASTO
- ▁CON
- Ñ
- ▁BOTE
- CIÓN
- ▁RE
- ▁BOLSA
- ▁MANGO
- ▁JWE
- ▁GASTU
- ▁T
- ▁B
- ▁KW
- ÍN
- ▁HIJA
- ▁CUARENT
- ▁VAQUERU
- ▁NECHITO
- ▁NOVIA
- ▁NOVIO
- JWE
- ▁PUENTE
- ▁SANDÍA
- ▁MALA
- Ó
- ▁ABONO
- ▁JESÚS
- ▁CUARTO
- ▁EFE
- ▁REINA
- ▁COMANDANTE
- ▁ESCUELA
- ▁MANZANA
- ▁MÁQUINA
- LLA
- ▁COR
- ▁JERÓNIMO
- ▁PISTOLA
- NGI
- CIO
- ▁FRANCISCU
- ▁TEODORO
- CER
- ▁SALUBI
- ▁MEZA
- ▁MÚSIC
- ▁RU
- ▁CONSTANTINO
- ▁GARCÍA
- ▁FRENU
- ▁ROSA
- ▁CERVEZA
- ▁CIGARRU
- ▁COMISIÓN
- ▁CUNIJO
- ▁FRANCISCO
- ▁HÍJOLE
- ▁NUEVE
- ▁MUL
- ▁PANTALÓN
- ▁CAMISA
- ▁CHINGADA
- ▁SEMANA
- ▁COM
- GAR
- ▁MARTÍN
- ▁SÁBADO
- ▁TRABAJO
- ▁CINCO
- ▁DIE
- ▁EST
- NDWA
- ▁LECHIN
- ▁COCO
- ILLU
- ▁CORRE
- ▁MADR
- ▁REC
- ▁BAUTISTA
- ▁VENTANA
- ▁CUÑAD
- ▁ANTONIU
- ▁COPALA
- LÍN
- ▁SECUND
- ▁COHETE
- ▁HISTORIA
- ▁POLICÍA
- ENCIA
- ▁CAD
- ▁LUIS
- ▁DOCTOR
- ▁GONZÁLEZ
- ▁JUEVE
- ▁LIBRU
- ▁QUESU
- ▁VIAJE
- ▁CART
- ▁LOCO
- ▁BOL
- ▁COMPADRE
- ▁JWI
- ▁METRU
- ▁BUENO
- ▁TRE
- ▁CASTILLO
- ▁COMITÉ
- ▁ETERNO
- ▁LÍQUIDO
- ▁MOLE
- ▁CAPULCU
- ▁DOMING
- ▁ROMA
- ▁CARAJU
- ▁RIATA
- ▁TRATU
- ▁SEIS
- ▁ADÁN
- ▁JUANCITO
- ▁HOR
- ''''
- ▁ARRÓ
- ▁COCINA
- ▁PALACIO
- ▁RÓMULO
- K
- ▁ALFONSO
- ▁BARTOLO
- ▁FELIPE
- ▁HERRER
- ▁PAULINO
- ▁YEGUA
- ▁LISTA
- Ú
- ▁ABRIL
- ▁CUATRO
- ▁DICIEMBRE
- ▁MARGARITO
- ▁MOJONERA
- ▁SOLEDAD
- ▁VESTIDO
- ▁PELOTA
- RRET
- ▁CAPITÁN
- ▁COMUNIÓN
- ▁CUCHARA
- ▁FERNANDO
- ▁GUADALUPE
- ▁MIGUEL
- ▁PELÚN
- ▁SECRETARIU
- ▁LENCHU
- ▁EVA
- ▁SEGUND
- ▁CANTOR
- ▁CHILPANCINGO
- ▁GABRIEL
- ▁QUINIENTO
- ▁RAÚL
- ▁SEVERIAN
- ▁TUMBADA
- ▁MALINCHI
- ▁PRIMU
- ▁MORAL
- ▁AGOSTO
- ▁CENTÍMETRO
- ▁FIRMA
- ▁HUEHUETÁN
- ▁MANGUERA
- ▁MEDI
- ▁MUERT
- ▁SALAZAR
- ▁VIERNI
- LILL
- ▁LL
- '-'
- ▁CAMPESINO
- ▁CIVIL
- ▁COMISARIADO
- )
- (
- Ã
- ‘
- ¿
- Ü
- ¡
- Q
- F
- Á
- P
- Ÿ
- W
- Ý
- <sos/eos>
init: xavier_uniform
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/token_list/bpe_unigram500/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_bpe500/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: transformer
encoder_conf:
input_layer: conv2d
num_blocks: 12
linear_units: 2048
dropout_rate: 0.1
output_size: 512
attention_heads: 4
attention_dropout_rate: 0.0
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
input_layer: embed
num_blocks: 6
linear_units: 2048
dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.4a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "noinfo", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["yolo_mixtec"]}
|
espnet/ftshijt_espnet2_asr_yolo_mixtec_transformer
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"dataset:yolo_mixtec",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"noinfo"
] |
TAGS
#espnet #audio #automatic-speech-recognition #dataset-yolo_mixtec #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
ESPnet2 ASR model
-----------------
### 'espnet/ftshijt\_espnet2\_asr\_yolo\_mixtec\_transformer'
This model was trained by ftshijt using yolo\_mixtec recipe in espnet.
### Demo: How to use in ESPnet2
RESULTS
=======
Environments
------------
* date: 'Wed Nov 10 02:59:39 EST 2021'
* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'
* espnet version: 'espnet 0.10.4a1'
* pytorch version: 'pytorch 1.9.0'
* Git hash: ''
+ Commit date: ''
asr\_train\_asr\_transformer\_specaug\_raw\_bpe500
--------------------------------------------------
### WER
### CER
### TER
ASR config
----------
expand
### Citing ESPnet
or arXiv:
|
[
"### 'espnet/ftshijt\\_espnet2\\_asr\\_yolo\\_mixtec\\_transformer'\n\n\nThis model was trained by ftshijt using yolo\\_mixtec recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Wed Nov 10 02:59:39 EST 2021'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.4a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: ''\n\t+ Commit date: ''\n\n\nasr\\_train\\_asr\\_transformer\\_specaug\\_raw\\_bpe500\n--------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #dataset-yolo_mixtec #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"### 'espnet/ftshijt\\_espnet2\\_asr\\_yolo\\_mixtec\\_transformer'\n\n\nThis model was trained by ftshijt using yolo\\_mixtec recipe in espnet.",
"### Demo: How to use in ESPnet2\n\n\nRESULTS\n=======\n\n\nEnvironments\n------------\n\n\n* date: 'Wed Nov 10 02:59:39 EST 2021'\n* python version: '3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]'\n* espnet version: 'espnet 0.10.4a1'\n* pytorch version: 'pytorch 1.9.0'\n* Git hash: ''\n\t+ Commit date: ''\n\n\nasr\\_train\\_asr\\_transformer\\_specaug\\_raw\\_bpe500\n--------------------------------------------------",
"### WER",
"### CER",
"### TER\n\n\n\nASR config\n----------\n\n\nexpand",
"### Citing ESPnet\n\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `ftshijt/mls_asr_transformer_valid.acc.best`
♻️ Imported from https://zenodo.org/record/4458452/
This model was trained by ftshijt using mls/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
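Until the snippet above is filled in, here is a minimal inference sketch. Assumptions: `espnet`, `espnet_model_zoo`, and `soundfile` are installed, the Hub id resolves through `espnet_model_zoo`, and `speech.wav` stands in for your own recording:
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Download the model and build the ASR inference wrapper.
speech2text = Speech2Text.from_pretrained(
    "espnet/ftshijt_mls_asr_transformer_valid.acc.best"
)

speech, rate = soundfile.read("speech.wav")  # placeholder audio file
text, *_ = speech2text(speech)[0]
print(text)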
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "es", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["mls"]}
|
espnet/ftshijt_mls_asr_transformer_valid.acc.best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"es",
"dataset:mls",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"es"
] |
TAGS
#espnet #audio #automatic-speech-recognition #es #dataset-mls #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'ftshijt/mls_asr_transformer_valid.URL'
️ Imported from URL
This model was trained by ftshijt using mls/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'ftshijt/mls_asr_transformer_valid.URL'\n️ Imported from URL\n\nThis model was trained by ftshijt using mls/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #es #dataset-mls #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'ftshijt/mls_asr_transformer_valid.URL'\n️ Imported from URL\n\nThis model was trained by ftshijt using mls/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## ESPnet2 ASR pretrained model
### `jv_openslr35`
♻️ Imported from https://zenodo.org/record/5090139/
This model was trained by jv_openslr35 using jv_openslr35/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
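In the meantime, a minimal decoding sketch (assuming `espnet`, `espnet_model_zoo`, and `soundfile` are installed; `speech.wav` is a placeholder for your own Javanese recording):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Load the pretrained model by its Hub id.
speech2text = Speech2Text.from_pretrained("espnet/jv_openslr35")

speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]  # first n-best entry holds the transcript
print(text)
```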
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "jv", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["jv_openslr35"]}
|
espnet/jv_openslr35
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"jv",
"dataset:jv_openslr35",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"jv"
] |
TAGS
#espnet #audio #automatic-speech-recognition #jv #dataset-jv_openslr35 #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## ESPnet2 ASR pretrained model
### 'jv_openslr35'
️ Imported from URL
This model was trained by jv_openslr35 using jv_openslr35/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## ESPnet2 ASR pretrained model",
"### 'jv_openslr35'\n️ Imported from URL\n\nThis model was trained by jv_openslr35 using jv_openslr35/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #jv #dataset-jv_openslr35 #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## ESPnet2 ASR pretrained model",
"### 'jv_openslr35'\n️ Imported from URL\n\nThis model was trained by jv_openslr35 using jv_openslr35/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
# ESPnet2 ASR pretrained model
## `kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.acc.best`
♻️ Imported from <https://zenodo.org/record/3957940#.YN7zwJozZH4>
This model was trained by kamo-naoyuki using mini_an4/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
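As a stopgap, a minimal inference sketch (assuming `espnet`, `espnet_model_zoo`, and `soundfile` are installed; `speech.wav` is a placeholder file):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Build the inference wrapper from the Hub id.
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki-mini_an4_asr_train_raw_bpe_valid.acc.best"
)

speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]
print(text)
```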
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Training config
See full config in [`config.yaml`](./config.yaml)
```yaml
config: null
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_raw_bpe
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["mini-an4"]}
|
espnet/kamo-naoyuki-mini_an4_asr_train_raw_bpe_valid.acc.best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:mini-an4",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-mini-an4 #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
# ESPnet2 ASR pretrained model
## 'kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.URL'
️ Imported from <URL
This model was trained by kamo-naoyuki using mini_an4/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
### Training config
See full config in 'URL'
|
[
"# ESPnet2 ASR pretrained model",
"## 'kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.URL'\n\n️ Imported from <URL\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:",
"### Training config\n\nSee full config in 'URL'"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-mini-an4 #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"# ESPnet2 ASR pretrained model",
"## 'kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.URL'\n\n️ Imported from <URL\nThis model was trained by kan-bayashi using jsut/tts1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\n\n\nor arXiv:",
"### Training config\n\nSee full config in 'URL'"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/aishell_conformer`
♻️ Imported from https://zenodo.org/record/4105763/
This model was trained by kamo-naoyuki using aishell/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
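Until the official snippet lands, a minimal sketch for decoding Mandarin audio (assumptions: `espnet`, `espnet_model_zoo`, and `soundfile` are installed; `speech.wav` is your own recording):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Download the AISHELL conformer model and build the decoder.
speech2text = Speech2Text.from_pretrained("espnet/kamo-naoyuki_aishell_conformer")

speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]  # character-level Mandarin transcript
print(text)
```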
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["aishell"]}
|
espnet/kamo-naoyuki_aishell_conformer
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:aishell",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #automatic-speech-recognition #zh #dataset-aishell #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'kamo-naoyuki/aishell_conformer'
️ Imported from URL
This model was trained by kamo-naoyuki using aishell/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/aishell_conformer'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using aishell/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #zh #dataset-aishell #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/aishell_conformer'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using aishell/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/chime4_asr_train_asr_transformer3_raw_en_char_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4414883/
This model was trained by kamo-naoyuki using chime4/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
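A minimal inference sketch in the meantime (assuming `espnet`, `espnet_model_zoo`, and `soundfile` are installed, and that the Hub id loads through `espnet_model_zoo`; `speech.wav` is a placeholder):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Build the decoder for this CHiME-4 transformer model.
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki_chime4_asr_train_asr_transformer3_raw_en_char_sp_valid.acc.ave"
)

speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]
print(text)
```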
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["chime4"]}
|
espnet/kamo-naoyuki_chime4_asr_train_asr_transformer3_raw_en_char_sp_valid.acc.ave
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:chime4",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-chime4 #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'kamo-naoyuki/chime4_asr_train_asr_transformer3_raw_en_char_sp_valid.URL'
️ Imported from URL
This model was trained by kamo-naoyuki using chime4/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/chime4_asr_train_asr_transformer3_raw_en_char_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using chime4/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-chime4 #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/chime4_asr_train_asr_transformer3_raw_en_char_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using chime4/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/dirha_wsj_asr_train_asr_transformer_cmvn_raw_char_rir_scpdatadirha_irwav.scp_noise_db_range10_17_noise_scpdatadirha_noisewav.scp_speech_volume_normalize1.0_num_workers2_rir_apply_prob1._sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4415021/
This model was trained by kamo-naoyuki using dirha_wsj/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
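As a placeholder, a minimal decoding sketch (assuming `espnet`, `espnet_model_zoo`, and `soundfile` are installed, and that the truncated Hub repository id below resolves as a model tag; `speech.wav` is your own recording):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# The repo id is the truncated Hub name under which this model is stored.
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki_dirha_wsj_asr_train_asr_transformer_cmvn_raw_char_rir_scp-truncated-2fd1f8"
)

speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]
print(text)
```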
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["dirha_wsj"]}
|
espnet/kamo-naoyuki_dirha_wsj_asr_train_asr_transformer_cmvn_raw_char_rir_scp-truncated-2fd1f8
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:dirha_wsj",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-dirha_wsj #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'kamo-naoyuki/dirha_wsj_asr_train_asr_transformer_cmvn_raw_char_rir_scpdatadirha_irwav.scp_noise_db_range10_17_noise_scpdatadirha_noisewav.scp_speech_volume_normalize1.0_num_workers2_rir_apply_prob1._sp_valid.URL'
️ Imported from URL
This model was trained by kamo-naoyuki using dirha_wsj/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/dirha_wsj_asr_train_asr_transformer_cmvn_raw_char_rir_scpdatadirha_irwav.scp_noise_db_range10_17_noise_scpdatadirha_noisewav.scp_speech_volume_normalize1.0_num_workers2_rir_apply_prob1._sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using dirha_wsj/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-dirha_wsj #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/dirha_wsj_asr_train_asr_transformer_cmvn_raw_char_rir_scpdatadirha_irwav.scp_noise_db_range10_17_noise_scpdatadirha_noisewav.scp_speech_volume_normalize1.0_num_workers2_rir_apply_prob1._sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using dirha_wsj/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/hkust_asr_train_asr_transformer2_raw_zh_char_batch_bins20000000_ctc_confignore_nan_gradtrue_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4430974/
This model was trained by kamo-naoyuki using hkust/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
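Until the snippet is provided, a minimal sketch (assumptions: `espnet`, `espnet_model_zoo`, and `soundfile` are installed; `speech.wav` stands in for a telephone-bandwidth HKUST-style recording):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Load the HKUST transformer model via its (truncated) Hub id.
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki_hkust_asr_train_asr_transformer2_raw_zh_char_batch_bins20-truncated-934e17"
)

speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]
print(text)
```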
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "zh", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["hkust"]}
|
espnet/kamo-naoyuki_hkust_asr_train_asr_transformer2_raw_zh_char_batch_bins20-truncated-934e17
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"zh",
"dataset:hkust",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"zh"
] |
TAGS
#espnet #audio #automatic-speech-recognition #zh #dataset-hkust #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'kamo-naoyuki/hkust_asr_train_asr_transformer2_raw_zh_char_batch_bins20000000_ctc_confignore_nan_gradtrue_sp_valid.URL'
️ Imported from URL
This model was trained by kamo-naoyuki using hkust/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/hkust_asr_train_asr_transformer2_raw_zh_char_batch_bins20000000_ctc_confignore_nan_gradtrue_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using hkust/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #zh #dataset-hkust #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/hkust_asr_train_asr_transformer2_raw_zh_char_batch_bins20000000_ctc_confignore_nan_gradtrue_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using hkust/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend_confn_fft400_frontend_confhop_length160_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4543003/
This model was trained by kamo-naoyuki using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
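Here is a minimal inference sketch in the meantime (assuming `espnet`, `espnet_model_zoo`, and `soundfile` are installed; `speech.wav` is a placeholder for a 16 kHz English recording):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Load the LibriSpeech conformer model via its (truncated) Hub id.
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend-truncated-55c091"
)

speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]
print(text)
```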
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend-truncated-55c091
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend_confn_fft400_frontend_confhop_length160_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_sp_valid.URL'
️ Imported from URL
This model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend_confn_fft400_frontend_confhop_length160_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend_confn_fft400_frontend_confhop_length160_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend_confn_fft512_frontend_confhop_length256_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4543018/
This model was trained by kamo-naoyuki using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
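Pending the official snippet, a minimal decoding sketch (assumptions: `espnet`, `espnet_model_zoo`, and `soundfile` are installed; `speech.wav` is your own 16 kHz recording):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# This variant uses n_fft=512 / hop_length=256 in its frontend.
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend-truncated-b76af5"
)

speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]
print(text)
```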
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend-truncated-b76af5
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend_confn_fft512_frontend_confhop_length256_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_sp_valid.URL'
️ Imported from URL
This model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend_confn_fft512_frontend_confhop_length256_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_frontend_confn_fft512_frontend_confhop_length256_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_accum_grad2_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4541452/
This model was trained by kamo-naoyuki using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
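For now, a minimal inference sketch (assuming `espnet`, `espnet_model_zoo`, and `soundfile` are installed, and that the truncated Hub id below loads as a model tag; `speech.wav` is a placeholder):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Build the decoder for this LibriSpeech conformer variant.
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer5_raw_bpe5000_schedule-truncated-c8e5f9"
)

speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]
print(text)
```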
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer5_raw_bpe5000_schedule-truncated-c8e5f9
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_accum_grad2_sp_valid.URL'
️ Imported from URL
This model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_accum_grad2_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/librispeech_asr_train_asr_conformer5_raw_bpe5000_scheduler_confwarmup_steps25000_batch_bins140000000_optim_conflr0.0015_initnone_accum_grad2_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/librispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_scheduler_confwarmup_steps40000_optim_conflr0.0025_sp_valid.acc.ave`
♻️ Imported from https://zenodo.org/record/4604066/
This model was trained by kamo-naoyuki using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
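Until the snippet above is completed, a minimal sketch (assumptions: `espnet`, `espnet_model_zoo`, and `soundfile` are installed; `speech.wav` is a placeholder for a 16 kHz English recording):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Load the conformer6 LibriSpeech model via its (truncated) Hub id.
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer6_n_fft512_hop_length2-truncated-a63357"
)

speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]  # best hypothesis: (text, tokens, ids, hyp)
print(text)
```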
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["librispeech"]}
|
espnet/kamo-naoyuki_librispeech_asr_train_asr_conformer6_n_fft512_hop_length2-truncated-a63357
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'kamo-naoyuki/librispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_scheduler_confwarmup_steps40000_optim_conflr0.0025_sp_valid.URL'
️ Imported from URL
This model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/librispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_scheduler_confwarmup_steps40000_optim_conflr0.0025_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-librispeech #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/librispeech_asr_train_asr_conformer6_n_fft512_hop_length256_raw_en_bpe5000_scheduler_confwarmup_steps40000_optim_conflr0.0025_sp_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using librispeech/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
automatic-speech-recognition
|
espnet
|
## Example ESPnet2 ASR model
### `kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.acc.best`
♻️ Imported from https://zenodo.org/record/3957940/
This model was trained by kamo-naoyuki using mini_an4/asr1 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```python
# coming soon
```
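As a stopgap, a minimal decoding sketch (assuming `espnet`, `espnet_model_zoo`, and `soundfile` are installed; `speech.wav` stands in for your own audio):
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text

# Load the mini_an4 demo model by its Hub id.
speech2text = Speech2Text.from_pretrained(
    "espnet/kamo-naoyuki_mini_an4_asr_train_raw_bpe_valid.acc.best"
)

speech, rate = soundfile.read("speech.wav")
text, *_ = speech2text(speech)[0]
print(text)
```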
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{hayashi2020espnet,
title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit},
author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu},
booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={7654--7658},
year={2020},
organization={IEEE}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
{"language": "en", "license": "cc-by-4.0", "tags": ["espnet", "audio", "automatic-speech-recognition"], "datasets": ["mini_an4"]}
|
espnet/kamo-naoyuki_mini_an4_asr_train_raw_bpe_valid.acc.best
| null |
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:mini_an4",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null |
2022-03-02T23:29:05+00:00
|
[
"1804.00015"
] |
[
"en"
] |
TAGS
#espnet #audio #automatic-speech-recognition #en #dataset-mini_an4 #arxiv-1804.00015 #license-cc-by-4.0 #region-us
|
## Example ESPnet2 ASR model
### 'kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.URL'
️ Imported from URL
This model was trained by kamo-naoyuki using mini_an4/asr1 recipe in espnet.
### Demo: How to use in ESPnet2
### Citing ESPnet
or arXiv:
|
[
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using mini_an4/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |
[
"TAGS\n#espnet #audio #automatic-speech-recognition #en #dataset-mini_an4 #arxiv-1804.00015 #license-cc-by-4.0 #region-us \n",
"## Example ESPnet2 ASR model",
"### 'kamo-naoyuki/mini_an4_asr_train_raw_bpe_valid.URL'\n️ Imported from URL\n\nThis model was trained by kamo-naoyuki using mini_an4/asr1 recipe in espnet.",
"### Demo: How to use in ESPnet2",
"### Citing ESPnet\n\nor arXiv:"
] |