modelId | author | last_modified (UTC) | downloads | likes | library_name | tags | pipeline_tag | createdAt (UTC) | card |
---|---|---|---|---|---|---|---|---|---|
ali2066/finetuned_token_2e-05_16_02_2022-14_18_19 | ali2066 | 2022-02-16T13:20:37Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-14_18_19
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-14_18_19
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
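The same run configuration can be expressed with `transformers.TrainingArguments`; a minimal sketch, assuming the standard `Trainer` setup (the output directory name is illustrative, and the dataset/model wiring is omitted because the card does not name the data):
```python
from transformers import TrainingArguments

# Minimal sketch of the hyperparameters listed above. Trainer's default optimizer
# (AdamW with betas=(0.9, 0.999) and eps=1e-8) corresponds to the Adam settings in the card.
args = TrainingArguments(
    output_dir="finetuned_token",       # illustrative name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    num_train_epochs=5,
    lr_scheduler_type="linear",
    seed=42,
)
```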
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
Zohar/distilgpt2-finetuned-restaurant-reviews | Zohar | 2022-02-16T12:53:21Z | 8 | 0 | transformers | ["transformers", "pytorch", "gpt2", "text-generation", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us"] | text-generation | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilgpt2-finetuned-restaurant-reviews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilgpt2-finetuned-restaurant-reviews
This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on a subset of the Yelp restaurant reviews dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4668
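The evaluation loss of 3.4668 corresponds to a perplexity of roughly exp(3.4668) ≈ 32. A minimal inference sketch, assuming the checkpoint loads with the standard transformers text-generation pipeline (the prompt is illustrative):
```python
from transformers import pipeline

# Generate a short restaurant-review-style continuation for an illustrative prompt.
generator = pipeline("text-generation", model="Zohar/distilgpt2-finetuned-restaurant-reviews")
print(generator("The food was", max_new_tokens=40)[0]["generated_text"])
```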
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.6331 | 1.0 | 2536 | 3.5280 |
| 3.5676 | 2.0 | 5072 | 3.4793 |
| 3.5438 | 3.0 | 7608 | 3.4668 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.11.0
|
chaitanya97/wav2vec2-large-xls-r-300m-hindi-colab | chaitanya97 | 2022-02-16T11:24:11Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "generated_from_trainer", "dataset:common_voice", "license:apache-2.0", "endpoints_compatible", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hindi-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hindi-colab
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 7.2810
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 5
- mixed_precision_training: Native AMP
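A minimal sketch of how the optimizer, warmup, and gradient accumulation above fit together, assuming plain PyTorch plus the transformers scheduler helper; the stand-in module and the total step count (inferred from the results table, about 5 optimizer steps per epoch over 5 epochs) are illustrative:
```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(8, 8)  # stand-in module; the real model is wav2vec2-xls-r-300m
optimizer = torch.optim.Adam(model.parameters(), lr=3e-4, betas=(0.9, 0.999), eps=1e-8)
scheduler = get_linear_schedule_with_warmup(optimizer, num_warmup_steps=5, num_training_steps=25)
# Effective batch size: 8 per device x 2 gradient-accumulation steps = 16 (total_train_batch_size).
```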
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 23.4144 | 0.8 | 4 | 29.5895 | 1.0 |
| 19.1336 | 1.6 | 8 | 18.3354 | 1.0 |
| 12.1562 | 2.4 | 12 | 11.2065 | 1.0 |
| 8.1523 | 3.2 | 16 | 8.8674 | 1.0 |
| 6.807 | 4.0 | 20 | 7.8106 | 1.0 |
| 6.1583 | 4.8 | 24 | 7.2810 | 1.0 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
joe5campbell/BERT_Tweet_Sentiment_100_2epochs | joe5campbell | 2022-02-16T10:34:00Z | 7 | 0 | transformers | ["transformers", "tf", "bert", "text-classification", "generated_from_keras_callback", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | text-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT_Tweet_Sentiment_100_2epochs
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_100_2epochs
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6279
- Train Accuracy: 0.6824
- Validation Loss: 0.7791
- Validation Accuracy: 0.2667
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
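A minimal sketch of the optimizer dictionary above expressed as a Keras optimizer, assuming TensorFlow 2.x (`decay=0.0` is the default and is omitted):
```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    amsgrad=False,
    clipnorm=1.0,  # gradient clipping by norm, as in the config above
)
```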
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7045 | 0.4882 | 0.7236 | 0.2667 | 0 |
| 0.6279 | 0.6824 | 0.7791 | 0.2667 | 1 |
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
premrawat/en_ner_model | premrawat | 2022-02-16T09:23:12Z | 6 | 0 | spacy | ["spacy", "token-classification", "en", "model-index", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_ner_model
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.3624161074
- name: NER Recall
type: recall
value: 0.384341637
- name: NER F Score
type: f_score
value: 0.3730569948
---
| Feature | Description |
| --- | --- |
| **Name** | `en_ner_model` |
| **Version** | `0.1.1` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `SKILL` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 37.31 |
| `ENTS_P` | 36.24 |
| `ENTS_R` | 38.43 |
| `TOK2VEC_LOSS` | 305790.85 |
| `NER_LOSS` | 801195.82 |
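A hypothetical usage sketch, assuming the packaged pipeline has been installed so `spacy.load` can resolve it by name; the example sentence is illustrative, and recognised entities carry the `SKILL` label from the scheme above:
```python
import spacy

nlp = spacy.load("en_ner_model")
doc = nlp("Looking for experience with Python, SQL and machine learning.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # label is SKILL per the label scheme
```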
|
premrawat/en_ner_skills | premrawat | 2022-02-16T09:14:23Z | 6 | 5 | spacy | ["spacy", "token-classification", "en", "model-index", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_ner_skills
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.3980582524
- name: NER Recall
type: recall
value: 0.3404507711
- name: NER F Score
type: f_score
value: 0.3670076726
---
| Feature | Description |
| --- | --- |
| **Name** | `en_ner_skills` |
| **Version** | `0.1.0` |
| **spaCy** | `>=3.2.1,<3.3.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (1 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `SKILL` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 36.70 |
| `ENTS_P` | 39.81 |
| `ENTS_R` | 34.05 |
| `TOK2VEC_LOSS` | 607659.90 |
| `NER_LOSS` | 491709.76 |
|
Minowa/distilbert-base-uncased-finetuned-ner | Minowa | 2022-02-16T07:09:20Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9239501818582607
- name: Recall
type: recall
value: 0.9378006488421524
- name: F1
type: f1
value: 0.9308238951809905
- name: Accuracy
type: accuracy
value: 0.9837800054013695
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0596
- Precision: 0.9240
- Recall: 0.9378
- F1: 0.9308
- Accuracy: 0.9838
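A hypothetical inference sketch for this checkpoint, using the standard transformers token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="Minowa/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
print(ner("Hugging Face is based in New York City."))
```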
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2381 | 1.0 | 878 | 0.0707 | 0.9100 | 0.9240 | 0.9170 | 0.9805 |
| 0.0563 | 2.0 | 1756 | 0.0583 | 0.9246 | 0.9382 | 0.9314 | 0.9835 |
| 0.03 | 3.0 | 2634 | 0.0596 | 0.9240 | 0.9378 | 0.9308 | 0.9838 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jatinshah/bert-finetuned-ner | jatinshah | 2022-02-16T03:50:43Z | 4 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "bert", "token-classification", "generated_from_trainer", "dataset:conll2003", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9330024813895782
- name: Recall
type: recall
value: 0.9491753618310333
- name: F1
type: f1
value: 0.9410194377242012
- name: Accuracy
type: accuracy
value: 0.9861511744275033
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0599
- Precision: 0.9330
- Recall: 0.9492
- F1: 0.9410
- Accuracy: 0.9862
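The precision, recall, and F1 above are entity-level scores of the kind seqeval reports for CoNLL-style tags; a minimal sketch with made-up tag sequences (the seqeval dependency is an assumption, though it is what the standard token-classification training scripts typically use):
```python
from seqeval.metrics import accuracy_score, f1_score, precision_score, recall_score

# Toy tag sequences, not taken from conll2003.
y_true = [["B-PER", "I-PER", "O", "B-LOC"]]
y_pred = [["B-PER", "I-PER", "O", "O"]]
print(precision_score(y_true, y_pred), recall_score(y_true, y_pred),
      f1_score(y_true, y_pred), accuracy_score(y_true, y_pred))
```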
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0852 | 1.0 | 1756 | 0.0647 | 0.9147 | 0.9345 | 0.9245 | 0.9826 |
| 0.0305 | 2.0 | 3512 | 0.0599 | 0.9333 | 0.9463 | 0.9398 | 0.9858 |
| 0.0212 | 3.0 | 5268 | 0.0599 | 0.9330 | 0.9492 | 0.9410 | 0.9862 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 1.18.3
- Tokenizers 0.10.3
|
ali2066/finetuned_token_2e-05_16_02_2022-01_55_54 | ali2066 | 2022-02-16T01:18:01Z | 5 | 0 | transformers | ["transformers", "pytorch", "tensorboard", "distilbert", "token-classification", "generated_from_trainer", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us"] | token-classification | 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-01_55_54
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-01_55_54
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1722
- Precision: 0.3378
- Recall: 0.3615
- F1: 0.3492
- Accuracy: 0.9448
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3781 | 0.1512 | 0.2671 | 0.1931 | 0.8216 |
| No log | 2.0 | 76 | 0.3020 | 0.1748 | 0.2938 | 0.2192 | 0.8551 |
| No log | 3.0 | 114 | 0.2723 | 0.1938 | 0.3339 | 0.2452 | 0.8663 |
| No log | 4.0 | 152 | 0.2574 | 0.2119 | 0.3506 | 0.2642 | 0.8727 |
| No log | 5.0 | 190 | 0.2521 | 0.2121 | 0.3623 | 0.2676 | 0.8756 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
jkang/espnet2_librispeech_100_conformer | jkang | 2022-02-16T01:05:55Z | 4 | 0 | espnet | ["espnet", "audio", "automatic-speech-recognition", "dataset:librispeech_100", "arxiv:1804.00015", "license:cc-by-4.0", "region:us"] | automatic-speech-recognition | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: noinfo
datasets:
- librispeech_100
license: cc-by-4.0
---
## ESPnet2 ASR model
### `jkang/espnet2_librispeech_100_conformer`
- This model was trained by jaekookang using the librispeech_100 recipe in [espnet](https://github.com/espnet/espnet/).
- Gradio Demo: [🤗 ESPNet2 ASR Librispeech Conformer](https://huggingface.co/spaces/jkang/espnet2_asr_librispeech_100h)
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 140704c146f8beeed74973f5258379f6133dcdfb
pip install -e .
cd egs2/librispeech_100/asr1
./run.sh --skip_data_prep false --skip_train true --download_model jkang/espnet2_librispeech_100_conformer
```
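A hypothetical Python alternative to the recipe above, assuming the `espnet_model_zoo` package; the wav path is a placeholder, and 16 kHz mono audio is expected:
```python
import soundfile
from espnet2.bin.asr_inference import Speech2Text
from espnet_model_zoo.downloader import ModelDownloader

d = ModelDownloader()
speech2text = Speech2Text(**d.download_and_unpack("jkang/espnet2_librispeech_100_conformer"))
speech, rate = soundfile.read("sample.wav")  # placeholder path
nbests = speech2text(speech)
text, *_ = nbests[0]
print(text)
```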
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Fri Feb 11 01:42:52 KST 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `140704c146f8beeed74973f5258379f6133dcdfb`
- Commit date: `Tue Feb 8 16:06:02 2022 -0500`
- GPU: NVIDIA GeForce RTX 3090 (single GPU took: 13h)
## asr_conformer_lr2e-3_warmup15k_amp_nondeterministic
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|54402|94.5|5.1|0.4|0.7|6.3|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|50948|84.8|13.7|1.5|2.1|17.3|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|52576|94.2|5.3|0.5|0.8|6.6|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|52343|84.7|13.8|1.5|2.0|17.3|81.5|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|288456|98.2|1.1|0.8|0.7|2.5|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|265951|93.3|4.1|2.6|2.0|8.7|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|281530|98.0|1.1|0.9|0.7|2.7|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|272758|93.5|4.0|2.5|1.9|8.4|81.5|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|decode_asr_asr_model_valid.acc.ave/dev_clean|2703|69558|92.0|5.0|3.0|0.7|8.7|56.6|
|decode_asr_asr_model_valid.acc.ave/dev_other|2864|64524|81.3|13.2|5.4|2.4|21.1|80.7|
|decode_asr_asr_model_valid.acc.ave/test_clean|2620|66983|91.8|5.1|3.1|0.6|8.8|57.4|
|decode_asr_asr_model_valid.acc.ave/test_other|2939|66650|81.2|13.1|5.7|2.1|20.9|81.5|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_conformer_lr2e-3_warmup15k_amp_nondeterministic
ngpu: 1
seed: 2022
num_workers: 4
num_att_plot: 0
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: false
collect_stats: false
write_collected_feats: false
max_epoch: 70
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: 400
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 16000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_clean_100_sp/wav.scp
- speech
- kaldi_ark
- - dump/raw/train_clean_100_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- kaldi_ark
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.002
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 15000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ED
- ▁I
- ▁HE
- ▁WAS
- ▁THAT
- ING
- ▁IT
- ''''
- ▁HIS
- ▁HAD
- ▁WITH
- ▁YOU
- ▁FOR
- T
- ▁AS
- ▁HER
- LY
- ▁NOT
- ▁BUT
- ▁SHE
- ▁BE
- D
- E
- ▁IS
- ▁AT
- ▁ON
- ▁HIM
- ▁THEY
- ▁BY
- ▁HAVE
- Y
- ▁MY
- ▁SO
- ▁ALL
- ▁THIS
- ▁WERE
- ▁WHICH
- ▁ME
- ▁FROM
- ▁ONE
- ▁SAID
- ▁WE
- N
- ER
- ▁NO
- ▁THERE
- ▁WHEN
- ▁AN
- ▁THEIR
- ▁OR
- ▁WOULD
- ▁WHO
- ▁THEM
- R
- ▁IF
- ▁WHAT
- ▁ARE
- ▁BEEN
- ▁OUT
- ▁UP
- M
- ▁WILL
- ▁DO
- ▁MAN
- ▁COULD
- C
- ▁THEN
- ▁INTO
- ▁MORE
- ▁SOME
- ES
- P
- ▁VERY
- ▁NOW
- ▁YOUR
- ▁LITTLE
- ▁TIME
- ▁ABOUT
- ▁DID
- ▁THAN
- ▁LIKE
- ▁HAS
- L
- G
- AL
- IN
- ▁UPON
- ▁CAN
- ▁WELL
- ▁OTHER
- ▁OVER
- US
- ▁TWO
- ▁ONLY
- ▁ANY
- ▁OUR
- O
- EN
- RE
- ▁MADE
- U
- ▁AFTER
- ▁SEE
- ▁S
- ▁DOWN
- ▁BEFORE
- LL
- ST
- B
- ▁OLD
- ▁DAY
- ▁MISS
- ▁GREAT
- ▁US
- ▁KNOW
- OR
- ▁SUCH
- ▁GOOD
- ▁WAY
- A
- ▁THESE
- ▁CAME
- ▁UN
- ▁SHOULD
- ▁HOW
- ▁MISTER
- ▁GO
- ▁MUCH
- ▁WHERE
- ▁MUST
- ▁NEVER
- ▁COME
- ▁BACK
- ION
- 'ON'
- ▁LONG
- F
- ▁AGAIN
- ▁FIRST
- LE
- ▁MEN
- ▁EVEN
- NESS
- ▁MIGHT
- ▁OWN
- ▁MAY
- K
- ▁HIMSELF
- ▁SAY
- ▁JUST
- ▁THROUGH
- ▁RE
- ▁AM
- ▁ITS
- ▁WENT
- ▁THOUGHT
- ▁
- ▁DE
- ▁MAKE
- I
- ▁HAND
- ▁THINK
- ▁HOUSE
- ▁HERE
- IC
- H
- ATION
- ▁LIFE
- IT
- ▁EYES
- ▁MOST
- ▁WITHOUT
- ▁TOO
- ▁THOSE
- ABLE
- ▁EVERY
- ▁DON
- ▁MANY
- ▁AWAY
- ITY
- VE
- W
- ▁STILL
- ▁BEING
- ▁C
- ▁LAST
- ▁NIGHT
- ▁O
- ▁HEAD
- AN
- ▁FOUND
- ▁NOTHING
- ▁YOUNG
- ▁WHILE
- ▁TAKE
- ▁GET
- ▁PEOPLE
- RO
- ▁OFF
- ▁THOUGH
- EST
- ▁YET
- ▁THREE
- TH
- ▁RIGHT
- ▁UNDER
- AR
- ▁FACE
- IES
- ▁ROOM
- ▁NEW
- ▁SAW
- RA
- V
- ▁ASKED
- ▁TELL
- ERS
- ▁SAME
- MENT
- ▁HEART
- LESS
- ▁WORK
- ▁PLACE
- ▁ANOTHER
- ▁EVER
- ▁LEFT
- ▁SHALL
- ▁FATHER
- ▁PUT
- ▁ONCE
- ▁TOOK
- ▁LET
- ▁ALWAYS
- ▁SEEMED
- ▁PART
- IL
- UR
- ▁WHY
- ▁TOLD
- ▁GIVE
- ▁LOVE
- CE
- ▁MIND
- ▁LOOKED
- ▁HEARD
- ▁SOON
- ▁LOOK
- ▁MOTHER
- ▁FAR
- IVE
- ▁BECAUSE
- ▁HOME
- OUS
- ▁T
- EL
- ▁D
- ▁SOMETHING
- ▁SIDE
- ▁KING
- IS
- ATE
- ▁MOMENT
- ENT
- RY
- ▁THINGS
- ▁ST
- ▁LIGHT
- ▁FIND
- ▁GOING
- ▁THING
- ▁WORLD
- IR
- AT
- ▁WATER
- ▁END
- ▁DOOR
- ISH
- ▁KNEW
- ▁WOMAN
- ▁SIR
- ▁EACH
- RI
- ▁HAVING
- ▁AGAINST
- ▁FEW
- ▁E
- ▁BEGAN
- ▁BETTER
- ▁YES
- ▁NAME
- ▁ENOUGH
- ET
- ▁HARD
- ▁VOICE
- ▁YEARS
- ▁GOT
- ▁WHOLE
- ▁WHITE
- ▁WANT
- ▁GIRL
- ▁DONE
- ▁SEEN
- ▁HUNDRED
- ▁CALLED
- ▁BETWEEN
- ▁MORNING
- FUL
- AS
- ▁FELT
- TER
- ▁KIND
- X
- CH
- ▁HERSELF
- ANT
- ▁TOWARD
- ▁HALF
- ▁OH
- ▁AMONG
- ▁HOWEVER
- ▁TURNED
- ▁ALSO
- ▁BOTH
- ▁POOR
- ▁PERHAPS
- ▁REPLIED
- ▁COURSE
- UL
- ▁QUITE
- ▁REST
- ▁DOES
- ▁MYSELF
- NG
- LO
- ANCE
- ▁MA
- ▁SET
- ▁SMALL
- ▁B
- ▁SURE
- ▁F
- ▁GAVE
- ▁PRESENT
- ▁HIGH
- ▁ALMO
- ▁R
- CK
- ▁WHOM
- ▁NEAR
- ▁CARE
- ▁WAR
- ▁GOD
- ▁TOGETHER
- ▁SAT
- ▁SHOW
- TE
- NE
- ▁BEST
- ▁UNTIL
- ▁OPEN
- ▁W
- ▁FOUR
- ▁DEAR
- ▁HANDS
- ▁WORDS
- ▁SINCE
- ▁LAND
- ▁DIS
- MAN
- ▁ANYTHING
- ▁FEET
- ▁NEXT
- ▁GENERAL
- LING
- ▁LAY
- ▁NOR
- ▁STOOD
- ▁BLACK
- ▁POWER
- ▁BROUGHT
- Z
- IE
- ▁ROUND
- ▁BELIEVE
- ▁LARGE
- ▁ALONG
- ▁HELP
- ▁DAYS
- ▁FIVE
- ▁K
- ▁HOPE
- AM
- ▁CO
- ▁KEEP
- ▁FULL
- ▁WALK
- ▁MASTER
- ATED
- ▁NATURE
- ▁JOHN
- ▁POINT
- ▁DUR
- ▁MATTER
- ▁MONEY
- ▁CHILD
- ▁LOOKING
- ▁RATHER
- ▁AIR
- IA
- ▁P
- ▁TWENTY
- ▁FIRE
- OL
- ▁LESS
- ▁SHORT
- ▁PASSED
- ▁INDEED
- TY
- ▁CASE
- ▁WORD
- ▁WISH
- ▁COUNTRY
- LED
- ID
- ▁BOY
- ▁SOUND
- ▁FORM
- ▁CRIED
- LA
- ▁FRIEND
- TON
- ▁FACT
- ▁UNCLE
- ▁TAKEN
- ▁AL
- ▁TEN
- IAN
- ▁GONE
- ▁SEA
- ▁REASON
- TING
- ▁WHOSE
- ▁OTHERS
- AC
- ▁LI
- ▁DEATH
- ▁CERTAIN
- ▁ANSWERED
- ▁THEMSELVES
- ▁LADY
- ▁STATE
- ▁CAR
- ▁WIFE
- ▁THOUSAND
- ▁TRUE
- ▁BEHIND
- AGE
- ▁DOCTOR
- ▁FEAR
- ▁OFTEN
- OM
- ▁TILL
- ▁HA
- IOUS
- ▁AROUND
- IST
- ▁SENT
- ▁SPEAK
- ▁WOMEN
- ▁GROUND
- VER
- ENCE
- NA
- ▁TALK
- ▁CHILDREN
- TION
- CO
- MO
- ▁HEAR
- ▁ORDER
- ▁LEAVE
- ▁PRO
- ▁ALREADY
- ▁LA
- ▁FINE
- SE
- ▁BA
- PP
- ▁THUS
- AD
- ▁NEED
- ▁SIGHT
- ▁CALL
- ▁FELL
- ▁MANNER
- MP
- ▁BECAME
- UM
- ▁WATCH
- OW
- ▁FOOT
- ▁CANNOT
- ▁BODY
- ▁TOWN
- ▁LIVE
- INE
- ▁RETURNED
- ▁WONDER
- MA
- ▁G
- UT
- ▁CLOSE
- UN
- IM
- ▁ALONE
- ▁DIDN
- ▁LORD
- ▁RED
- ARY
- ▁GIVEN
- ▁SIX
- ▁EVERYTHING
- ▁DARK
- ▁DEAD
- ▁STRONG
- ▁SON
- ▁COMING
- URE
- ▁HELD
- ▁ABOVE
- ▁REALLY
- ▁BEAUTIFUL
- ▁SECOND
- ARD
- ▁EVENING
- ▁CON
- ▁HOUR
- ▁FELLOW
- ▁ROSE
- ▁PERSON
- ▁EX
- ▁CH
- ▁FORCE
- ▁MO
- ▁ARM
- ▁CAUSE
- ▁TURN
- ▁CITY
- ▁DOUBT
- ▁QUESTION
- TIC
- ▁DEEP
- ▁HAIR
- ICAL
- ▁MEAN
- ▁DI
- ▁CLEAR
- ▁SOMETIMES
- ▁STRANGE
- ▁FEEL
- ▁HO
- ▁IMP
- WARD
- AUGHT
- ▁CAPTAIN
- ▁USE
- ▁UNDERSTAND
- ▁KEPT
- ▁BR
- ▁WOOD
- ▁PRE
- ▁YEAR
- ▁TI
- ▁LEAST
- ▁BED
- ▁SA
- ▁TABLE
- ▁BECOME
- ▁FREE
- ▁FAMILY
- ME
- ▁EYE
- ▁WHETHER
- ▁MAKING
- ▁WITHIN
- ▁SORT
- ▁ANSWER
- ▁PO
- ▁SAYS
- ▁EARTH
- ▁RETURN
- ▁SUDDENLY
- ▁FRIENDS
- ▁GREEN
- ▁SUN
- ▁FAIR
- ▁TH
- ▁FALL
- ▁EITHER
- ▁BO
- ▁PRINCE
- ▁THOU
- ▁ITSELF
- ▁CHURCH
- ▁BIG
- ▁ABLE
- ▁DIFFERENT
- ▁SEVERAL
- ▁DAUGHTER
- ▁WON
- ▁WIND
- ▁BAD
- ▁LOST
- ▁READ
- ▁STORY
- ▁APPEARED
- DE
- ▁NUMBER
- ▁SP
- ▁LOW
- ▁ROAD
- ▁POSSIBLE
- ▁HUMAN
- ▁RIVER
- ▁STREET
- ▁GA
- ▁COLD
- ▁MET
- ▁ACT
- ▁BROTHER
- ▁AGE
- ▁KNOWN
- ▁CONTINUED
- ▁BRING
- ▁ILL
- ▁RUN
- ▁LAW
- ▁SUBJECT
- ▁CUT
- J
- PER
- ▁PA
- ▁TROUBLE
- ▁GLAD
- HE
- ▁SLEEP
- MEN
- ▁LATE
- ▁MEANS
- ▁ASK
- ▁REACHED
- ▁RAN
- AK
- ▁HORSE
- ▁USED
- WAY
- OP
- ▁WINDOW
- ▁SNOW
- ▁PAST
- ▁OBJECT
- ▁THEREFORE
- IONS
- ▁TREE
- ▁COMP
- ▁BLUE
- CA
- ▁VI
- ▁SIGN
- ▁EIGHTEEN
- ▁GARDEN
- ▁BUSINESS
- ▁PETER
- ▁FOLLOWED
- ▁SEEM
- ▁HOLD
- ▁HAPPY
- ▁LONGER
- ▁ACROSS
- ▁BU
- BE
- ▁ELSE
- ▁PLAY
- ▁SOUL
- ▁STAND
- ▁ARMS
- ▁SCHOOL
- ▁PRINCESS
- ▁CERTAINLY
- LT
- ▁ENGLISH
- ▁SEVEN
- ▁PER
- ▁IDEA
- ▁LE
- ▁BOOK
- ▁FEELING
- ▁HUSBAND
- ▁LINE
- PT
- THOUGH
- ▁OUGHT
- ▁RICH
- IP
- ▁VIEW
- ▁DREAM
- ▁SENSE
- ▁LO
- ▁READY
- ▁CARRIED
- ▁M
- ▁REGARD
- ▁CHANCE
- ▁WANTED
- ▁LIVED
- ▁LATER
- ▁INTEREST
- ▁EN
- ▁EFFECT
- ▁CLA
- ▁CHANGE
- ▁CA
- ▁REAL
- ▁SUPPOSE
- LES
- ▁ART
- ▁TIMES
- ▁MAR
- IF
- ▁WILD
- ▁ADDED
- ▁LETTER
- IAL
- ▁THANK
- ▁PARTY
- LAND
- ▁PAY
- ▁BREATH
- ▁TAKING
- ▁COURT
- ▁COUNT
- ILY
- ▁COMMON
- ▁PUBLIC
- ▁PURPOSE
- ▁PRETTY
- ▁TRUTH
- ▁STAY
- ▁EM
- NT
- ▁SH
- ▁REMEMBER
- ▁ENTERED
- ▁RECEIVED
- RED
- ▁SPOKE
- ▁USUAL
- ▁THY
- ▁FIGURE
- ▁LED
- ▁TREES
- ▁TRIED
- ▁FORWARD
- NED
- ▁HAT
- ▁BLOOD
- ▁BEYOND
- ▁BANK
- ▁LIVING
- ▁JOY
- ▁HOURS
- ▁ENGLAND
- ▁STONE
- VI
- GE
- ▁SWEET
- ▁POSITION
- ▁FRONT
- ▁GIRLS
- ▁VISIT
- ▁CHARACTER
- ▁SPIRIT
- ▁TA
- BO
- QUE
- QUI
- ▁OPENED
- ▁OCCASION
- ▁MEET
- ▁EIGHT
- ▁REMAIN
- ▁PASS
- TO
- ▁NORTH
- ▁SERVICE
- ▁SISTER
- ▁SE
- ▁BEAR
- ▁PLEASURE
- ▁CHIEF
- ▁FOREST
- ▁BELL
- ▁EXPERIENCE
- ▁STRUCK
- ▁CARRY
- ORY
- ▁WARM
- 'NO'
- ▁WORTH
- ▁SAYING
- ▁SILENCE
- ▁CROSS
- ▁JE
- ▁H
- ▁BEAUTY
- PH
- ▁DEAL
- KE
- ▁SECRET
- DY
- ▁MILES
- ▁LU
- ▁DOING
- ▁BOYS
- ▁CROWD
- ▁ACCOUNT
- REW
- ISM
- TI
- ▁FE
- ▁NONE
- ▁RO
- ▁NEARLY
- ▁CHA
- ▁YOUTH
- ▁CAP
- HA
- ▁BIT
- ▁LIE
- ▁ATTENTION
- ▁STANDING
- ▁STAR
- ▁RESPECT
- ▁FURTHER
- ATIONS
- ▁ROCK
- ▁BOW
- EM
- ▁EARLY
- ▁MOUTH
- ▁BOAT
- UB
- ▁IMMEDIATELY
- ▁EXCEPT
- SHIP
- ▁PICTURE
- ▁BRIGHT
- ▁WA
- ▁GREW
- ▁LEAD
- ▁CUR
- ▁TONE
- RRY
- RS
- ▁WIDE
- CHE
- ▁FORTH
- IG
- OS
- ▁NEITHER
- ▁YOURSELF
- ▁SMILE
- ▁DRESS
- ▁OPINION
- ▁HAPPENED
- ▁WAIT
- ▁SIT
- ▁SHIP
- ▁AH
- ▁DESIRE
- ▁THICK
- ▁THIRD
- ▁GRAND
- ▁FOLLOW
- ▁GATHER
- ▁HILL
- ALLY
- ▁COMPANY
- ▁CHAIR
- DER
- ▁TOP
- ▁PAR
- ▁LENGTH
- ▁THIRTY
- ▁MINE
- ▁MI
- ▁EAT
- ▁EQUAL
- ▁AFRAID
- ▁FRESH
- ▁TAIL
- ▁FILLED
- ▁SU
- ▁MINUTES
- ▁FAST
- BU
- ▁ENTER
- ▁QUEEN
- ▁UTTER
- AG
- ▁FLOOR
- ▁SHA
- DI
- ▁HEAVEN
- ▁STOPPED
- ▁GUARD
- ▁HALL
- ▁BAR
- ▁COMPLETE
- ▁NINE
- ▁WEEK
- ▁GOLD
- VA
- ▁FIFTY
- ▁BEAT
- ▁PRESS
- ▁ATTEMPT
- ▁EXCLAIMED
- DO
- ▁CONF
- ▁SEEMS
- ▁STARTED
- ▁EL
- ▁HAR
- ▁EXPRESSION
- ▁TRA
- ▁WONDERFUL
- ▁SAINT
- ▁APPEARANCE
- ▁GRAVE
- ▁OFFICE
- ▁INSTEAD
- ▁SILENT
- ▁SOUTH
- ▁AGO
- ▁CAMP
- ▁LOVED
- ▁PATH
- ▁LEARN
- ▁PLAN
- ▁GOVERNMENT
- OUR
- PPED
- ▁SITTING
- ▁SEAT
- TEN
- RESS
- SIDE
- ▁MOVED
- ▁DIE
- ▁RESULT
- ▁SPRING
- ▁PLEASE
- ▁RI
- ▁NATURAL
- ▁ANNE
- ▁STA
- ▁CORNER
- ▁WALL
- ▁IMPOSSIBLE
- ▁BROWN
- ▁SUIT
- ▁MUSIC
- PI
- ▁TRY
- ▁DIED
- ▁TEARS
- ▁JU
- ▁COMFORT
- ▁DANGER
- ▁MEASURE
- ▁PROPERTY
- ▁BORN
- CON
- ▁CR
- ▁BROKEN
- ▁MASS
- EVER
- IER
- ▁EXPRESS
- ▁POCKET
- ▁SCARCE
- ▁SELF
- NY
- ▁MADAME
- ▁LAUGHED
- ▁TOUCH
- ▁APPEAR
- ▁LONDON
- ▁SAFE
- ▁SHARP
- ▁ATTACK
- ▁JANE
- ▁COVERED
- ▁OUTSIDE
- ▁WHATEVER
- ▁PLACED
- ▁RACE
- ▁SHORE
- ▁LAID
- ▁ROMAN
- ▁PERSONAL
- UP
- AU
- ▁REMAINED
- ▁HAPPINESS
- ▁AFTERNOON
- ▁DISTANCE
- ▁STORM
- ▁MARRIED
- ▁FRANK
- ▁VALLEY
- ▁BOUND
- ▁TALKING
- ▁JO
- ▁QUICK
- ▁STEP
- AND
- ▁ARMY
- ▁EFFORT
- ▁FRENCH
- ▁V
- LEY
- ▁PARTICULAR
- ▁START
- ATING
- OO
- LU
- ▁TRANS
- ▁HAPPEN
- ▁HABIT
- ▁VILLAGE
- ▁BELOW
- ▁GENTLEMAN
- BLE
- ▁BILL
- ▁SAVE
- ACT
- ▁SOCIETY
- ▁MAJOR
- ▁QUARTER
- ▁SKY
- ▁GUESS
- CY
- ▁SAD
- ILE
- ▁SL
- ▁PLEASANT
- ▁STRAIGHT
- ▁STRENGTH
- ▁FORTUNE
- ▁WRONG
- ▁COMMAND
- ▁BOX
- ▁QUIET
- ISE
- ▁JA
- IBLE
- ▁TREAT
- ▁GLANCE
- ▁NECESSARY
- ▁FORGET
- ▁MOUNTAIN
- ▁WINTER
- ▁DREW
- ▁WAV
- ▁PLAIN
- ▁ENTIRELY
- ▁TEA
- ▁SOFT
- ▁QUICKLY
- ▁INFLUENCE
- ▁DINNER
- ▁FOOD
- ▁CHAPTER
- ▁YE
- ▁REACH
- ▁GETT
- ▁PAPER
- ▁GIVING
- ▁BEGINNING
- ▁SEND
- ▁FIGHT
- ▁SCENE
- ▁RUSH
- ▁PI
- ▁MARK
- ▁NA
- ▁BROKE
- ▁CLASS
- ▁BATTLE
- ▁EASY
- ▁GROUP
- BY
- ▁STOP
- ▁DIRECTION
- ▁BESIDE
- ▁MOR
- HAM
- UFF
- ▁WEST
- ▁OBLIG
- ▁COLOR
- ▁SINGLE
- ▁EASILY
- ▁PALE
- ▁ACTION
- ▁INTER
- ▁STRANGER
- ▁WI
- ▁CONVERSATION
- ▁BLOW
- ▁MARY
- ▁MU
- ▁TERRIBLE
- ▁THINKING
- ▁PULL
- ▁MOON
- AB
- ▁REP
- ▁ESPECIALLY
- ▁HEAVY
- ▁SICK
- ▁LUCK
- ▁TRAIN
- ▁GUN
- ▁GU
- ▁WAITING
- ▁TURNING
- ITIES
- ▁BREAD
- ▁BELONG
- ▁LOUD
- ▁REPORT
- ▁AMERICAN
- ▁JOURNEY
- ▁ANXIOUS
- ▁LIPS
- ▁KILLED
- IGHT
- GO
- ▁CONSIDER
- ▁PROBABLY
- ▁PALACE
- ▁HISTORY
- ▁LAKE
- ▁SHUT
- ▁SIMPLY
- WA
- ▁PAIN
- ▁HORSES
- ▁SEEING
- FULLY
- ▁EXPECTED
- ▁EVIL
- ▁BURN
- ▁SIMPLE
- ▁DIRECT
- IFIED
- HER
- ▁SLOWLY
- ▁LEG
- UGH
- ▁SAIL
- RIC
- ▁WISHED
- ▁RULE
- ▁LAD
- ▁MORAL
- ▁MOVE
- ▁FOLLOWING
- ▁SILVER
- ▁SEARCH
- ▁CHANGED
- ▁HANDSOME
- ▁COULDN
- ▁PASSION
- ▁HU
- ▁SMILED
- ▁STREAM
- ▁CONCERN
- ▁PRESENCE
- STER
- ▁CONTENT
- ▁BOARD
- ▁SHAPE
- ▁DECIDED
- ▁MARRY
- ▁PERFECT
- ▁STEPS
- ▁CLOSED
- ABLY
- DEN
- ▁WEAK
- ▁SUFFICIENT
- ▁SHADOW
- ▁EXPECT
- ▁SPOT
- ▁DUTY
- ▁SPEAKING
- ▁BESIDES
- ▁FIELD
- ▁ROLL
- ▁TRYING
- ▁EAR
- ▁VER
- ▁MARRIAGE
- ▁SHOT
- ▁SLAVE
- ▁MILL
- ▁NATION
- ▁NECK
- ▁ARRIVED
- ▁TALL
- ▁GRACE
- LIN
- ▁FORTY
- ▁BROAD
- ▁SUMMER
- ▁COUSIN
- ▁BEGIN
- ▁CATCH
- ▁FO
- ▁PE
- ▁MEANT
- ▁THIN
- IO
- ▁GROW
- ▁TRO
- ▁NOTICE
- ▁CRY
- ▁FISH
- ▁COM
- ▁DEGREE
- ▁HONOUR
- ▁UNDERSTOOD
- ▁SHOP
- ▁TRUST
- ▁CONDITION
- ▁FARM
- IZ
- ▁SUDDEN
- ▁SUCCESS
- ▁SURPRISE
- ORS
- ▁THOUGHTS
- UND
- ▁ALLOWED
- ITE
- ▁NARROW
- ▁GLASS
- ▁SERIOUS
- ▁STICK
- ▁GAME
- ▁SPENT
- ▁SELL
- ▁GRA
- ▁LOWER
- ▁RAISED
- ▁PIN
- ▁ALLOW
- ▁CALM
- FT
- ▁L
- ▁PU
- ▁FIT
- ACH
- ▁SUFFER
- ▁LEGS
- ▁SUPPORT
- ▁FRANCE
- ▁LATTER
- OV
- ▁TASTE
- ▁GATE
- ▁INSTANT
- ▁MINUTE
- ▁OFFER
- ▁GREATER
- ▁PORT
- ILL
- ▁INDIVIDUAL
- ▁AUNT
- ▁EAST
- ▁ADVANTAGE
- ▁FASHION
- ▁SWORD
- ▁TWELVE
- ▁HONOR
- ▁MOVEMENT
- ▁ISLAND
- ACK
- ▁WOODS
- NCH
- ▁PLEASED
- ▁ENEMY
- ▁RAIN
- ▁VARIOUS
- ▁OBSERVED
- ▁LADIES
- ▁BELIEVED
- ▁CAST
- ▁RISE
- ▁BALL
- ▁MONTHS
- ICE
- ▁MURDER
- ▁CONDUCT
- ▁SOCIAL
- ▁TENDER
- ▁LEARNED
- ▁FRA
- ▁FIRM
- CLOCK
- ▁PREVENT
- ▁RING
- LIE
- ▁GOLDEN
- ▁DECLARED
- ▁BUILDING
- ▁WRITE
- ▁ATTEND
- ▁CARRIAGE
- ▁SITUATION
- IDE
- ▁NOBLE
- ▁HUNG
- ▁RUNN
- ▁YELLOW
- ▁KNOWLEDGE
- ▁YORK
- ▁PUSH
- ▁LEAVING
- ▁POST
- ▁CIRCUMSTANCES
- ▁SEEK
- ▁FINALLY
- ▁MAIN
- ▁LETTERS
- ▁POL
- ▁ADD
- FE
- ▁ANCIENT
- ▁MARCH
- ▁WINE
- ▁STATES
- ▁WALLS
- ▁PRISONER
- ▁ISABEL
- ▁TEMPER
- ▁JUDGE
- ▁FAINT
- ▁POND
- ▁GRASS
- ▁FAM
- OUT
- ▁LAUGH
- ▁GRAY
- IGN
- ▁ESCAPE
- ▁KILL
- ▁PRAY
- ▁COMES
- ▁ABSOLUTE
- ▁BLIND
- ▁WIN
- ▁HOST
- ▁MERELY
- ▁RID
- ▁EVERYBODY
- ▁MATERIAL
- ▁STRETCH
- ▁DUE
- ▁ROW
- ▁TIN
- ▁PROMISE
- ▁LISTEN
- ▁WALKING
- ▁COMPANION
- ▁INDIAN
- ▁BREAK
- ▁BENEATH
- ▁RUIN
- ▁EDGE
- ▁WOR
- ▁FORMER
- ▁WORSE
- ▁EVIDENTLY
- ▁HARM
- ▁CENT
- ▁PIECE
- ▁LOT
- ▁PRESIDENT
- ▁SPECIAL
- ▁LABOR
- ▁HEALTH
- GA
- ▁PLACES
- ▁BEN
- ▁SOMEWHAT
- ▁DROPPED
- ▁AFFECTION
- ▁EXACTLY
- ▁DARKNESS
- ▁FALLEN
- ▁DRESSED
- ▁BILLY
- ▁ACCEPT
- ▁FL
- ▁HOT
- ▁REPEATED
- ▁MEETING
- PA
- ▁PERIOD
- ▁HONEST
- ▁INSTANCE
- ▁FLA
- ▁PASSAGE
- ▁NE
- ▁POSSESSION
- ▁WEAR
- ▁PEACE
- ▁COAT
- ▁HOUSES
- ▁MOUNTAINS
- ▁FIFTEEN
- ▁WELCOME
- ▁YARD
- ▁PROPER
- ▁MUS
- ADE
- ▁RECEIVE
- ▁SKIN
- ▁GROWN
- ▁AFTERWARDS
- ANG
- ▁DA
- ▁DIFFICULT
- ▁PERSONS
- ▁ACCORDING
- ▁FARMER
- ▁SPEECH
- ▁IMPORTANT
- PAR
- ▁PERFECTLY
- ▁MIN
- ▁CONSIDERED
- ▁NU
- ▁DEPEND
- ▁MORROW
- ▁MOUNT
- ▁KISS
- ▁LYING
- ▁SUFFERING
- ▁EXIST
- ERY
- OOK
- BA
- ▁PAINT
- AH
- ▁CAT
- ▁PURE
- ▁WISE
- ▁PRIVATE
- ▁REBECCA
- ▁VESSEL
- ▁CLEAN
- ▁GENTLEMEN
- ▁IRON
- ▁STORE
- ▁FUR
- ▁INDIANS
- ▁LOSE
- ▁BATH
- ▁NEWS
- ▁CHI
- ▁FA
- ▁CHARGE
- ▁PRIEST
- ▁WRITTEN
- ▁FORGOTTEN
- ▁TRAIL
- ▁CLOTHES
- ▁ALIVE
- ▁SUB
- ▁REPLY
- ▁THROW
- ▁AB
- ▁SOLDIERS
- ▁ISN
- ▁COTTAGE
- ▁COURAGE
- ▁CONTAIN
- ▁BUILT
- ▁PAID
- ▁HUNT
- ▁CASTLE
- HOOK
- ▁MERE
- GGED
- ▁NI
- ▁UNC
- ▁PREPARED
- ▁BARE
- ▁SMILING
- ▁SPREAD
- ▁WEATHER
- ▁EDWARD
- ▁GERMAN
- ▁CURIOUS
- ▁SERVANT
- ▁DISCOVERED
- ▁TRAVEL
- EY
- ▁DANCE
- ▁PEN
- BR
- GEN
- ▁BREAKFAST
- ▁CHAMBER
- ▁WILLIAM
- ▁TERROR
- ▁SPITE
- ▁TIRED
- ▁LOCK
- ▁CONSIDERABLE
- TLE
- ▁MANAG
- ▁DRY
- ▁FINISHED
- ▁MILLION
- ▁FRE
- ▁MIS
- ▁PASSING
- ▁DRAW
- ▁BON
- ▁VA
- ▁VEN
- ▁MAKES
- ▁VAIN
- ▁BOTTOM
- ▁DRINK
- ▁FUTURE
- ▁RACHEL
- ▁SORROW
- ▁SIXTEEN
- ▁KNIT
- ▁PROUD
- WI
- ▁TOBY
- ▁NOISE
- ▁SLIGHT
- ▁PROCEED
- ▁FER
- ▁COVER
- ▁DRAWING
- ▁FAVOR
- ▁CATHERINE
- ▁NEWSPAPER
- ▁NOBODY
- ▁ROOF
- ▁WEALTH
- ▁PROVE
- ▁DRAWN
- TTED
- OKE
- ▁DETERMINED
- ▁DOG
- ▁REMEMBERED
- ▁OPENING
- ▁FLOWERS
- ▁GENTLE
- ▁KNIGHT
- ▁RECOVER
- ▁DESERT
- ▁MOTION
- ▁NICE
- ▁INTENTION
- ▁GROWING
- ▁CLOUD
- ▁MONTH
- HOOD
- ▁POT
- UDE
- ▁PLANT
- ▁MAD
- ▁ENJOY
- ▁FAT
- ▁COR
- ▁KNOWING
- ▁IDEAS
- IZED
- ▁CHEEK
- ▁EUROPE
- ▁KNOCK
- ▁ALARM
- ▁TONGUE
- ▁SPACE
- ▁PATSY
- ▁MISTRESS
- ▁HENRY
- ▁JERRY
- ▁LIKED
- ▁PLAYED
- ▁BOOKS
- ▁MODER
- ▁CORN
- ▁ELIZABETH
- ▁CLUB
- ▁BRAIN
- ▁TROOP
- ▁COOK
- ▁DU
- ▁FUN
- DAY
- ▁QUA
- ▁FLOW
- ▁DARE
- ▁DELIGHT
- ▁WOUND
- ▁DESCEND
- ▁EVERYWHERE
- ▁FRIGHTENED
- ▁GEORGE
- ▁PECULIAR
- ▁MACHINE
- ▁PATIENT
- ▁MEADOW
- ▁PEASANT
- ▁BURST
- ▁ORDINAR
- ▁SONG
- ▁BRAVE
- ▁EXISTENCE
- ▁LUCY
- ▁J
- ▁CAREFULLY
- ▁PRESENTLY
- ▁GEN
- ▁COW
- LLY
- ▁PROMISED
- UOUS
- ▁LIFTED
- ▁MEANING
- ALL
- ▁FAIL
- NER
- ▁REGULAR
- ▁VIRTUE
- ▁STUDY
- ▁PROTECT
- ▁FOND
- ▁FANCY
- ▁STOCK
- ▁KEY
- ▁JUSTICE
- ▁PACK
- LET
- ▁AFFAIRS
- ▁DIFFICULTY
- ▁WORE
- ▁COST
- ▁HEAT
- ▁SHOULDER
- ▁OFFERED
- ▁MISTAKE
- ▁DOLLARS
- ▁LOOKS
- QUA
- ▁BREAST
- ▁PRINCIPLE
- ▁CHARLES
- ▁TEETH
- ▁OCCUPIED
- ▁DROP
- ▁PAPA
- ▁SHEEP
- ▁KNOWS
- ▁DECK
- ▁BORE
- ▁EXC
- ▁SURPRISED
- ▁STATION
- ▁PL
- ▁PR
- ▁OURSELVES
- ▁SYMPATHY
- ▁RUTH
- ▁EXCITED
- ▁CONTROL
- ▁ANGRY
- ▁IMAGINATION
- ▁WITNESS
- ▁HOLDING
- THER
- DA
- ▁TRADE
- ▁CREATURE
- ▁SISTERS
- ▁JOIN
- LAS
- ▁ALTOGETHER
- ▁CIVIL
- ▁EMPTY
- ▁LEAP
- ▁HURT
- ▁BOLD
- ▁TASK
- ▁POLICE
- ▁DRAGON
- ▁MAID
- ▁CLAIM
- ▁SHAME
- ▁PHYSICAL
- ▁CONC
- ▁SEIZED
- ▁OB
- ▁LIVES
- ▁HEIGHT
- ▁GI
- ▁PAL
- ▁CHARMING
- ▁FEELINGS
- ▁SERVANTS
- ▁DELIVER
- ▁FRUIT
- ▁SATISFIED
- ▁STRUGGLE
- ▁WROTE
- ▁CONCEAL
- ▁MOVING
- ▁FLASH
- ▁OPPOSITE
- ▁HURRY
- ▁ROUGH
- ▁PRICE
- ▁AWFUL
- ▁SAND
- ▁SLIPP
- ▁SHOWN
- ▁SPRA
- ▁AGREED
- ▁FIXED
- ▁PERCEIVED
- ▁UPPER
- ▁FINGER
- ▁FINGERS
- ▁EAGER
- LF
- ▁EARS
- LIGHT
- ▁IMAGINE
- ▁LIKELY
- ▁COAST
- ▁UNITED
- ▁VAN
- ▁EXPLAINED
- ▁TELLING
- ▁DANGEROUS
- ▁DICK
- ▁COOL
- ▁CAL
- ▁INSIST
- BI
- ▁SECURE
- ▁HILLS
- ▁SAN
- ▁CHEER
- ▁FILL
- ▁BUY
- ZA
- HI
- ▁CLOTH
- ▁POSSESSED
- ▁ADVANCE
- ▁METHOD
- ATIVE
- ▁GREATLY
- ▁SMOKE
- ▁HIGHER
- ▁COMPANIONS
- ▁ANIMALS
- ▁GALL
- ▁QUIETLY
- ▁TRAVELL
- ▁RESOLVED
- ▁FLEW
- ▁CARLYLE
- ▁MEMORY
- ▁RESIST
- ▁GRAHAM
- ▁LAUGHING
- ▁FAITH
- ▁BIRD
- CRI
- ▁LEAVES
- ▁AMERICA
- ▁DEMAND
- BOARD
- ▁AWAKE
- ▁CURIOSITY
- ▁LANGUAGE
- ▁VIOLENT
- ▁AWARE
- ▁DOUBLE
- ▁LOOSE
- LIKE
- ▁ADAM
- ▁RISING
- ▁HOTEL
- ▁BAND
- ▁ENGAGED
- ▁HEADS
- ▁LOG
- ▁FORMED
- ▁WINDOWS
- ▁PREFER
- RUS
- ▁THROWN
- ▁ARCH
- ▁PAUSE
- ▁SERVE
- KIN
- ▁FALLING
- ▁VO
- ▁WHISPERED
- ▁POWERFUL
- ▁ER
- ▁DEPART
- ▁CRUEL
- ▁EXAMPLE
- ▁SMOOTH
- ▁INTRODUC
- ▁RELIGION
- ▁SEVENTEEN
- ▁ABSENCE
- ▁PRINT
- ▁SHINING
- ▁ICE
- ▁POET
- ▁DREADFUL
- ▁REQUIRED
- ▁ORIGINAL
- ▁POINTED
- ▁INSIDE
- ▁BROTHERS
- ▁PRODUCED
- ▁SPOKEN
- ▁CREATURES
- ▁FLY
- ▁TOM
- ▁PURSU
- ▁SYSTEM
- ▁EXCELLENT
- ▁EXCITEMENT
- ▁MIDDLE
- ▁FALSE
- ▁REGRET
- ▁RAY
- ▁PHYSICIAN
- ▁COP
- ▁VALUE
- ▁TOUCHED
- ▁FLAT
- ▁OAK
- ▁SUM
- ▁LOSS
- ▁PAPERS
- ▁STEPP
- ▁REVER
- ▁SHADE
- SOME
- ▁LISTENED
- ▁N
- ▁DISCOVER
- ▁BITTER
- TERN
- ▁HOLE
- ▁ADVANCED
- ▁PICK
- ARTAGNAN
- ▁CORPORAL
- ▁ASLEEP
- ▁TEMPLE
- ▁INDICAT
- IUM
- ▁FARTHER
- ▁EXCUSE
- ▁FLU
- ▁NOSE
- ▁SIXTY
- ▁SUPPOSED
- ▁PROVED
- ▁RATE
- ▁SHOULDERS
- ▁AFFAIR
- ▁FIELDS
- ▁REMARKED
- AVE
- ▁WEEKS
- ▁ESTABLISH
- ▁PARIS
- ▁ADMIT
- ▁NEIGHBOR
- ▁ATTRACT
- ▁CUSTOM
- ▁DISTINGUISH
- ▁SURFACE
- ▁COUPLE
- ▁DEVIL
- ▁LIMIT
- ▁ROYAL
- ▁FOOL
- ▁RARE
- ▁PRIDE
- ▁PROFESSOR
- ▁SAKE
- ▁DALE
- ▁VAST
- ▁REFUSED
- ▁FAILED
- ▁BAG
- ▁ROB
- ▁WASH
- ▁FAIRY
- ▁FREQUENT
- ▁MARILLA
- ▁PROGRESS
- ▁RELIEF
- ▁DROVE
- ▁DOZEN
- ▁AHEAD
- ▁ADVENTURE
- ▁GRANT
- ▁PRIM
- ▁MENTAL
- ▁PAIR
- ▁IMPRESSION
- ▁WOUNDED
- ▁FULLY
- ▁DISAPPEARED
- ▁MILE
- ▁DRIVE
- ▁MUD
- ▁SIZE
- ▁ANIMAL
- ZE
- ▁GRE
- ▁REPRESENT
- ▁ACQUAINTANCE
- ▁INSTRUMENT
- ▁SPLENDID
- ▁UNKNOWN
- ▁CORONEL
- ▁EMPEROR
- ▁EARNEST
- ▁EXTEND
- ▁BRIEF
- ▁RENDER
- ▁PARENTS
- ▁GENTLY
- ▁CALLING
- ▁TRIBE
- ▁CHRISTIAN
- ▁INTERESTING
- ▁LAMP
- ▁JIMM
- ▁DIV
- ▁LOVER
- UCH
- ▁HID
- ▁NEEDED
- ▁ORDERED
- ▁MEAL
- ▁SLOW
- ▁DAM
- ▁CLOUDS
- ▁DAN
- ▁GAR
- ▁EXPLAIN
- ▁QUI
- ▁CLIMB
- ▁HURRIED
- ▁MURMUR
- ▁SWIFT
- ▁ARTHUR
- ▁JEFF
- ▁KINGDOM
- ▁MESSAGE
- ▁PROTEST
- ▁ORGAN
- ▁RISK
- ▁FORGIVE
- ▁OCCURRED
- ▁PEARL
- ▁ODD
- ▁INFORMATION
- ▁BUSY
- ▁TRI
- ▁LACK
- ▁BAY
- ▁FLEET
- ▁CROWN
- ▁WAITED
- ▁BIRDS
- ▁PITY
- ▁SUCCEEDED
- ▁INFORMED
- ▁WISHES
- ▁DIRECTLY
- ▁CABIN
- ▁AUGUST
- ▁COUNTENANCE
- ▁HORROR
- ▁PHILIP
- ▁POPULAR
- ▁PREVIOUS
- ▁CONTRARY
- ▁ARTICLE
- ▁DIFFERENCE
- ▁HIDDEN
- ▁HUGE
- ▁AUTHORITY
- ▁POUND
- ▁JUMP
- ▁SPI
- ▁SHAKE
- ▁EVENTS
- ▁FRO
- ▁LEAN
- ▁CRO
- ▁TRIM
- ▁SHARE
- ▁FISHER
- ▁SETTLED
- ▁QUESTIONS
- ▁SI
- ▁VAL
- ▁APPROACHED
- ▁SUGGESTED
- ▁CONTINU
- ▁PERFORM
- ▁ACKNOWLEDG
- ▁CLIFF
- ▁COLONEL
- ▁GHOST
- ▁MAJESTY
- ▁EMOTION
- ▁SUPPER
- ▁DISTANT
- ▁INTERESTED
- ▁JACK
- ▁HUM
- ▁TRAMP
- ▁BRI
- ▁POUR
- ▁SHIPS
- ▁CHAIN
- ▁DY
- ▁RANK
- ▁MATTERS
- ▁LOVELY
- AW
- ▁PAT
- ▁WORKING
- ▁CONSEIL
- ▁EVIDENCE
- ▁MERCHANT
- ▁SOLEMN
- ▁CONSTANT
- ▁MINISTER
- ▁OFFICIAL
- ▁SENTIMENT
- ▁CENTURY
- ▁DELAY
- ▁JAMES
- ▁MATCH
- ▁FOREIGN
- ▁AROSE
- ▁BEAST
- ▁BAB
- ▁WIT
- ▁REMARKABLE
- ▁THOR
- ▁COMPAR
- ▁MAL
- ▁NEARER
- ▁FOURTH
- ▁GREY
- ▁MENTION
- ▁RUBB
- ▁CHARM
- ▁BARON
- ▁DESIRED
- SCAR
- ▁HOPED
- ▁TEACHER
- ▁MON
- ITCH
- BEL
- ▁PARTS
- ▁EIGHTY
- LAC
- GGING
- ▁REFLECT
- ▁COLLECT
- ▁BULL
- ▁CONSCIOUS
- ▁MOMENTS
- ▁DISTURB
- ▁COLLEGE
- ▁EGGS
- ▁STUPID
- ▁YESTERDAY
- ▁EXAMINE
- ▁FAULT
- ▁DEPTH
- ▁ROOT
- ▁MOUSE
- ▁SOUGHT
- ▁TURTLE
- ▁NATIVE
- ▁CRACK
- ▁SOLD
- ▁INVIT
- ▁PICKED
- ▁CEASED
- ▁HEARING
- ▁MIDS
- ▁PLAYING
- ▁STAGE
- ▁UNTO
- ▁GAIN
- ▁MIST
- ▁ORDERS
- ▁KNEES
- ▁TALE
- ▁DISTINCT
- ▁BENT
- ▁DESPAIR
- ▁TRIUMPH
- ▁SQUARE
- ▁THROAT
- ▁BOUGHT
- ▁PERMIT
- ▁SPEND
- ▁TRIP
- ▁THREATEN
- ▁ROME
- INESS
- ▁EXPOS
- GON
- ▁WRITING
- ▁INCREASED
- ▁PORTION
- ▁TENT
- IUS
- ▁YO
- ▁INTENDED
- ▁NAMED
- RATION
- ▁NOTIC
- ▁PIPE
- ▁WILLING
- ▁INSTANTLY
- ▁SERVED
- ▁BAL
- ▁POSSESS
- ▁CRE
- ▁ADMIRATION
- ▁LIBERTY
- ▁OPPORTUNITY
- ▁SELDOM
- ▁BIRTH
- ▁GLOW
- ▁INCLUD
- ▁REQUEST
- ▁TYPE
- ▁SLEPT
- ▁CRIME
- ▁MOTIVE
- ▁ELSIE
- ▁BEGUN
- ▁CONSENT
- ▁ADMITTED
- ▁AVOID
- ▁ADDRESS
- ▁HATE
- ▁DEMANDED
- ▁APPARENTLY
- ▁SUGGESTION
- ▁CONSIDERATION
- ▁BLESS
- ▁PROCEEDED
- NCY
- ▁PRISON
- ▁CONT
- ▁SHOUTED
- ▁FACES
- ▁SPIRITS
- ▁DEVELOP
- ▁ACCIDENT
- ▁ADVICE
- ▁INNOCENT
- ▁INSTINCT
- ▁UNCONSCIOUS
- ▁MYSTERIOUS
- ▁PRETEND
- ▁PEEP
- ▁ANYONE
- ▁DUKE
- ▁PLUM
- VILLE
- ▁SEVERE
- ▁ALAS
- ▁DELIGHTED
- ▁ISSUE
- ▁ASKING
- ▁CROW
- ▁ACCEPTED
- ▁RIDE
- ▁DOORS
- ▁TAR
- ▁PREPAR
- ▁SUGGEST
- WOOD
- ▁CITIZEN
- ▁ENTRANCE
- ▁LINCOLN
- ▁POLITICAL
- ▁PRACTICAL
- ▁STIFF
- ▁WIDOW
- ▁CAPITAL
- ▁CLEVER
- ▁MAMMA
- ▁CREDIT
- ▁OBEY
- ▁STRING
- ▁DAILY
- ▁ARGUMENT
- ▁HEAP
- ▁APARTMENT
- ▁FLIGHT
- ▁ELDER
- ▁PUR
- ▁PAGE
- ▁DUST
- ▁GAZE
- ▁NATIONAL
- ▁BABY
- DDING
- ISTS
- ▁TEACH
- ▁STREETS
- CAL
- ▁GE
- AFF
- ▁GOES
- ▁POSSIBL
- UNG
- ▁LINES
- GUE
- ▁VOTE
- ▁HUNTING
- ▁QUO
- ▁RESEMBL
- ▁BASKET
- ▁CIRCLE
- ▁CONSEQUENCE
- ▁KITCHEN
- ▁TREASURE
- ▁NEVERTHELESS
- ▁FANCI
- ▁ASSEMBL
- ▁GRIEF
- ▁VEIL
- ▁SEASON
- ▁INVENT
- ▁VIRGINIA
- ▁HUT
- ▁GUEST
- ▁ROAR
- ▁BEHOLD
- ▁VICTORY
- ▁CAPABLE
- ▁DULL
- ▁SHOE
- ▁FLOAT
- ▁MERRY
- ▁IMMEDIATE
- ETH
- ▁ELEANOR
- ▁EXPLANATION
- ▁PARLIAMENT
- ▁PRINCIPAL
- ▁PROPORTION
- ▁RESOLUTION
- ▁UNUSUAL
- ▁BLUFF
- ▁NINETEEN
- ▁SENSATION
- ▁VISIBLE
- ▁INCOME
- ▁FATE
- ▁SUPER
- ▁LAUGHTER
- ▁EASE
- ▁LOAD
- ▁JEW
- ▁ZE
- ▁FEVER
- ▁WEDDING
- ▁JOINED
- ▁TRACE
- ▁LEADER
- ▁CLEARLY
- ▁FLOWER
- ▁TERMS
- ▁EMPLOYED
- OCK
- ▁PARTICULARLY
- ▁MEMBERS
- ▁CONFESS
- ▁GRO
- ▁ADDRESSED
- ▁CHRIST
- ▁ACCOMPANI
- ▁AFFORD
- ▁AMOUNT
- ▁BRILLIANT
- ▁COMMUNICAT
- ▁FIERCE
- ▁RECORD
- ▁SACRIFICE
- ▁TEMPT
- ▁CORDIAL
- ▁COLOUR
- ▁PROOF
- ▁ESTATE
- ▁PARDON
- ▁ADVIS
- ▁ATTITUDE
- ▁IMPORTANCE
- ▁BOOT
- ▁SHOCK
- ▁FIR
- ▁PLENT
- ▁HIT
- ▁MEMBER
- ▁SUR
- ▁SEATED
- ▁MAG
- AVING
- ▁FAVOUR
- ▁REMARK
- ▁DIM
- ▁FAITHFUL
- ▁SAVED
- CHI
- ▁SIN
- THE
- ▁CONFIDENCE
- ▁EXTRAORDINARY
- ▁FORTUNATE
- ▁MISFORTUNE
- ▁PATIENCE
- ▁RELIGIOUS
- ▁SATISFACTION
- ▁POSITIVE
- ▁SIMILAR
- ▁EXCHANG
- ▁RETREAT
- ▁FLESH
- ▁ADMIRE
- ▁SPIRITUAL
- ▁DAWN
- ▁BURIED
- ▁URGE
- ▁SUNDAY
- ▁FOX
- ▁EMMA
- ▁NURSE
- ▁SNAPP
- ▁PARK
- ▁OBTAIN
- ▁RECOGNIZED
- ▁SPEED
- ▁MAGIC
- ▁LAWS
- ▁REMOVED
- ▁HAM
- ▁PRESERV
- ▁AID
- HOUSE
- ▁MENTIONED
- ▁CONSCIENCE
- ▁CONTEMPT
- ▁DETAIL
- ▁IMMENSE
- ▁NERVOUS
- ▁PRISCILLA
- ▁UNFORTUNATE
- ▁UNHAPPY
- ▁COMPLAIN
- ▁TWICE
- ▁WHISTL
- ▁SNAKE
- ▁WASHINGTON
- ▁PIRATE
- ▁WICKED
- ▁BODIES
- ▁DESIGN
- ▁JASON
- ▁VAGUE
- ▁CONSIST
- ▁GIFT
- ▁ANGEL
- ▁RODE
- ▁FOLD
- ▁BRIDE
- ▁ANGER
- ▁BASE
- ITUDE
- ▁CONCLUDED
- ▁ALTER
- ▁FRI
- ▁PANT
- ▁BID
- ▁HIGHEST
- ▁SAILOR
- MPLE
- ▁OBSERV
- ▁CHEERFUL
- IFICATION
- RID
- ▁DESCRIBED
- ▁BIN
- ▁JEWEL
- ▁ARTIST
- ▁PEER
- ▁NORA
- ▁SKI
- ▁DIAMOND
- ▁ENCOURAGE
- ▁PRIVILEGE
- ▁PROJECT
- ▁ANYBODY
- ▁ENCOUNTER
- ▁HOLLOW
- ▁YIELD
- ▁BOBBY
- ▁SAVAGE
- ▁SOMEBODY
- ▁OTHERWISE
- ▁PRAISE
- ▁PROBLEM
- ▁DISTRESS
- ▁UGLY
- ▁WARRIOR
- ▁MOURN
- ▁RELIEV
- ▁DESK
- ▁FOOLISH
- ▁STARTLED
- ▁SKILL
- SHONE
- ▁LONE
- ▁OBSERVATION
- ▁DENI
- ▁NEST
- ▁SOLDIER
- ▁RELATION
- ▁TRULY
- ▁VISITOR
- ▁OFFICERS
- ERSON
- ▁YA
- ▁EVIDENT
- ▁DREAMS
- ▁KEEPING
- ▁PLAINLY
- ▁DRUNK
- ▁EMBRAC
- ▁INTELLIGENCE
- ▁LIEUTENANT
- ▁PERSUADE
- ▁SURROUNDING
- ▁UNIVERSAL
- ▁GLEAM
- ▁SUPERIOR
- ▁WHEEL
- ▁JEALOUS
- ▁QUEER
- ▁PIERRE
- ▁MILK
- ▁RAIL
- ▁FLUSH
- ▁STAIRS
- ▁JESUS
- ▁HORN
- ▁REGION
- ▁SAFETY
- ▁KA
- ▁GUIDE
- ▁CAKE
- ▁CUP
- ▁INQUIRED
- ▁DEFI
- ▁LESSON
- ▁WRETCHED
- ▁PACE
- ▁TEST
- ▁READING
- ▁ENTIRE
- ▁NET
- ▁DOGS
- ▁COMMANDER
- ▁PRODUCE
- ▁GAINED
- ▁ARRIVAL
- ▁FAMILIAR
- ▁MEANWHILE
- ▁SUSPICION
- ▁CHOICE
- ▁IMPULSE
- ▁THRUST
- ▁PROCESS
- ▁SUMMON
- ▁SHEPHERD
- ▁HASTILY
- ▁GRASP
- ▁COUNTESS
- ▁STYLE
- ▁DWELL
- ▁MERIT
- ▁PITCH
- ▁HUNGRY
- ▁SPORT
- ▁LOUISE
- ▁STERN
- ▁PROVIDED
- ▁ASSUME
- ▁EARLIE
- ▁RAGE
- ▁U
- ▁RAPIDLY
- PORT
- ▁SUCCESSFUL
- ▁FLED
- ▁AGREE
- ▁CONDITIONS
- ▁RELATIONS
- ▁DREAD
- ▁NATURALLY
- ▁EARL
- ▁GAY
- ▁HYPNOTI
- ▁PUTT
- ▁GAZ
- ▁JIM
- ▁PAUS
- ▁PROPOS
- ▁ADMINISTRATION
- ▁ELEVEN
- ▁HOSPITAL
- ▁MAGISTRATE
- ▁STRIKE
- ▁DIGNITY
- ▁GLORY
- ▁BOTTLE
- ▁THRONE
- ▁RECKON
- ▁COSETTE
- ▁MOREOVER
- ▁APPLI
- ▁HIND
- ▁PRODUCT
- ▁POOL
- ▁TRIAL
- HAN
- ▁ERIC
- ▁CUB
- ▁PIECES
- ▁EXCEPTION
- ▁ENJOYED
- ▁DARED
- ▁TRU
- ▁CLOSELY
- ▁RAPID
- ▁AFFECTED
- ▁REQUIRE
- ▁SOFTLY
- ▁BROW
- UCK
- ▁MARKED
- ▁SEVENT
- ▁ELECT
- ▁FORGOT
- ▁CORRECT
- ▁FRANCS
- ▁MARGUERITE
- ▁SCIENCE
- ▁UNEXPECTED
- ▁FOUGHT
- ▁MILITA
- ▁THUNDER
- ▁VOYAGE
- ▁GANEM
- ▁FREEDOM
- ▁NODDED
- ▁CAPTURE
- ▁MORTAL
- ▁OWNER
- ▁POLITE
- ▁VISION
- ▁EDUCATION
- ▁GOVERNOR
- ▁RAV
- ▁REWARD
- ▁HASTE
- ▁REPEAT
- ▁DETERMIN
- ▁PITI
- ▁KNEE
- LINE
- ▁DEVOTED
- ▁INTERRUPTED
- ▁FOLKS
- ▁EXTREME
- ▁APPROACH
- ▁CONTINUE
- ▁BEARING
- ▁CHAP
- ▁ACQUAINTED
- ▁GLIMPSE
- ▁GRADUALLY
- ▁SUNSHINE
- ▁PRACTICE
- ▁SUPPLI
- ▁DAVID
- ▁DRIFT
- ▁SHOWING
- ▁LEVEL
- ▁PROMPT
- ▁QUARREL
- ▁REPRESENTATIVE
- ▁PLUNG
- ▁GIANT
- FALL
- ▁STOUT
- CHA
- WEPT
- ▁GLANC
- ▁SALT
- ▁CHOSEN
- ▁BUCK
- ▁REALIZED
- ▁REALITY
- ▁TUR
- ▁DRIVEN
- ▁CARD
- ▁PRAYER
- ▁TERM
- AID
- ▁HOLY
- ▁ENDURE
- ▁RANGE
- ▁HANG
- ▁SAM
- LAN
- ▁CAVE
- INA
- ▁GRI
- ▁SIGH
- ▁NEIGHBOUR
- ▁COUNCIL
- ▁EXERCISE
- ▁NAUTILUS
- ▁SOMEWHERE
- ▁SYLVIA
- ▁THOROUGH
- ▁VICTIM
- ▁BRIDGE
- ▁COMPELLED
- ▁INCLINED
- ▁OVERCOME
- ▁RESERVE
- ▁ARREST
- ▁PRECIOUS
- ▁DUTCH
- ▁OCEAN
- ▁ACQUIR
- ▁RECALL
- ▁DESTIN
- ▁ATTACH
- ▁SLIM
- ▁WEEP
- ▁CONSCIOUSNESS
- ▁TIGHT
- ▁WAKE
- ▁COMFORTABLE
- ▁ACTIVE
- ▁WINGS
- ▁GRIN
- ▁AFFECT
- ▁WHIT
- ▁IDEAL
- ▁EASTER
- ▁APPROACHING
- ▁CREATED
- ▁PLANS
- ▁INCREASE
- ▁FLYING
- ▁SHOUT
- OES
- MISSION
- ▁ARMED
- ABILITY
- ▁BLUSH
- ▁CONNECTION
- ▁MATTHEW
- ▁MEDICINE
- ▁REMIND
- ▁EXHIBIT
- ▁BLOCK
- ▁DESERVE
- ▁LISTENING
- ▁TITLE
- ▁FLOUR
- ▁FLAME
- ▁AGENT
- ▁USEFUL
- ▁BRIG
- ▁BOIL
- ▁ASSURED
- ▁REFLECTION
- ▁PINE
- ▁WAG
- ▁YOUNGER
- ▁BEARD
- ▁KINDNESS
- CTUALLY
- ▁ACTUAL
- ▁WEIGHT
- ▁LILY
- ▁IMPRESS
- ▁DESCRIBE
- ▁BEHELD
- ▁COMMUNITY
- ▁DESPERATE
- ▁DISPLAY
- ▁ENEMIES
- ▁MELANCHOLY
- ▁MIRROR
- ▁RECOMMEND
- ▁SPANISH
- ▁BLAME
- ▁VOLUME
- ▁SHOOT
- ▁COMBIN
- ▁SHAKING
- ▁SOUTHERN
- ▁MYSTERY
- ▁EVERYONE
- ▁COMMISSION
- ▁COMPOSED
- ▁UDO
- ▁IMAGE
- ▁DECEIV
- ▁FAILURE
- ▁PATTY
- ▁ALICE
- ▁FRAME
- ▁MODEST
- ▁MAGNIFICENT
- ▁BRANCHES
- ▁REIGN
- ▁RAG
- ▁PARISH
- ▁KATE
- ▁AMID
- ▁SLEEPING
- ▁ANNOUNCED
- ▁EAGERLY
- ▁WIRE
- ▁LAP
- ▁ARAB
- ▁EATING
- ▁RUM
- ▁CAREFUL
- ▁DISCUSS
- WORTH
- ▁DISTRICT
- ▁FOREHEAD
- ▁FRANCIS
- ▁INCIDENT
- ▁APPEAL
- ▁EMBARRASS
- ▁MAINTAIN
- ▁PRONOUNC
- ▁FURNISH
- ▁STRAIN
- ▁ELEMENT
- ▁SILK
- ▁FEAST
- ▁RECENT
- ▁DANCING
- ▁LODGE
- ▁ASHAMED
- ▁TRICK
- ▁BOBO
- ▁STUFF
- ▁ET
- ▁ASSERT
- ▁SANK
- ▁TREATMENT
- ECI
- ▁SWIM
- ▁BECOMING
- ▁SINGING
- ▁PLATE
- ▁SCATTERED
- ▁EXTREMELY
- ▁GRIM
- ▁SANG
- ▁FIGHTING
- ▁FACTOR
- ▁PAINFUL
- ▁HIDE
- ▁FUNN
- ▁AFTERWARD
- ▁FROG
- ▁VENTURE
- ▁DISAPPOINT
- ▁COMRADE
- ▁MONSIEUR
- ▁OBVIOUS
- ▁PASSENGER
- ▁PROFOUND
- ▁PUBLISH
- ▁ACCUSTOM
- ▁BLOOM
- ▁SMITH
- ▁RELATIVE
- ▁ACCUSE
- ▁MANIFEST
- ▁SOLID
- ▁MONSTER
- ▁MARIUS
- ▁CANDLE
- ▁PROCUR
- ▁INTERFERE
- ▁HOUSEHOLD
- ▁DEVELOPMENT
- ▁AGREEABLE
- ▁HALT
- ▁NECESSITY
- FOLD
- ▁CITIES
- ▁REGI
- ▁GLOOMY
- BBL
- ▁SEPARATED
- ▁CHEST
- ▁STRIP
- ▁SPAR
- ▁DUN
- ▁SETTLE
- ▁STARED
- ▁HANGING
- ▁FEATURES
- ▁PILE
- ▁ORIGIN
- ARIES
- ▁LION
- ▁ALI
- ▁ASTONISHMENT
- ▁COMPLIMENT
- ▁DELICATE
- ▁COUNSEL
- ▁FIFTH
- ▁SUPPRESS
- ▁BURDEN
- ▁COMPLEX
- ▁ADDITION
- ▁CRUSH
- ▁TWIST
- ▁PIANO
- ▁BRUSH
- ▁CHECK
- ▁ANNIE
- ▁SHELTER
- ▁IMPROV
- ▁WESTERN
- ▁LOCAL
- ▁APPLE
- ▁GREET
- ▁MASK
- ▁RUSSIAN
- ▁TOWER
- ▁CREW
- ▁TIP
- ▁WANDERING
- ▁READER
- ▁WANDERED
- ▁DESTROY
- ▁OBSERVE
- MORE
- ▁ESCAPED
- ▁PET
- ▁BUILD
- ▁REAR
- ▁DESTROYED
- HIN
- ▁OWE
- ▁RANG
- ▁TEAR
- ▁NED
- ▁OFFICER
- ▁TRAP
- ▁OCCUR
- ▁APPOINTED
- ▁ATMOSPHERE
- ▁CHOOSE
- ▁CONCLUSION
- ▁CULTIVAT
- ▁DESCRIPTION
- ▁ENORMOUS
- ▁EXHAUSTED
- ▁LANDSCAPE
- ▁NATASHA
- ▁PROSPECT
- ▁REFRESH
- ▁SPECIES
- ▁SURROUNDED
- ▁WEAPON
- ▁BLANK
- ▁DEFEND
- ▁EDITH
- ▁HORRIBL
- ▁BETRAY
- ▁FERKO
- ▁LABOUR
- ▁NEGRO
- ▁RESUMED
- ▁LEAF
- ▁MUSKET
- ▁INTENSE
- ▁MERCY
- ▁ADOPT
- ▁SCORE
- ▁DASH
- ▁LAWYER
- ▁SLOPE
- ▁CHUCK
- ▁ASSISTANCE
- ▁BROOK
- ▁BREAKING
- ▁ASSIST
- ▁GROAN
- ▁HELEN
- ▁BEHAV
- ▁MAIDEN
- ▁CRIS
- ▁SHOUTING
- ▁NAY
- ▁PIG
- ▁ACCORDINGLY
- ETTE
- ▁DESIR
- ▁RUB
- ▁GRU
- ▁PIT
- ▁HEAVI
- ▁OBTAINED
- ▁SPARE
- ▁BRANCH
- ▁COUNTER
- ▁APART
- ▁AMBITION
- ▁ASTONISHED
- ▁CORRESPOND
- ▁DRIVING
- ▁ENERGY
- ▁HISTORIAN
- ▁REVOLUTION
- ▁SWEEP
- ▁TREMBLING
- ▁CRAFT
- ▁FAMILIES
- ▁LITERATURE
- SBURG
- ▁FEMALE
- ▁TILNEY
- ▁GENEROUS
- ▁SUBMIT
- ▁INTELLECTUAL
- ▁ORCHARD
- ▁STORIES
- ▁DIANA
- ▁VEIN
- ▁TRIFL
- ▁TWIN
- ▁WORSHIP
- ▁MARBLE
- ▁GALLANT
- ▁SENSIBLE
- ▁NEAT
- ▁BROWNIE
- ▁JUNE
- ▁SHAW
- ▁WORST
- ▁USELESS
- ▁FISHING
- ▁CRYING
- ▁MAYBE
- ▁VARI
- ▁PRESERVE
- ▁VOL
- ▁EMPLOY
- ▁INTERRUPT
- ▁SLIGHTLY
- ▁ACCOMPLISHED
- NEY
- ▁STEAM
- ▁BALANC
- ▁LEANING
- ▁SIGHED
- ▁REFUSE
- ▁IMAGINED
- ▁DATE
- GROUND
- ▁ENTERTAIN
- ▁PERCEIVE
- ▁ABROAD
- ▁CHEESE
- ▁DESTRUCTION
- ▁ESSENTIAL
- ▁EXPEDITION
- ▁GRANDFATHER
- ▁INFINITE
- ▁LIBRARY
- ▁MULTITUDE
- ▁NEGLECT
- ▁SWALLOW
- ▁VILLEFORT
- ▁BELOVED
- ▁COMMITTEE
- ▁CONFIDENT
- ▁PURPLE
- ▁PURCHAS
- ▁SCRAP
- ▁SPOIL
- ▁LIKEWISE
- ▁EXTRA
- ▁STRAW
- ▁SALUT
- ▁SOURCE
- ▁HASTENED
- ▁RESENT
- ▁FLOCK
- ▁LOFT
- ▁FLO
- ▁CLO
- ▁CONVINCED
- ▁GOODNESS
- ▁HYPNOTIZ
- ▁SETTING
- ▁HAIL
- ▁PHI
- ▁GROVE
- ▁DISCOVERY
- ▁DAMP
- ▁WHISPER
- ▁LIFT
- ▁HOP
- ▁SUSPECTED
- ▁SCR
- OLI
- ▁FAC
- ▁BUSH
- ▁FOREVER
- ▁BARRICADE
- ▁CONSTITUTION
- ▁ENDEAVOR
- ▁ENTHUSIASM
- ▁EXECUTION
- ▁HYACINTH
- ▁PERCEVAL
- ▁PSYCHE
- ▁REPROACH
- ▁THIRTEEN
- ▁ABSORB
- ▁GRATITUDE
- ▁MERCER
- ▁REPUTATION
- ▁SCREAM
- ▁PUPIL
- ▁RETIRED
- ▁STEEP
- ▁SUMMIT
- ▁MISERABLE
- ▁STRICT
- ▁MINGLED
- ▁DEFEAT
- ▁REVEAL
- ▁LOVING
- ▁GOOSE
- ▁ECHO
- ▁AWAIT
- ▁MOOD
- ▁CRAWLEY
- ▁CELL
- ▁ENGAGEMENT
- ▁PRECED
- ▁SOMEONE
- ▁ARRANGEMENT
- ▁PICKET
- ▁GASP
- ▁HUMOR
- ▁INVITATION
- ▁JOB
- WITHSTAND
- ▁LAMENT
- ▁CLASSES
- ▁HUNGER
- ▁DISPOSED
- ▁STEAMER
- ▁FEARFUL
- ▁GER
- ▁FINAL
- ▁FLAG
- ▁JULY
- ▁DIG
- WORK
- ▁OPPOS
- ▁ANXIETY
- ▁AUDIENCE
- ▁BACHELOR
- ▁COLUMN
- ▁HANDKERCHIEF
- ▁IMPATIENT
- ▁JUDGMENT
- ▁KNIFE
- ▁SOVEREIGN
- ▁STRIKING
- ▁THOMPSON
- ▁EMPIRE
- ▁FULFIL
- ▁CONSULT
- ▁JENNY
- ▁THENARDIER
- ▁POYSER
- ▁FOURTEEN
- ▁JAPANESE
- ▁INDULG
- ▁MARTIAN
- ▁COUNTRIES
- ▁FETCH
- ▁CRITIC
- ▁ROBBER
- ▁CROOK
- ▁DEPARTURE
- ▁MABEL
- ▁PREACH
- ESCENT
- ▁WHIP
- ▁NAIL
- ▁DELIGHTFUL
- ▁DISCUSSION
- ▁SENTENCE
- ▁LANE
- ▁ENGINEER
- ▁ARRANGED
- MMY
- ▁LEST
- ▁RENT
- MMED
- ▁LIST
- ▁ROBE
- ▁MISSION
- ▁GRACEFUL
- ▁LIGHTN
- STONE
- COURT
- ▁CONCEPTION
- ▁CONTRACT
- ▁DROWN
- ▁EXPERIMENT
- ▁HITHERTO
- ▁PLAGUE
- ▁PORTHOS
- ▁SHRIEK
- ▁DETECT
- ▁ACCENT
- ▁ERECT
- ▁SAZEN
- ▁PROFIT
- ▁VIVID
- ▁SQUIRE
- ▁OPERATION
- ▁SMELL
- ▁SIMON
- ▁EXTENT
- ▁KEEN
- ▁EMERG
- ▁REVIV
- ▁REGIMENT
- ▁DISAPPOINTMENT
- ▁STOLE
- ▁DIVINE
- ▁GUILTY
- ▁COWARD
- ▁EXPECTATION
- ▁SIGNOR
- ▁MODE
- ▁CENTRE
- ▁FIL
- HOW
- ▁WEARI
- ▁TOTAL
- ▁VICTOR
- ▁GOVERN
- ▁RAISE
- ▁ABANDON
- ▁ABSURD
- ▁ASPECT
- ▁CRIMINAL
- ▁DEFINITE
- ▁DELIBERAT
- ▁FEATHER
- ▁FLORINA
- ▁MIDNIGHT
- ▁RICHMOND
- ▁SATISFY
- ▁SINGULAR
- ▁STEADILY
- ▁SUPREME
- ▁TIMBER
- ▁PSYCHOLOG
- ▁GESTURE
- ▁VALUABLE
- ▁INTERVAL
- ▁CONFUSION
- ▁FLUTTER
- ▁SACRED
- ▁DISEASE
- ▁UNDERTAKE
- ▁PENETRAT
- ▁MARVEL
- ▁NORTHERN
- ▁GRIEV
- ▁GENIUS
- ▁SADDLE
- ▁NOVEL
- ▁MISERY
- ▁CONVICTION
- ▁SINK
- ▁WAGON
- ▁ARISE
- ▁COMMENT
- ▁BARN
- UPON
- ▁FENCE
- ▁ASSOCIATION
- ▁BONES
- ▁IDLE
- ▁DOUBTFUL
- ▁PREPARATION
- IZZ
- ▁RAIS
- ▁BITTERLY
- ▁JOE
- ▁RELI
- ADI
- ▁METAL
- ▁EXACT
- ▁GLOOM
- FIELD
- ▁DANGLARS
- ▁DISGRACE
- ▁EXAMINATION
- ▁FASCINAT
- ▁GLITTER
- ▁INCREASING
- ▁MESSENGER
- ▁PATRIOT
- ▁PLATFORM
- ▁PROVISION
- ▁QUALITIES
- ▁SELECT
- ▁STEADY
- ▁POVERTY
- ▁POWDER
- ▁PROPHET
- ▁HOLLAND
- ▁TRUNK
- ▁VARIETY
- ▁PLANCHET
- ▁CONQUER
- ▁CONCEIVE
- ▁COMBAT
- ▁STOOP
- ▁SHIRT
- ▁GENERATION
- ▁COMMITTED
- ▁INSULT
- ▁CONFUSED
- ▁RADIAN
- ▁DEBT
- ▁IMITAT
- ▁DART
- ▁CAROLINE
- ▁SWAM
- ▁WREN
- ▁CHILDHOOD
- ▁BRAND
- ▁JOKE
- ▁FRIENDSHIP
- ▁DIRT
- ▁JOLL
- ▁BUSHES
- ▁MINK
- ▁ROUT
- ▁EQUALITY
- ▁HESITATED
- ▁BARK
- ▁ANTI
- ▁STATEMENT
- PHER
- ▁SUNK
- ▁DAT
- ▁BACKWARD
- ▁SUSPECT
- ▁OBJECTION
- ▁RAP
- ▁CHIN
- ▁MATE
- ▁REDUC
- ▁GREGG
- ▁ACCOMPANY
- ▁ANYWHERE
- ▁BENEFIT
- ▁CLERK
- ▁EXPENSE
- ▁FETNAH
- ▁INTERPRET
- ▁LUKASHKA
- ▁NUMEROUS
- ▁SURGEON
- ▁PUZZL
- ▁RESCUE
- ▁GRATEFUL
- ▁APPROV
- ▁RIVAL
- ▁NIECE
- ▁FLOOD
- ▁VANISHED
- ▁ERROR
- ▁BLAZ
- ▁TUMBL
- ▁WENDY
- ▁PERSIST
- ▁CONSOL
- ▁SOAP
- ▁HUMOUR
- ▁FITTED
- ▁HOUSEKEEPER
- ▁ENABL
- ▁OCCASIONALLY
- ▁HATRED
- ▁SWELL
- ▁WORRY
- ▁RUST
- ▁PURSUIT
- ▁INTIMATE
- ▁SEAL
- ▁COLLECTION
- ▁TREMBLED
- ▁DENY
- ▁HUMANITY
- ▁FATAL
- ▁COCK
- ▁DRIVER
- ▁HOPELESS
- ▁MISTAKEN
- ▁LUC
- ▁ACCOMPLISH
- ▁COAL
- ▁ACCORD
- ▁PURSE
- ▁SEPARATE
- ▁ARRIVE
- ▁SMOK
- ▁MADAM
- ▁ASSOCIAT
- ▁INSTRUCT
- ▁CELEBR
- ▁CHANNEL
- ▁CIVILIZATION
- ▁DOCTRINE
- ▁ENDEAVOUR
- ▁GLACIER
- ▁INTELLIGENT
- ▁INVOLVE
- ▁LEATHER
- ▁MUTTERED
- ▁OLENIN
- ▁PENCROFT
- ▁PERPLEX
- ▁SPECTATOR
- ▁UNIVERSITY
- ▁ATTAIN
- ▁INEVITABL
- ▁YONDER
- ▁ENCHANT
- ▁REPAIR
- ▁CURRENT
- ▁ASCEND
- ▁CREEK
- ▁SPARKL
- ▁RUE
- ▁BEAVER
- ▁INFANT
- ▁CONTINUALLY
- ▁CLASP
- ▁IRISH
- ▁ROLLIN
- ▁PUNISHMENT
- ▁LUNCH
- ▁AGONY
- ▁RUDE
- ▁DRAGG
- ▁INQUIRI
- ▁SEX
- ▁TERRIFI
- ▁ROBIN
- ▁PROFESSIONAL
- ▁SPUR
- ▁GRAIN
- ▁VINE
- ▁PENN
- ▁ROC
- ▁CHASE
- ▁INFORM
- ▁WRITER
- ▁AVO
- ▁TAP
- ▁CREAT
- ▁WHIL
- ▁BARR
- ▁ASSURE
- ▁CIRCUMSTANCE
- ▁OIL
- ▁ROUSE
- ▁COLUMB
- ▁CUNNING
- ▁DOMESTIC
- ▁GLORIOUS
- ▁INDIGNATION
- ▁PRECISELY
- ▁PRUDENCE
- ▁RAILROAD
- ▁SATURDAY
- ▁UTMOST
- ▁VIOLENCE
- ▁WHIRL
- ▁CALCULAT
- ▁OVERWHELM
- ▁PERPETUAL
- ▁QUARLES
- ▁SLENDER
- ▁TELEGRAPH
- ▁ALOUD
- ▁OPPRESS
- ▁CROPPER
- ▁CANADIAN
- ▁HERBERT
- ▁TIMID
- ▁SUPPLY
- ▁STROLL
- ▁CREEP
- ▁OATH
- ▁DUSK
- ▁EXCESS
- ▁HUMBLE
- ▁FURIOUS
- ▁RIDGE
- ▁BULLET
- ▁PONY
- ▁STATU
- ▁ENJOYMENT
- ▁CONWAY
- ▁DIFFICULTIES
- ▁PATCH
- ▁JOYCE
- ▁CLOCK
- ▁RESTORED
- ▁ARGU
- ▁WIG
- ▁CHATT
- ▁PLAC
- ▁REMOVE
- ▁TORN
- ▁DISAPPEAR
- TIME
- WELL
- ▁RECOGNIZE
- ▁FISHE
- ▁DECLARE
- ISTIC
- ▁AUTHOR
- ▁WHISK
- ▁COFFEE
- ▁COMPREHEND
- ▁DISGUISE
- ▁ELZEVIR
- ▁ENTERPRISE
- ▁HOLIDAY
- ▁HORIZON
- ▁IGNORANT
- ▁INTERVIEW
- ▁OLIVER
- ▁RONICKY
- ▁CAPACITY
- ▁DISPOSITION
- ▁EXTERNAL
- ▁OPPOSITION
- ▁REPUBLIC
- ▁WHEAT
- ▁CORPSE
- ▁DARLING
- ▁THRILL
- ▁INHABITANTS
- ▁ORNAMENT
- ▁SHIFT
- ▁RECOGNISE
- ▁SHIVER
- ▁BOAST
- ▁HINT
- ▁BOSTON
- ▁MULTI
- IFYING
- ▁STEAL
- ▁INSTRUCTIONS
- ▁ELECTRIC
- ▁SWING
- ▁SOOTH
- ▁SCALE
- ▁MORLAND
- ▁DISLIKE
- ▁FLATTER
- ▁COACH
- ▁LEIF
- ▁STAMP
- ▁ANYHOW
- ▁MOTIONLESS
- ▁ANDREA
- ▁LOSING
- ▁PAUL
- ▁CAROL
- ▁ADVANC
- ▁IMAGIN
- ▁CENTER
- ▁JAR
- ▁SUCCEED
- ▁DISMISS
- CTOR
- ▁RECEIV
- ▁DRAG
- ▁INTENT
- ▁BARBAR
- ▁PUNISH
- ▁ABRUPTLY
- ▁BERNARD
- ▁DECISION
- ▁INDEPENDENT
- ▁PROVINCE
- ▁SLEEVE
- ▁TREMENDOUS
- ▁UNPLEASANT
- ▁LEISURE
- ▁THRONG
- ▁THUMB
- ▁BANNER
- ▁CONTRADICT
- ▁RESTRAIN
- ▁DIVIDED
- ▁WRAPPED
- ▁HAUNT
- ▁SNEER
- CHESTER
- ▁JULIA
- ▁MILD
- ▁CONTACT
- ▁MEANTIME
- ▁NEEDLE
- ▁BLOT
- ▁BARREL
- ▁ISABELLA
- ▁THEATRE
- ▁ESTABLISHMENT
- ▁MARKET
- ▁CHINA
- ▁FORBID
- ▁PERISH
- ▁DOORWAY
- ▁CARLING
- ▁PERIL
- ▁PRIZE
- ▁HATCH
- ▁CURL
- ▁REFER
- ▁DEVOT
- EMBER
- MONT
- ▁CANOE
- ▁PROFESSION
- ▁CONVICT
- ▁CRAWL
- ▁ACTIVITY
- ▁BEWILDER
- ▁BREEZE
- ▁CONTEMPLAT
- ▁DISGUST
- ▁FATIGUE
- ▁MERRICK
- ▁PRAIRIE
- ▁REFORM
- ▁SPECTACLE
- ▁STUDENT
- ▁TUMULT
- ▁UNIFORM
- ▁VIGOROUS
- ▁CONDEMN
- ▁GENUINE
- ▁THOMAS
- ▁ARROW
- ▁PILLOW
- ▁FEEBLE
- ▁RALPH
- ▁SCHEME
- ▁COLLAR
- ▁JUSTINIAN
- ▁NERVE
- ▁OYSTER
- ▁BENNET
- ▁DUTIES
- ▁BINGLEY
- ▁CHRISTMAS
- ▁CONVEY
- ▁DESPIS
- ▁RATTL
- ▁GARMENTS
- ▁GOWN
- ▁BERYL
- ▁BARRIER
- ▁CHARACTERISTIC
- ▁MEDITAT
- ▁DISCOURSE
- ▁STAFF
- ▁KARA
- ▁MONTE
- ▁READILY
- ▁VENTUR
- ▁HENCE
- ▁ROPE
- ▁CRIES
- ▁ANGLE
- ▁RESPECTABLE
- ▁MOAN
- ▁OUTLINE
- BORN
- ▁FIX
- ▁INTEND
- LIA
- ▁CHILL
- ▁CREP
- ▁CHOSE
- ▁SPECULAT
- ▁ATTRIBUT
- ▁BUFFALO
- ▁ENTREAT
- ▁ENVELOP
- ▁FREDERICK
- ▁IMPATIENCE
- ▁INDIFFERENCE
- ▁INDUSTRY
- ▁INSTITUTION
- ▁LYNDE
- ▁RETAIN
- ▁TROUTINA
- ▁UNCOMFORTABL
- ▁VENGEANCE
- ▁JENKS
- ▁CONGRESS
- ▁SMART
- ▁THITHER
- ▁DISAGREE
- ▁IMPROVEMENT
- ▁PISTOL
- ▁GOSSIP
- ▁ETERNAL
- ▁BELIEF
- ▁SLEDGE
- ▁AROUSED
- ▁ORANGE
- ▁FASTENED
- ▁MONKEY
- ▁WITHDREW
- ▁OFFEND
- ▁PIERC
- ▁MOONLIGHT
- ▁OARS
- ▁GROOM
- ▁FIDDLER
- ▁BARBARA
- SHIRE
- ▁ATTENDANT
- ▁DIVERS
- ▁DUCK
- ▁PROPOSAL
- ▁GROWTH
- ▁CURATE
- ▁STEWAR
- ▁MOCK
- ▁SUCCESSION
- ▁CREATION
- ▁PARTIAL
- ▁SWU
- ▁FROST
- ▁EIGHTH
- ▁AWE
- ▁PERCH
- ▁LACE
- SPOON
- ▁ARRANGE
- SERIES
- ▁FOG
- ▁SCU
- ▁ABRAHAM
- ▁ADMIRAL
- ▁BARBICANE
- ▁CAMPAIGN
- ▁CONSEQUENTLY
- ▁CULTURE
- ▁GRAMMONT
- ▁GWYNPLAINE
- ▁HAPPILY
- ▁HOOPDRIVER
- ▁INDEPENDENCE
- ▁LEOPOLD
- ▁MISCHIEF
- ▁MONTGOMERY
- ▁NECESSARILY
- ▁PSYCHIC
- ▁RABBIT
- ▁REFUGE
- ▁RESPONSIBILIT
- ▁SENATOR
- ▁UNCERTAIN
- ▁MENSTRUA
- ▁FANNY
- ▁SUBSTANCE
- ▁APRIL
- ▁ELBOW
- ▁QUALITY
- ▁BORDER
- ▁BRUTAL
- ▁CARPET
- ▁SOLITAR
- ▁FROWN
- ▁SCENT
- ▁ANNOY
- ▁NAKED
- ▁BOSOM
- ▁CONSUM
- ▁TIGER
- ▁ITALIAN
- ▁PARSON
- ▁DECLIN
- ▁NEIGHBORHOOD
- ▁GREGGORY
- ▁EXCEED
- ▁SILLY
- ▁ICELAND
- ▁HIDEOUS
- ▁STRU
- ▁ALTERNAT
- ▁CABINET
- ▁ABILITY
- ▁BEECH
- ▁SECRETARY
- ▁CONTEST
- ▁MONK
- ▁PADD
- ▁EVA
- ▁CREST
- ▁FINISH
- ▁APPARENT
- ▁MIX
- ▁SLIP
- ▁LUXURI
- ▁AUTUMN
- ▁CIRCULAR
- ▁COMPOSITION
- ▁DISPLEAS
- ▁EXCELLENC
- ▁FURNITURE
- ▁GRADUATE
- ▁INDIFFERENT
- ▁JOSEPH
- ▁OCCUPATION
- ▁POSSIBILITY
- ▁RENEWED
- ▁RESPONDED
- ▁PREVAIL
- ▁HOARSE
- ▁PRACTIS
- ▁FAREWELL
- ▁JULIET
- ▁OVERHEAD
- ▁THREAD
- ▁APPLICATION
- ▁SOLITUDE
- ▁ADAPT
- ▁FALK
- ▁LARK
- ▁COARSE
- ▁MANKIND
- ▁KICK
- ▁BATTER
- ▁SOLICIT
- ▁RESIGN
- ▁MOTOR
- ▁STEEL
- ▁CONTRIV
- ▁AUTHORITIES
- ▁HARSH
- ▁FAVORITE
- ▁TALENT
- ▁FLEECE
- ▁AGITATION
- ▁ABBE
- ▁STUCK
- ▁HEDGE
- ▁BIBLE
- ▁RECOLLECTION
- ▁PARTNER
- ▁DAMON
- ▁SHINE
- ▁HOOK
- ▁CONFESSION
- ▁ASSENT
- ▁ELDE
- ▁BIGGE
- ▁PEACEFUL
- SCRIBED
- ▁WEIGH
- CARLET
- ▁DECIDE
- ▁RECOLLECT
- ▁BOHEMIA
- ▁CALIFORNIA
- ▁CONSTRUCT
- ▁DEMONSTRAT
- ▁DISTRIBUT
- ▁FRIGHTFUL
- ▁GNOME
- ▁IGNORANCE
- ▁JANUARY
- ▁JULIUS
- ▁MEMORIES
- ▁OCCUPY
- ▁PHRASE
- ▁WHIRLWIND
- ▁WILMINGTON
- ▁CARLINI
- ▁CHAUVELIN
- ▁ESTEEM
- ▁GENZABURO
- ▁GLOBE
- ▁LECOQ
- ▁MARGARET
- ▁MONARCH
- ▁NAPOLEON
- ▁SCORN
- ▁STAGGER
- ▁SUSTAIN
- ▁TRADITION
- ▁ADJUST
- ▁FROZEN
- ▁IMPRISON
- ▁LANTERN
- ▁MICHEL
- ▁STOMACH
- ▁TORRENT
- ▁WITHDRAW
- ▁FRANZ
- ▁POISON
- ▁SURVEY
- ▁BRITISH
- ▁ELEVAT
- ▁AWOKE
- ▁ESTHER
- ▁INHERIT
- ▁TRAVERS
- ▁STOPPING
- ▁IRELAND
- ▁COMPARATIVE
- ▁SOBB
- ▁FAVOURITE
- ▁CANVAS
- ▁CLOAK
- ▁GLAR
- ▁ASSISTANT
- ▁DAMAGE
- ▁PEAK
- ▁DISTINCTION
- FARE
- ▁DOLLAR
- ▁BEGGAR
- LUSIVE
- ▁MODEL
- ▁SECUR
- ▁DISPOS
- ▁SLID
- ▁PEA
- ▁SPEEDI
- HOLD
- ▁SNAP
- ▁CIGAR
- ▁AFFLICT
- ▁AMAZEMENT
- ▁LAUNCELOT
- ▁LEAGUE
- ▁MARIPOSA
- ▁POPULATION
- ▁UNEASY
- ▁BLOSSOM
- ▁CATERPILLAR
- ▁INCLINATION
- ▁SUSPEND
- ▁SYNDIC
- ▁TAYLOR
- ▁WILSON
- ▁CONTRAST
- ▁PORTRAIT
- ▁CORONER
- ▁GREEK
- ▁BUNDLE
- ▁BLEW
- ▁THORPE
- ▁ORPHAN
- ▁MUSCLE
- ▁DEAF
- ▁SURVIV
- ▁EXCEEDINGLY
- ▁TENDENC
- ▁ISRAEL
- ▁QUANTIT
- ▁PENSION
- ▁DRIED
- TEXT
- ▁REFERENCE
- ▁REPOSE
- ▁FOLLY
- ▁REPLACE
- ▁TERR
- ▁ANKLE
- ▁SUNLIGHT
- ▁SECURITY
- ▁SHOV
- ▁RAW
- CULAR
- ▁JACKET
- ▁TUNE
- ▁HOBB
- ▁MARTIN
- DUCED
- ▁FIST
- ▁BEGG
- ▁CHOK
- ▁INQUIRE
- ▁INTELLECT
- ▁AMUSEMENT
- ▁APPROPRIATE
- ▁CONGRATULAT
- ▁CONVENTION
- ▁DISCOURAG
- ▁EXQUISITE
- ▁FOUNTAIN
- ▁JUNIOR
- ▁NONSENSE
- ▁OBSTACLE
- ▁SPECIMEN
- ▁SWEAR
- ▁TRANQUIL
- ▁VEHICLE
- ▁WISDOM
- ▁ASCERTAIN
- ▁CAUTIOUS
- ▁CENTURIES
- ▁CORRUPT
- ▁EXPLOR
- ▁TURKEY
- ▁BARGAIN
- ▁CONFOUND
- ▁FUNCTION
- ▁GRACIOUS
- ▁MONICA
- ▁ILLUSTRAT
- ▁CRUMB
- ▁REMEDY
- ▁REMOTE
- ▁REVENGE
- ▁BABYLON
- ▁CAUTION
- ▁INTERIOR
- ▁CRISTEL
- ▁BRAZ
- ▁THIRST
- ▁PROBABLE
- ▁HARMONY
- ▁CHARITY
- ▁DECAY
- ▁COLONI
- ▁AVAIL
- ▁REPULS
- ▁ABSENT
- ▁PULSE
- ▁PRESUM
- ▁CRANE
- ▁NEIGHBOURHOOD
- ▁SUNSET
- ▁CANNON
- ▁GRAPE
- ▁SOFA
- ▁DRANK
- MINOUS
- ▁DECLARATION
- ▁CLOSING
- ▁MEEK
- ▁STARV
- ▁BUNCH
- ▁PERFORMANCE
- ▁ENTERTAINMENT
- ▁STRIV
- ▁EMILY
- ▁VALET
- MPOSED
- ▁INTIMA
- ▁POLISH
- ▁HIRE
- POST
- ▁TREMBLE
- ▁CEASE
- ▁VIRGIN
- ▁RUSSIA
- COURSE
- ▁EDUCAT
- BOUND
- ▁INHABIT
- ▁SUPERINTEND
- ▁BISCUIT
- ▁CHICAGO
- ▁CHOKICHI
- ▁CONFLICT
- ▁ENCLOS
- ▁EXCLUSION
- ▁EXECUTIVE
- ▁GRANDMOTHER
- ▁HEADQUARTERS
- ▁INFERIOR
- ▁INVISIBLE
- ▁MUTUAL
- ▁OPPONENT
- ▁SENSITIVE
- ▁STUDIED
- ▁TEMPORARY
- ▁UNWILLING
- ▁PERMANENT
- ▁BEDROOM
- ▁NOVEMBER
- ▁COMPLICAT
- ▁DEVOUR
- ▁SCRAMBL
- ▁SECTION
- ▁PROPOSITION
- ▁DEPRIV
- ▁RYNCH
- ▁PLEAD
- ▁TORTURE
- ▁SCOUT
- ▁PILOT
- ▁CHERISH
- ▁SPEAR
- ▁SUGAR
- ▁JASPER
- ▁STRAY
- ▁RIFLE
- ▁NORMAL
- ▁JERK
- ▁HONEY
- ▁AWAKENED
- ▁QUIVER
- ▁PYE
- ▁APPLY
- LICK
- JA
- ▁ANNOUNC
- FORE
- ▁ENGINE
- ▁HESITATE
- ▁PROVIDE
- ▁REALIZE
- ▁SEIZE
- ▁RESTORE
- MOUTH
- FOOT
- ▁DIFFER
- ▁ULTIMATE
- ▁ABUNDANCE
- ▁APPRECIATE
- ▁APPREHENSION
- ▁AVENUE
- ▁AWKWARD
- ▁CETERA
- ▁CHIMNEY
- ▁CLUTCH
- ▁CONVENIENT
- ▁CORRIDOR
- ▁DISTRACT
- ▁ELEGANT
- ▁ELSEWHERE
- ▁ENTHUSIASTIC
- ▁EXECUTE
- ▁EXTREMIT
- ▁JERUSALEM
- ▁MIRACLE
- ▁MONSTROUS
- ▁OBEDIENCE
- ▁OBSCURE
- ▁PHENOMENA
- ▁RESIDENCE
- ▁RESOURCE
- ▁REVOLT
- ▁SCIENTIFIC
- ▁SHIELD
- ▁SIMPSON
- ▁UNIVERSE
- VOLUNTARY
- ▁ATTENTIVE
- ▁BRENDA
- ▁DEPOSIT
- ▁MAXIM
- ▁REJECT
- ▁STIRRED
- ▁DISORDER
- ▁SERENE
- ▁TOBACCO
- ▁MILTON
- ▁BALLOON
- ▁STEPHEN
- ▁STRAIT
- ▁CHINESE
- ▁COURTEOUS
- ▁RELEASE
- ▁RECESS
- ▁COTTON
- ▁STUMP
- ▁TANK
- ▁PROMOTE
- ▁DERIVE
- ▁LOYAL
- ▁GRANIT
- ▁DISMAL
- ▁CATTLE
- ▁DOONE
- ▁CUPID
- DIGNIFIED
- ▁RIPE
- ▁EXILE
- ▁ANTIQU
- UMINAT
- ▁SUPPOS
- ▁WRETCH
- ▁IDENTI
- ▁EASI
- ▁SERV
- ▁QUEST
- TOWN
- ▁ACHIEVEMENT
- ▁APPETITE
- ▁BUCCANEER
- ▁COMMENCED
- ▁DELAWARE
- ▁DISCERN
- ▁IMMORTAL
- ▁INDIGNANT
- ▁JOSIANA
- ▁MECHANICAL
- ▁MUSKRAT
- ▁REVIEW
- ▁ROBARTS
- ▁SIGNIFICANT
- ▁SUBSEQUENT
- ▁YOURSELVES
- ▁ANGRILY
- ▁BORROW
- ▁SUBLIME
- ▁AFRICA
- ▁CHICKEN
- ▁DEGRAD
- ▁GEORGI
- ▁HUMILIAT
- ▁LODGING
- ▁REDCOAT
- ▁VIOLET
- ▁HOPKINS
- ▁RAWDON
- ▁PRICK
- ▁WHALE
- ▁FUNERAL
- ▁GUINEA
- ▁DISMAY
- ▁PORCH
- ▁HARVEST
- ▁PARCEL
- ▁SUBDU
- ▁SYRIA
- ▁PANIC
- ▁BOUGHS
- ▁CIGARETTE
- ▁CHRON
- ▁INQUIRY
- ▁CRYSTAL
- ▁SPELL
- ▁PLUCK
- ▁PATTERN
- ▁DARING
- ▁CRITICISM
- ▁DAINT
- ▁DISTURBANCE
- ▁BUTCHER
- ▁LITERA
- ▁ABUSE
- IXTURE
- ▁ANIMAT
- ▁WRIT
- ▁BELIEV
- ▁INDUCE
- COMING
- ▁DRAMA
- ▁AGITAT
- SHAW
- ▁IMPERFECT
- ▁MANUFACTURE
- ▁AFFIRM
- ▁ANGUISH
- ▁ARTIFICIAL
- ▁BIBBS
- ▁CHARLOTTE
- ▁CIRCUS
- ▁CONNISTON
- ▁CONSTITUTE
- ▁DAZZL
- ▁DEFECT
- ▁DISCHARG
- ▁ESCORT
- ▁EXAGGERAT
- ▁GWENDOLEN
- ▁IRRESISTIBL
- ▁PHILOSOPHY
- ▁PHOTOGRAPH
- ▁PILGRIM
- ▁PLEASING
- ▁QUIXOTE
- ▁RESPONSE
- ▁SCRATCH
- ▁SERGEANT
- ▁SHERIFF
- ▁SHUDDER
- ▁STRUCTURE
- ▁SUFFRAGE
- ▁SURRENDER
- ▁SWORE
- ▁VILLAIN
- ▁HESITATING
- ▁FLORENCE
- ▁IRRITAT
- ▁RIGID
- ▁SINISTER
- ▁STUDIO
- ▁RAFT
- ▁CHAMPION
- ▁PAVEMENT
- ▁WOLF
- ▁DEVICE
- ▁WRECK
- ▁HESITATION
- ▁LAZY
- ▁ADJO
- ▁DECENT
- ▁INTERVEN
- ▁WOOL
- ▁ILLUSION
- ▁HAWK
- ▁IMPART
- ▁LUNGS
- ▁WINNING
- ▁VITAL
- ▁CONSPI
- ▁SUBTLE
- ▁CONSTANC
- ▁HURL
- ▁AMIABL
- ▁FOLK
- GGY
- ▁NECESSIT
- ▁PROFESS
- WASH
- ▁ADMIRING
- ▁AMBITIOUS
- ▁ANTHONY
- ▁CEREMONY
- ▁CONTRIBUTE
- ▁CRAGGS
- ▁DETAIN
- ▁DISCLOS
- ▁DWELT
- ▁EGYPT
- ▁FELIX
- ▁JOURNAL
- ▁KWAIRYO
- ▁LIBERAL
- ▁LUMBER
- ▁OCTOBER
- ▁ORGANIZATION
- ▁POPULACE
- ▁PRECAUTION
- ▁PREJUDICE
- ▁PROCLAIM
- ▁PROPRIETOR
- ▁RESPONSIBLE
- ▁RHYTHM
- ▁RIDICULOUS
- ▁SCHOLAR
- ▁SQUEEZ
- ▁SUBSTITUTE
- ▁SURPASS
- ▁THRESHOLD
- ▁WHARTON
- ▁FLICKER
- ▁AMAZED
- ▁BRONZE
- ▁COSSACK
- ▁SPILETT
- ▁CASUAL
- ▁DARCY
- ▁PARLOUR
- ▁SEXUAL
- ▁INSECT
- ▁NATHAN
- ▁EMINENT
- ▁PENCIL
- ▁PETITION
- ▁ROTTEN
- ▁VIGIL
- ▁CAESAR
- ▁EAGLE
- ▁TREAD
- ▁REACTION
- ▁TACIT
- ▁PARLOR
- ▁SPAIN
- ▁WILDERNESS
- ▁DICTAT
- ▁GRATIFY
- ▁STOVE
- ▁SKIRT
- ▁UTILI
- ▁CONCERT
- ▁GORGE
- ▁DECORAT
- ▁LATIN
- ▁ANCHOR
- ▁KNOT
- ▁MONDAY
- ▁GABLES
- ▁TOLERABL
- ▁ROGER
- BERRIES
- ▁INVAD
- IMMER
- OMETER
- ▁PRODUC
- OBIL
- ▁PERMISSI
- FICIENCY
- ▁WANDER
- RREL
- PIECE
- HORN
- ▁COMMIT
- ▁ACCUMULAT
- ▁JAPAN
- ▁ABUNDANT
- ▁ACADEMY
- ▁ALBERT
- ▁BANQUET
- ▁DELICIOUS
- ▁DOCUMENT
- ▁EXCLAMATION
- ▁FEBRUARY
- ▁GROTESQUE
- ▁HEATHERSTONE
- ▁HUMPHREY
- ▁HURSTWOOD
- ▁MOHAMMED
- ▁MOSCOW
- ▁NICHOLAS
- ▁OBSTINATE
- ▁PHANTOM
- ▁PHILOSOPHER
- ▁RECEPTION
- ▁SPANIARD
- ▁SWOLLEN
- ▁TELEPHONE
- ▁TRIBUTE
- ▁TUNNEL
- ▁UNREASONABL
- ▁WIGWAM
- ▁BUTTERFLY
- ▁COLLINS
- ▁DISPATCH
- ▁EDITOR
- ▁CONTINENT
- ▁DIMINISH
- ▁HORRID
- ▁KEATS
- ▁PROVIDENCE
- ▁BEHALF
- ▁CHARLEY
- ▁DRAKE
- ▁LAUNCH
- ▁SALOON
- ▁GIGANT
- ▁DISPUTE
- ▁HYSTERI
- ▁DEFENCE
- ▁SCREEN
- ▁VAULT
- ▁NINTH
- ▁HARBOR
- ▁FLANK
- ▁SPECK
- ▁UPRIGHT
- ▁KEMP
- ▁CANADA
- ▁STALK
- ▁OWL
- ▁BRUTE
- ▁FERRIS
- ▁DECREE
- ▁HABITUAL
- ▁BRISK
- ▁INSPIRE
- ▁HUSH
- ▁CROUCH
- ▁FRIDAY
- ▁MOUNTAINEER
- ▁HISTORIC
- ▁BATES
- ▁RUSK
- ▁SEMI
- DICTION
- ▁BUSI
- ▁REMOV
- MMI
- ▁SUFFIC
- ▁FLEE
- ▁LOUIS
- NLEA
- ▁IMPORT
- OLOGY
- ▁CLERGY
- ▁ADVERTISEMENT
- ▁BENEVOLEN
- ▁BORODINO
- ▁CATHOLIC
- ▁COMMERCIAL
- ▁CONJECTURE
- ▁CURTAIN
- ▁CUTHBERT
- ▁DEMOCRACY
- ▁GUARANTEE
- ▁HYPNOSIS
- ▁INDEFINITE
- ▁INVESTIGATION
- ▁IRREGULAR
- ▁KOYO
- ▁MERRIWIG
- ▁MIRANDA
- ▁NICHOLL
- ▁ONLOOKER
- ▁PERSECUT
- ▁RECOGNITION
- ▁REJOICE
- ▁REMEMBRANCE
- ▁REVELATION
- ▁SCOLD
- ▁SENIOR
- ▁SQUIRREL
- ▁SYMPATHETIC
- ▁TEMPEST
- ▁TREACHER
- ▁UNDERNEATH
- ▁UNEASINESS
- ▁UNNECESSARY
- ▁UPSTAIRS
- ▁VEXATION
- ▁ACCESS
- ▁CHEAP
- ▁ESTIMATE
- ▁HAZARD
- ▁HORSEBACK
- ▁PLUNDER
- ▁RASCAL
- ▁ROSTOV
- ▁ACCUR
- ▁GRAVITY
- ▁SITUATED
- ▁INVARIABL
- ▁PLENTIFUL
- ▁SPENCER
- ▁WALLACE
- ▁POLICY
- ▁WARRANT
- ▁ENVY
- ▁LAMB
- ▁EXTRACT
- ▁CORRAL
- ▁PANEL
- ▁LINK
- ▁LILIES
- ▁BECKON
- ▁SENOR
- ▁BORG
- ▁DEBATE
- ▁STEER
- COGNI
- COMB
- ▁SETTL
- ▁VENERA
- ▁FEATURE
- ▁TERRIBL
- CAPABLE
- OLOGICAL
- ▁INCESSANT
- ▁RESOLUTE
- SHAUGHNESSY
- ▁ABOLITION
- ▁ASSASSIN
- ▁BEHAVIOUR
- ▁BLUNT
- ▁COMMERCE
- ▁CONSTANTINOPLE
- ▁CRICKET
- ▁DISCIPLINE
- ▁DROUET
- ▁DWARF
- ▁INJUSTICE
- ▁LUXURY
- ▁MANUSCRIPT
- ▁MISUNDERSTAND
- ▁POLITICIAN
- ▁REDOUBT
- ▁SALVATION
- ▁SERMON
- ▁STRUGGLING
- ▁SURPRISING
- ▁TRIGGER
- ▁TUESDAY
- ▁TWILIGHT
- ▁UNDOUBTEDLY
- ▁VEGETABLE
- ▁VULGAR
- ▁WAISTCOAT
- ▁WRINKLE
- ▁ALEXANDER
- ▁CEILING
- ▁ECONOMIC
- ▁EVERLASTING
- ▁INFLICT
- ▁LEVISON
- ▁LOBSTER
- ▁OVERFLOW
- ▁SNATCH
- ▁TRAGEDY
- ▁DEASEY
- ▁ENLIGHTEN
- ▁FRIGATE
- ▁INSPECT
- ▁MARVELLOUS
- ▁ATLANTIC
- ▁LUFTON
- ▁BLADE
- ▁CRASH
- ▁SLAUGHTER
- ▁ANNUAL
- ▁CONFERENCE
- ▁TWIG
- ▁REASSUR
- ▁UNIQUE
- ▁WRATH
- ▁CRADLE
- ▁HULLO
- ▁LIQUID
- ▁MIRTH
- ▁EXPERT
- ▁HARVEY
- ▁RESTORATION
- ▁PRETTI
- ▁APOLOGY
- ▁SLAIN
- ▁BARBER
- ▁UPROAR
- ▁SCANT
- ▁BADGER
- ▁GROCER
- ▁ACRES
- ▁BRIDLE
- ▁SPECIFI
- ▁TANGLE
- ▁FERTIL
- ▁PATRON
- WIXT
- LAMOUR
- ▁DARN
- ▁POPE
- ▁PERCEIV
- ▁CONCLUDE
- ▁SIMPL
- ▁GUILT
- ▁CARRIE
- EFFICIENT
- SGIVING
- ▁APPOINTMENT
- ▁APPRECIATION
- ▁CARTRIDGE
- ▁CHALLENGE
- ▁CRAYFISH
- ▁CRIMSON
- ▁CUCUMETTO
- ▁ENERGETIC
- ▁EPOCH
- ▁EXAMINING
- ▁EXTENSIVE
- ▁EXTINGUISH
- ▁GLOODY
- ▁INSIGNIFICANT
- ▁LANDLORD
- ▁LANGUID
- ▁LEGISLATURE
- ▁MAJESTIC
- ▁PACIFIC
- ▁PASTRINI
- ▁PHRONSIE
- ▁RECONCIL
- ▁SIMULTANEOUS
- ▁SKELETON
- ▁SKETCH
- ▁TRANSFORM
- ▁UNJUST
- ▁VEXED
- ▁ASYLUM
- ▁CLUSTER
- ▁ERRAND
- ▁EXPEND
- ▁NEGATIVE
- ▁NORHALA
- ▁SCANDAL
- ▁STIMULAT
- ▁SWEAT
- ▁COMPOUND
- ▁DECEMBER
- ▁EXPAND
- ▁PROLONG
- ▁PURITAN
- ▁CONQUEST
- ▁MAGUA
- ▁SANCHO
- ▁TRENCH
- ▁ENTITLE
- ▁PEPPER
- ▁DISASTER
- ▁REGAIN
- ▁SHREWD
- ▁SULLEN
- ▁CLAVIER
- ▁COLOSS
- ▁SHILLING
- ▁ETHEL
- ▁MYSTERIES
- ▁BULK
- ▁GRANDEUR
- ▁AGNES
- ▁CONVERT
- ▁WRIST
- ▁GLID
- ▁TERRACE
- ▁SONYA
- ▁DANTES
- ▁MOULD
- ▁MAGNET
- ▁PLOT
- RANK
- ▁CAVIT
- ▁SUBSID
- ▁SLAP
- TURNED
- ▁THREAT
- BREAK
- ▁ANCESTORS
- ▁ANTICIPATED
- ▁APPLAUSE
- ▁ASSAULT
- ▁ATTORNEY
- ▁AUTOMATIC
- ▁CARAVAN
- ▁CATASTROPHE
- ▁CAVALCANTI
- ▁CROMWELL
- ▁ENVOY
- ▁EXHAUSTION
- ▁FIEND
- ▁GENEROSITY
- ▁GIMBLET
- ▁HARDQUANONNE
- ▁HOUARN
- ▁INJURY
- ▁MACKINSON
- ▁OGLETHORPE
- ▁PETTICOAT
- ▁RASPBERR
- ▁REHNHJELM
- ▁REJOICING
- ▁REMNANT
- ▁SCOTLAND
- ▁SHRINK
- ▁STANDPOINT
- ▁TESTIMONY
- ▁THEREAFTER
- ▁THIRTIETH
- ▁TWENTIETH
- ▁TYRANT
- ▁VENTNOR
- ▁VETERAN
- ▁WHITTAKER
- ▁ZVERKOV
- ▁ARCHITECTUR
- ▁BLUNDER
- ▁DENSHER
- ▁FORTNIGHT
- ▁JUDITH
- ▁MARIANNE
- ▁MEMORABLE
- ▁REFINED
- ▁REVOLV
- ▁UNDERTAKING
- ▁CLUMP
- ▁GRUMBLE
- ▁SYMPATHI
- ▁TICKET
- ▁TWITCH
- ▁EDITION
- ▁FALANDER
- ▁CARTHAGE
- ▁ORLEANS
- ▁POSSUM
- ▁SWITCH
- ▁CLUNG
- ▁CARDINAL
- ▁GNAW
- ▁LOCATED
- ▁HARROW
- ▁RASH
- ▁SIEGE
- ▁LOAF
- ▁BRUISE
- ▁REGULAT
- ▁RESORT
- ▁SARAH
- ▁LEVIN
- ▁NAVY
- ▁MOOSE
- ▁STOOL
- ▁CHANCELLOR
- ▁INGENIOUS
- ▁CHALK
- ▁PRETENCE
- ▁REPAY
- ▁ROAST
- ▁PLUTO
- ▁BAFFL
- ▁STUMBL
- ▁SPHERE
- ▁PLEDGE
- ▁SPRAWL
- ▁WRAP
- ▁FRINGE
- ▁DREAR
- ARRINGTON
- ▁FEDERA
- KEEPER
- ▁PHYSIC
- ▁ADVENT
- HUMAN
- OLOGIST
- ▁ALEXANDR
- ▁APPARITION
- ▁BARTHOLEMY
- ▁CITOYEN
- ▁CLIMATE
- ▁CONTEMPORAR
- ▁DESOLATE
- ▁DISCONTENT
- ▁ELEPHANT
- ▁FERNANDO
- ▁FERRALTI
- ▁FOLIAGE
- ▁FUGITIVE
- ▁GAMBLING
- ▁INVOLUNTARILY
- ▁LABYRINTH
- ▁LEGITIMATE
- ▁MILLIONAIRE
- ▁PERCEPTION
- ▁PROPRIETY
- ▁REBELLION
- ▁REFRAIN
- ▁RUGGLES
- ▁SCRIPTURE
- ▁SPLENDOR
- ▁SQUADRON
- ▁STRICKEN
- ▁SWARM
- ▁THEODORA
- ▁TOMORROW
- ▁VELVET
- ▁WOLVES
- ▁DISREGARD
- ▁GLIMMER
- ▁SHROUD
- ▁TWINKLING
- ▁UNEQUAL
- ▁CHANNING
- ▁CLUMS
- ▁ENIGMA
- ▁NAVIGAT
- ▁TARKAS
- ▁TEMPERATURE
- ▁DIVISION
- ▁GRATIFICATION
- ▁MONUMENT
- ▁SQUEAK
- ▁KAVIN
- ▁INTERPOSE
- ▁THORNTON
- ▁SOLUTION
- ▁STREAK
- ▁SHRILL
- ▁APRON
- ▁PITEOUS
- ▁HAUGHTY
- ▁RECKLESS
- ▁EMPTI
- ▁WADMAN
- ▁BONNET
- ▁MARTHA
- ▁DUMB
- ▁SHATTER
- ▁ACUTE
- ▁BRINK
- ▁CAPRICE
- ▁HURON
- ▁INFERN
- ▁FOWL
- ▁ENRAGE
- ▁ADORN
- ▁CRUIS
- ▁PROBABILIT
- ▁EXPIR
- ▁IMPETU
- ▁OVERHEAR
- BURTON
- ▁TRANSLAT
- ▁ENGAGE
- ▁CONVINCE
- ▁ABNORMAL
- ▁GESTICULAT
- ▁ABOMINABL
- ▁ADVERSARY
- ▁ADVERTISER
- ▁ADVERTISING
- ▁ANNIHILAT
- ▁ARTILLERY
- ▁CATHEDRAL
- ▁COMPETITOR
- ▁COULSON
- ▁CREVICE
- ▁CUSHION
- ▁DEBRAY
- ▁DEJECT
- ▁DIETRICH
- ▁DISADVANTAGE
- ▁ELLISON
- ▁EMPHASIS
- ▁EXCURSION
- ▁FANTASTIC
- ▁HYPOTHES
- ▁INCONVENIENCE
- ▁INDESCRIBABLE
- ▁INDUSTRI
- ▁INVALID
- ▁MERCILESS
- ▁MESOPOTAMIA
- ▁MOSQUITO
- ▁NARRATIVE
- ▁NOWADAYS
- ▁OPPORTUNITIES
- ▁PROMISING
- ▁RECTANGLE
- ▁REMONSTRANCE
- ▁RESTAURANT
- ▁RIBBON
- ▁SCIENTIST
- ▁SHALMANESER
- ▁SKULL
- ▁SPRUCE
- ▁SUBSTANTIAL
- ▁SYMBOL
- ▁TEAPOT
- ▁TERRITORY
- ▁TRAFFIC
- ▁TREASON
- ▁TRUMPET
- ▁TYRANN
- ▁UNANIMOUS
- ▁UNAWARE
- ▁VICINITY
- ▁WREATH
- ▁ZADIG
- ▁CHATEAU
- ▁CONFRONT
- ▁DUCHESS
- ▁EMBODI
- ▁FEMININ
- ▁FURNACE
- ▁MONTONI
- ▁RENOWN
- ▁SMASH
- ▁HARVARD
- ▁NEWBERRY
- ▁PERFUME
- ▁SIGNATURE
- ▁SPLASH
- ▁SUPPOSITION
- ▁HARBOUR
- ▁ASSURANCE
- ▁BRISTOL
- ▁BUCKINGHAM
- ▁DUDLEY
- ▁INTENSITY
- ▁CHOPIN
- ▁ENLIST
- Q
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
win_length: 400
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 5
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 256
attention_heads: 4
linear_units: 1024
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details>
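For quick experimentation outside the recipe scripts, a checkpoint exported with this configuration can typically be loaded through ESPnet's Python inference API. The snippet below is a minimal sketch rather than part of this card: `MODEL_TAG` is a placeholder for this repository's id, and it assumes `espnet_model_zoo` and `soundfile` are installed.
```python
# Minimal inference sketch (not from this card); assumes espnet_model_zoo and soundfile are installed.
import soundfile
from espnet2.bin.asr_inference import Speech2Text

MODEL_TAG = "<this-repository-id>"  # placeholder: replace with this model's Hugging Face id
speech2text = Speech2Text.from_pretrained(MODEL_TAG)

speech, rate = soundfile.read("speech.wav")  # frontend_conf above expects 16 kHz audio
nbests = speech2text(speech)
text, *_ = nbests[0]  # best hypothesis
print(text)
```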
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ali2066/finetuned_token_2e-05_16_02_2022-01_30_30
|
ali2066
| 2022-02-16T00:32:55Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: finetuned_token_2e-05_16_02_2022-01_30_30
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_token_2e-05_16_02_2022-01_30_30
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1748
- Precision: 0.3384
- Recall: 0.3492
- F1: 0.3437
- Accuracy: 0.9442
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 38 | 0.3180 | 0.0985 | 0.1648 | 0.1233 | 0.8643 |
| No log | 2.0 | 76 | 0.2667 | 0.1962 | 0.2698 | 0.2272 | 0.8926 |
| No log | 3.0 | 114 | 0.2374 | 0.2268 | 0.3005 | 0.2585 | 0.9062 |
| No log | 4.0 | 152 | 0.2305 | 0.2248 | 0.3247 | 0.2657 | 0.9099 |
| No log | 5.0 | 190 | 0.2289 | 0.2322 | 0.3166 | 0.2679 | 0.9102 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
|
ncats/EpiExtract4GARD-v2
|
ncats
| 2022-02-16T00:08:16Z | 24 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"token-classification",
"ncats",
"en",
"dataset:ncats/EpiSet4NER",
"license:other",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
widget:
- text: "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births."
example_title: "Named Entity Recognition Ex. 1"
- text: "A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births)"
example_title: "Named Entity Recognition Ex. 2"
- text: "A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence."
example_title: "Named Entity Recognition Ex. 3"
tags:
- token-classification
- ncats
model-index:
- name: EpiExtract4GARD-v2
results:
- task:
name: NER
type: token-classification
metrics:
- name: Token-Level Precision
type: precision
value:
- name: Token-Level Recall
type: recall
value:
- name: Token-Level F1 Score
type: f_score
value:
- name: Token-Level Precision
type: precision
value:
- name: Token-Level Recall
type: recall
value:
- name: Token-Level F1 Score
type: f_score
value:
datasets:
- ncats/EpiSet4NER
license: other
---
## DOCUMENTATION UPDATES IN PROGRESS
## Model description
**EpiExtract4GARD-v2** is a fine-tuned [BioBERT-base-cased](https://huggingface.co/dmis-lab/biobert-base-cased-v1.1) model that is ready to use for **Named Entity Recognition** of locations (LOC), epidemiologic types (EPI), and epidemiologic rates (STAT). This model was fine-tuned on EpiSet4NER-v2 for epidemiological information from rare disease abstracts. See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. See [EpiExtract4GARD on GitHub](https://github.com/ncats/epi4GARD/tree/master/EpiExtract4GARD#epiextract4gard) for details on the entire pipeline.
#### How to use
You can use this model with the Hosted inference API to the right with this [test sentence](https://pubmed.ncbi.nlm.nih.gov/21659675/): "27 patients have been diagnosed with PKU in Iceland since 1947. Incidence 1972-2008 is 1/8400 living births."
See the code below for use with the Transformers *pipeline* for NER:
~~~
from transformers import pipeline, AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("ncats/EpiExtract4GARD")
tokenizer = AutoTokenizer.from_pretrained("ncats/EpiExtract4GARD")
NER_pipeline = pipeline('ner', model=model, tokenizer=tokenizer,aggregation_strategy='simple')
sample = "The live-birth prevalence of mucopolysaccharidoses in Estonia. Previous studies on the prevalence of mucopolysaccharidoses (MPS) in different populations have shown considerable variations. There are, however, few data with regard to the prevalence of MPSs in Fenno-Ugric populations or in north-eastern Europe, except for a report about Scandinavian countries. A retrospective epidemiological study of MPSs in Estonia was undertaken, and live-birth prevalence of MPS patients born between 1985 and 2006 was estimated. The live-birth prevalence for all MPS subtypes was found to be 4.05 per 100,000 live births, which is consistent with most other European studies. MPS II had the highest calculated incidence, with 2.16 per 100,000 live births (4.2 per 100,000 male live births), forming 53% of all diagnosed MPS cases, and was twice as high as in other studied European populations. The second most common subtype was MPS IIIA, with a live-birth prevalence of 1.62 in 100,000 live births. With 0.27 out of 100,000 live births, MPS VI had the third-highest live-birth prevalence. No cases of MPS I were diagnosed in Estonia, making the prevalence of MPS I in Estonia much lower than in other European populations. MPSs are the third most frequent inborn error of metabolism in Estonia after phenylketonuria and galactosemia."
sample2 = "Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Kuwait is a small Arabian Gulf country with a high rate of consanguinity and where a national newborn screening program was expanded in October 2014 to include a wide range of endocrine and metabolic disorders. A retrospective study conducted between January 2015 and December 2020 revealed a total of 304,086 newborns have been screened in Kuwait. Six newborns were diagnosed with classic homocystinuria with an incidence of 1:50,000, which is not as high as in Qatar but higher than the global incidence. Molecular testing for five of them has revealed three previously reported pathogenic variants in the <i>CBS</i> gene, c.969G>A, p.(Trp323Ter); c.982G>A, p.(Asp328Asn); and the Qatari founder variant c.1006C>T, p.(Arg336Cys). This is the first study to review the screening of newborns in Kuwait for classic homocystinuria, starting with the detection of elevated blood methionine and providing a follow-up strategy for positive results, including plasma total homocysteine and amino acid analyses. Further, we have demonstrated an increase in the specificity of the current newborn screening test for classic homocystinuria by including the methionine to phenylalanine ratio along with the elevated methionine blood levels in first-tier testing. Here, we provide evidence that the newborn screening in Kuwait has led to the early detection of classic homocystinuria cases and enabled the affected individuals to lead active and productive lives."
#Sample 1 is from: Krabbi K, Joost K, Zordania R, Talvik I, Rein R, Huijmans JG, Verheijen FV, Õunap K. The live-birth prevalence of mucopolysaccharidoses in Estonia. Genet Test Mol Biomarkers. 2012 Aug;16(8):846-9. doi: 10.1089/gtmb.2011.0307. Epub 2012 Apr 5. PMID: 22480138; PMCID: PMC3422553.
#Sample 2 is from: Alsharhan H, Ahmed AA, Ali NM, Alahmad A, Albash B, Elshafie RM, Alkanderi S, Elkazzaz UM, Cyril PX, Abdelrahman RM, Elmonairy AA, Ibrahim SM, Elfeky YME, Sadik DI, Al-Enezi SD, Salloum AM, Girish Y, Al-Ali M, Ramadan DG, Alsafi R, Al-Rushood M, Bastaki L. Early Diagnosis of Classic Homocystinuria in Kuwait through Newborn Screening: A 6-Year Experience. Int J Neonatal Screen. 2021 Aug 17;7(3):56. doi: 10.3390/ijns7030056. PMID: 34449519; PMCID: PMC8395821.
NER_pipeline(sample)
NER_pipeline(sample2)
~~~
Or if you download [*classify_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/classify_abs.py), [*extract_abs.py*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/extract_abs.py), and [*gard-id-name-synonyms.json*](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/gard-id-name-synonyms.json) from GitHub, then you can test with this [*additional* code](https://github.com/ncats/epi4GARD/blob/master/EpiExtract4GARD/Case%20Study.ipynb):
~~~
import pandas as pd
import extract_abs
import classify_abs
pd.set_option('display.max_colwidth', None)
NER_pipeline = extract_abs.init_NER_pipeline()
GARD_dict, max_length = extract_abs.load_GARD_diseases()
nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer = classify_abs.init_classify_model()
def search(term,num_results = 50):
return extract_abs.search_term_extraction(term, num_results, NER_pipeline, GARD_dict, max_length,nlp, nlpSci, nlpSci2, classify_model, classify_tokenizer)
a = search(7058)
a
b = search('Santos Mateus Leal syndrome')
b
c = search('Fellman syndrome')
c
d = search('GARD:0009941')
d
e = search('Homocystinuria')
e
~~~
#### Limitations and bias
## Training data
It was trained on [EpiSet4NER](https://huggingface.co/datasets/ncats/EpiSet4NER). See dataset documentation for details on the weakly supervised teaching methods and dataset biases and limitations. The training dataset distinguishes between the beginning and continuation of an entity so that if there are back-to-back entities of the same type, the model can output where the second entity begins. As in the dataset, each token will be classified as one of the following classes:
Abbreviation|Description
---------|--------------
O |Outside of a named entity
B-LOC | Beginning of a location
I-LOC | Inside of a location
B-EPI | Beginning of an epidemiologic type (e.g. "incidence", "prevalence", "occurrence")
I-EPI | Epidemiologic type that is not the beginning token.
B-STAT | Beginning of an epidemiologic rate
I-STAT | Inside of an epidemiologic rate
+More | Description pending
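As a purely hypothetical illustration of this labeling scheme (the tokens and labels below are invented for this example and are not model output):
~~~
# Hypothetical illustration of the BIO scheme above; not actual model output.
tokens = ["Incidence", "in", "Iceland", "is", "1/8400", "living", "births"]
labels = ["B-EPI",     "O",  "B-LOC",   "O",  "B-STAT", "I-STAT", "I-STAT"]
for token, label in zip(tokens, labels):
    print(f"{token}\t{label}")
~~~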
### EpiSet Statistics
Beyond any limitations inherited from the EpiSet4NER dataset, this model is limited in numeracy because BERT-based models rely on subword embeddings; numeracy is crucial for epidemiologic rate identification, so this limits the entity-level results. Recent techniques in numeracy could be used to improve the performance of the model without improving the underlying dataset.
## Training procedure
This model was trained on an [AWS EC2 p3.2xlarge](https://aws.amazon.com/ec2/instance-types/) instance, which utilized a single Tesla V100 GPU, with these hyperparameters:
4 epochs of training (AdamW, weight decay = 0.05) with a batch size of 16 and a maximum sequence length of 192. The model was fed one sentence at a time.
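The training script itself is not reproduced in this card; as a rough, hypothetical sketch, the stated hyperparameters map onto Hugging Face `TrainingArguments` roughly as follows (values not stated above, such as the learning rate, are omitted rather than guessed):
~~~
# Rough sketch of the stated hyperparameters; not the actual training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="epiextract4gard-v2",  # placeholder path
    num_train_epochs=4,               # "4 epochs of training"
    per_device_train_batch_size=16,   # "batch size of 16"
    weight_decay=0.05,                # "AdamW weight decay = 0.05"
)
# The maximum sequence length of 192 is applied at tokenization time, e.g.
# tokenizer(sentence, truncation=True, max_length=192)
~~~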
<!--- Full config [here](https://wandb.ai/wzkariampuzha/huggingface/runs/353prhts/files/config.yaml). --->
<!--- THIS IS NOT THE UPDATED RESULTS --->
<!--- ## Hold-out validation results --->
<!--- metric| entity-level result --->
<!--- -|- --->
<!--- f1 | 83.8 --->
<!--- precision | 83.2 --->
<!--- recall | 84.5 --->
<!--- ## Test results --->
<!--- | Dataset for Model Training | Evaluation Level | Entity | Precision | Recall | F1 | --->
<!--- |:--------------------------:|:----------------:|:------------------:|:---------:|:------:|:-----:| --->
<!--- | EpiSet | Entity-Level | Overall | 0.556 | 0.662 | 0.605 | --->
<!--- | | | Location | 0.661 | 0.696 | 0.678 | --->
<!--- | | | Epidemiologic Type | 0.854 | 0.911 | 0.882 | --->
<!--- | | | Epidemiologic Rate | 0.143 | 0.218 | 0.173 | --->
<!--- | | Token-Level | Overall | 0.811 | 0.713 | 0.759 | --->
<!--- | | | Location | 0.949 | 0.742 | 0.833 | --->
<!--- | | | Epidemiologic Type | 0.9 | 0.917 | 0.908 | --->
<!--- | | | Epidemiologic Rate | 0.724 | 0.636 | 0.677 | --->
Thanks to [@William Kariampuzha](https://github.com/wzkariampuzha) at Axle Informatics/NCATS for contributing this model.
|
vxvxx/t5-small-finetuned-no_paragraph-to-paragraph
|
vxvxx
| 2022-02-15T23:01:34Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- bleu
model-index:
- name: t5-small-finetuned-no_paragraph-to-paragraph
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-no_paragraph-to-paragraph
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0713
- Bleu: 0.0
- Gen Len: 19.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:----:|:-------:|
| 0.767 | 1.0 | 576 | 0.0713 | 0.0 | 19.0 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingartists/led-zeppelin
|
huggingartists
| 2022-02-15T22:19:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/led-zeppelin",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/led-zeppelin
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/e4763bba12e6411077a3e573cd290da0.433x433x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Led Zeppelin</div>
<a href="https://genius.com/artists/led-zeppelin">
<div style="text-align: center; font-size: 14px;">@led-zeppelin</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Led Zeppelin.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/led-zeppelin).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/led-zeppelin")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/cpexpb1w/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Led Zeppelin's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/bna2epba) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/bna2epba/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/led-zeppelin')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/led-zeppelin")
model = AutoModelWithLMHead.from_pretrained("huggingartists/led-zeppelin")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the artist's lyrics further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
Leostronkest/DialoGPT
|
Leostronkest
| 2022-02-15T21:59:14Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"gpt2",
"text-generation",
"conversational",
"arxiv:1911.00536",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
thumbnail: https://huggingface.co/front/thumbnails/dialogpt.png
tags:
- conversational
license: mit
---
## A State-of-the-Art Large-scale Pretrained Response generation model (DialoGPT)
DialoGPT is a SOTA large-scale pretrained dialogue response generation model for multiturn conversations.
The [human evaluation results](https://github.com/dreasysnail/Dialogpt_dev#human-evaluation) indicate that the response generated from DialoGPT is comparable to human response quality under a single-turn conversation Turing test.
The model is trained on 147M multi-turn dialogues from Reddit discussion threads.
* Multi-turn generation examples from an interactive environment:
|Role | Response |
|---------|--------|
|User | Does money buy happiness? |
| Bot | Depends how much money you spend on it .|
|User | What is the best way to buy happiness ? |
| Bot | You just have to be a millionaire by your early 20s, then you can be happy . |
|User |This is so difficult ! |
| Bot | You have no idea how hard it is to be a millionaire and happy . There is a reason the rich have a lot of money |
Please find information about preprocessing, training, and full details of DialoGPT in the [original DialoGPT repository](https://github.com/microsoft/DialoGPT)
ArXiv paper: [https://arxiv.org/abs/1911.00536](https://arxiv.org/abs/1911.00536)
### How to use
Now we are ready to try out how the model works as a chatting partner!
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
tokenizer = AutoTokenizer.from_pretrained("microsoft/DialoGPT-large")
model = AutoModelForCausalLM.from_pretrained("microsoft/DialoGPT-large")
# Let's chat for 5 lines
for step in range(5):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    # generate a response while limiting the total chat history to 1000 tokens
chat_history_ids = model.generate(bot_input_ids, max_length=1000, pad_token_id=tokenizer.eos_token_id)
    # pretty print the last output tokens from the bot
print("DialoGPT: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
|
Sourabh714/distilbert-base-uncased-finetuned-squad
|
Sourabh714
| 2022-02-15T20:47:37Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1573
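For reference, a SQuAD-style checkpoint such as this one can be queried with the standard `pipeline` API. The sketch below is hypothetical: the repo id is taken from this entry's metadata, and the question and context are invented for illustration.
```python
from transformers import pipeline

# Hypothetical usage sketch; question and context are invented for illustration.
qa = pipeline("question-answering", model="Sourabh714/distilbert-base-uncased-finetuned-squad")
result = qa(
    question="Which dataset was used for fine-tuning?",
    context="This checkpoint was produced by fine-tuning DistilBERT on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```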
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2188 | 1.0 | 5533 | 1.1708 |
| 0.9519 | 2.0 | 11066 | 1.1058 |
| 0.7576 | 3.0 | 16599 | 1.1573 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
espnet/roshansh_how2_asr_raw_ft_sum_valid.acc
|
espnet
| 2022-02-15T19:51:13Z | 0 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-summarization",
"en",
"dataset:how2",
"arxiv:2110.06263",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- espnet
- audio
- automatic-speech-summarization
language: en
datasets:
- how2
license: cc-by-4.0
---
## ESPnet2 ASR model
### `espnet/roshansh_how2_asr_raw_ft_sum_valid.acc`
This model was trained by roshansh-cmu using how2 recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout e6f42a9783a5d9eba0687c19417f933e890722d7
pip install -e .
cd egs2/how2/sum1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/roshansh_how2_asr_raw_ft_sum_valid.acc
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Feb 7 15:24:21 EST 2022`
- python version: `3.8.12 (default, Oct 12 2021, 13:49:34) [GCC 7.5.0]`
- espnet version: `espnet 0.10.6a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `04561cdf3b6c3bc1d51edb04c93b953759ef551d`
- Commit date: `Mon Feb 7 09:06:12 2022 -0500`
## asr_raw_ft_sum
|dataset|Snt|Wrd|ROUGE-1|ROUGE-2|ROUGE-L|METEOR|BERTScore|
|---|---|---|---|---|---|---|---|
|decode_sum_asr_model_valid.acc.best/dev5_test_sum|2127|69795|60.72|44.7|56.1|29.36|91.53|
## ASR config
<details><summary>expand</summary>
```
config: conf/train_asr_conformer_vid_lf.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_raw_ft_sum
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 8
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 45875
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: true
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 10
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: 5000
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- exp/asr_raw_utt_conformer/valid.acc.ave_10best.pth:::ctc
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 60000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_vid_sum/train/speech_shape
- exp/asr_stats_raw_vid_sum/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_vid_sum/valid/speech_shape
- exp/asr_stats_raw_vid_sum/valid/text_shape.bpe
batch_type: length
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_2000h_sum_trim/wav.scp
- speech
- sound
- - dump/raw/tr_2000h_sum_trim/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/cv05_sum_trim/wav.scp
- speech
- sound
- - dump/raw/cv05_sum_trim/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.001
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.5
patience: 1
token_list:
- <blank>
- <unk>
- '[hes]'
- S
- ▁THE
- ▁TO
- ''''
- ▁AND
- ▁YOU
- ▁A
- ▁IT
- T
- ▁THAT
- ▁OF
- ▁I
- ▁IS
- RE
- ▁IN
- ING
- ▁WE
- M
- ▁GOING
- ▁SO
- ▁THIS
- ▁YOUR
- ▁ON
- E
- D
- ▁BE
- ▁CAN
- N
- Y
- O
- ER
- ▁HAVE
- ▁JUST
- ▁FOR
- ▁WITH
- ▁DO
- ED
- ▁ARE
- ▁WANT
- ▁UP
- R
- LL
- P
- ▁
- L
- B
- ▁IF
- C
- ▁ONE
- ▁S
- ▁OR
- A
- ▁GO
- ▁LIKE
- ▁NOW
- ▁HERE
- VE
- LE
- U
- ▁GET
- ▁WHAT
- ▁OUT
- IN
- W
- ▁C
- ▁LITTLE
- ▁THERE
- LY
- ▁AS
- ▁MAKE
- I
- ▁THEY
- ▁MY
- K
- ▁THEN
- ▁BUT
- AL
- G
- ▁ALL
- OR
- ▁BACK
- ▁NOT
- ▁ABOUT
- ▁RIGHT
- ▁OUR
- EN
- ▁SOME
- ▁DOWN
- F
- ▁WHEN
- CH
- ▁F
- ▁HOW
- AR
- ▁WILL
- ▁RE
- CK
- ▁G
- ES
- CE
- ▁TAKE
- ▁AT
- ▁FROM
- ▁WAY
- TER
- ▁SEE
- RA
- ▁USE
- ▁REALLY
- RI
- TH
- ▁TWO
- ▁ME
- ▁VERY
- ▁E
- ▁B
- AT
- ▁THEM
- ▁DON
- ▁AN
- ▁BECAUSE
- ▁MORE
- RO
- H
- 'ON'
- LI
- ▁PUT
- ▁ST
- IL
- ▁BIT
- ▁START
- ▁NEED
- ▁INTO
- UR
- ▁TIME
- ▁OVER
- ▁W
- ▁DE
- ▁LOOK
- ▁THESE
- ▁LET
- ▁GOOD
- ▁ALSO
- AN
- ▁OFF
- ▁HE
- ▁KIND
- ▁SIDE
- ▁CO
- ▁SURE
- ▁AGAIN
- ▁MA
- ▁KNOW
- IT
- ▁WOULD
- IC
- ▁OTHER
- LA
- ▁P
- ▁WHICH
- '-'
- IR
- ▁LA
- ▁HAND
- EL
- ▁LOT
- ▁WHERE
- ▁THREE
- ▁PA
- ION
- LO
- ▁KEEP
- ▁SHOW
- ▁THING
- ▁FIRST
- TE
- ENT
- ATE
- ▁COME
- AD
- ▁GOT
- NG
- ▁NICE
- ▁T
- ET
- ▁MO
- ▁ANY
- ▁ACTUALLY
- ▁DIFFERENT
- ▁SE
- GE
- ▁WORK
- ▁THROUGH
- ▁O
- KE
- V
- ▁AROUND
- ▁BA
- PE
- ▁HI
- ▁BY
- SH
- ATION
- ▁SU
- ▁CA
- ▁D
- ▁LO
- ▁HAS
- ▁LI
- ▁PLAY
- Z
- ▁ADD
- ▁RO
- ▁TA
- AS
- ▁FOUR
- ▁CON
- ▁THOSE
- MP
- NE
- ▁SP
- UT
- ▁GIVE
- ▁WELL
- ▁BALL
- TING
- RY
- X
- ▁HO
- INE
- IVE
- ▁NEXT
- ▁PO
- ▁STEP
- ▁EVEN
- TION
- ▁MI
- MENT
- ▁CUT
- ▁BO
- ▁LINE
- ▁MUCH
- ▁THINGS
- ▁TALK
- UN
- ▁PART
- ▁WAS
- ▁FA
- ▁SOMETHING
- PP
- ANCE
- ND
- DI
- ▁RA
- AGE
- ▁SAME
- ▁EXPERT
- ▁DOING
- ▁LEFT
- IST
- ▁DI
- ▁NO
- RU
- ME
- TA
- UL
- TI
- ▁VILLAGE
- DE
- ERS
- ▁PEOPLE
- ▁TURN
- VER
- ▁FL
- ▁LEG
- ▁ONCE
- ▁COLOR
- ▁PULL
- ▁USING
- VI
- ▁WATER
- ▁SHE
- ▁TOP
- ▁OKAY
- ▁ANOTHER
- ▁THEIR
- ▁SAY
- URE
- ▁HA
- ▁IMPORTANT
- ▁PIECE
- ▁FOOT
- ▁TRA
- ▁SC
- ▁BODY
- ▁SET
- ▁POINT
- ▁HELP
- ▁TODAY
- ▁BRING
- ▁V
- ▁END
- MA
- ▁CH
- ▁MOST
- ▁K
- ▁AHEAD
- ▁HER
- OL
- ▁SA
- AM
- IES
- ▁THINK
- ▁NAME
- ▁TRY
- ▁MOVE
- ONE
- ▁LE
- ▁TOO
- TO
- UM
- ▁PLACE
- ▁COULD
- ▁FIND
- ▁FIVE
- ▁ALWAYS
- ID
- TY
- NT
- ▁FEEL
- ▁HEAD
- ▁THAN
- NA
- ▁EX
- ▁EYE
- ITY
- CI
- OP
- ▁SHOULD
- ▁MIGHT
- ▁HOLD
- ▁CAR
- AND
- ▁GREAT
- ▁RI
- ▁BU
- ▁HIGH
- ▁OPEN
- ▁BEFORE
- US
- ▁FRONT
- ▁LONG
- ▁TOGETHER
- NI
- ▁HAIR
- ▁LIGHT
- ▁TEN
- ▁HIT
- EST
- OUS
- ▁PRETTY
- ▁TYPE
- IP
- CO
- ▁FINGER
- ▁JO
- ▁UN
- ▁PRO
- ▁STRAIGHT
- ▁BEHALF
- ▁TI
- ▁SIX
- ▁CLEAN
- ▁DIS
- ▁DA
- ▁POSITION
- IGHT
- ACT
- ▁CHA
- ▁PE
- GG
- AP
- ▁MEAN
- ▁COMP
- FI
- ▁KNEE
- ▁CALLED
- ▁HANDS
- ▁PRE
- ▁FORWARD
- ▁AREA
- ANT
- ▁TE
- ▁WA
- ▁AFTER
- ▁SMALL
- ▁THROW
- ▁EVERY
- ▁SHOULDER
- NC
- PER
- ▁MAYBE
- ▁ABLE
- ▁BASICALLY
- ▁AM
- ▁READY
- ▁BOTTOM
- IE
- ▁HALF
- FF
- ▁BIG
- ▁EACH
- ▁PUSH
- ▁EIGHT
- ▁NEW
- ▁DONE
- ▁MAY
- ▁GETTING
- HO
- ▁HIS
- ▁HARD
- ▁CLOSE
- ALLY
- ▁SECOND
- ▁FEET
- ICAL
- ▁JA
- ▁PAINT
- ▁LEARN
- ▁SOUND
- HE
- ▁ROLL
- ▁ONLY
- ▁DOESN
- WA
- ▁DRAW
- ▁VI
- ▁DID
- ▁SHA
- ▁CENTER
- CU
- ▁CLIP
- ▁PI
- ▁CARD
- ▁INSIDE
- ▁PERSON
- ▁STILL
- ▁MAKING
- 'NO'
- ▁EVERYTHING
- .
- ▁FUN
- ARD
- ▁REMEMBER
- ▁AWAY
- ATED
- COM
- ▁SEVEN
- ▁BEEN
- ▁MANY
- ABLE
- ▁DAY
- ▁SIT
- IZE
- ▁REAL
- ▁HIP
- ▁BASIC
- ▁KICK
- ▁TU
- ATING
- ▁STICK
- ▁FLAT
- ▁WHO
- END
- HA
- ▁EXP
- ▁PICK
- ▁MIX
- ▁TRI
- ▁BI
- ▁WHOLE
- ▁STRETCH
- ▁BOTH
- ▁PROBABLY
- CA
- ▁HIM
- ▁STRING
- ▁EDGE
- ▁BASE
- ▁COMING
- UGH
- ▁LIFT
- ▁STA
- ▁WORKING
- ▁MU
- ▁QUICK
- ▁SOMETIMES
- ▁HAPPEN
- ▁YOURSELF
- ▁TALKING
- ▁DR
- ▁TELL
- ▁ANYTHING
- ▁BRA
- ▁LOOKING
- ▁SLOW
- ▁NE
- ▁STAND
- NER
- ▁COMES
- ▁GOES
- ISE
- BE
- ▁USED
- ▁UNDER
- ▁BETWEEN
- ▁HU
- ▁CREATE
- ▁NA
- ▁USUALLY
- ▁ARM
- ▁DRY
- ▁RUN
- LING
- ▁BRUSH
- ▁COVER
- ▁HEAR
- ▁DOES
- ▁STAY
- ▁EN
- ▁FOLD
- ▁CHANGE
- ▁LAST
- ▁EASY
- ▁US
- ▁PER
- ▁FACE
- ▁EAR
- ▁TIGHT
- ▁FE
- ▁PIN
- ▁MAN
- ▁BETTER
- ▁CALL
- ▁PRI
- ▁BEST
- ▁KI
- ▁COUPLE
- ▁WHILE
- ▁SHAPE
- ▁GAME
- IV
- ▁SHOT
- ▁PAPER
- ▁OWN
- ▁ALRIGHT
- ▁HAD
- TIC
- ▁BREATH
- ▁TOOL
- '2'
- ▁ENOUGH
- ▁COURSE
- ▁SKIN
- ▁SPIN
- ▁VA
- ▁ARMS
- ▁TEA
- ▁BREAK
- ▁DOG
- ▁1
- QUE
- ▁DROP
- ▁NUMBER
- IG
- ▁RED
- ▁NOTE
- ▁WEIGHT
- WARD
- ▁PLAYING
- ▁FINISH
- ▁MINUTE
- ▁R
- ▁PRESS
- ▁EITHER
- ▁CHE
- ▁PU
- BER
- ▁FEW
- ▁SIZE
- ▁MADE
- ▁LEAVE
- ▁GA
- ▁ALREADY
- ▁GUY
- ▁FAR
- ▁HOME
- ▁BAR
- UP
- ▁GRAB
- ▁MARK
- ▁WHITE
- ▁PROPER
- ▁CAUSE
- ▁OK
- ▁ART
- HI
- ▁SORT
- ▁EXERCISE
- ▁LOWER
- PORT
- ▁PLANT
- ▁BOARD
- ▁CASE
- ▁YEAR
- CENT
- ▁DU
- ▁CHECK
- ▁WHATEVER
- ▁OIL
- ▁IDEA
- ▁SIMPLE
- ▁PRACTICE
- ▁FAST
- '0'
- ▁CONTROL
- ▁J
- ▁KEY
- ▁MIDDLE
- ▁FULL
- ▁GLASS
- ▁OUTSIDE
- ▁LOW
- ▁REST
- ▁STUFF
- ▁ACT
- ▁UNTIL
- ▁BLACK
- ▁POP
- ▁CLICK
- ▁HOLE
- ▁Z
- ▁COUNT
- ▁POT
- ▁ALLOW
- ▁HAVING
- ▁TRYING
- ▁MUSCLE
- ▁GU
- ▁BOX
- ▁NOTICE
- ▁EXAMPLE
- UND
- ▁ALONG
- FUL
- ISH
- ▁STORE
- ▁LU
- ▁FLOOR
- ▁MOVING
- ▁LARGE
- ▁STOP
- ▁PH
- ▁WALK
- '5'
- ▁QU
- ▁TECHNIQUE
- ▁SOFT
- ▁GROUND
- ▁JUMP
- ▁JU
- ▁FILL
- ▁WHY
- ▁BUY
- ▁GREEN
- ▁WALL
- ▁HEEL
- NESS
- ▁LEVEL
- ▁UNDERNEATH
- ▁PATTERN
- ▁BEHIND
- ▁OLD
- ▁TIP
- ▁COMPLETE
- ▁WON
- ▁TEACH
- ▁FIT
- ▁NECK
- ▁REMOVE
- ▁TRICK
- ▁MOVEMENT
- ▁TOWARDS
- ▁PARTICULAR
- ▁CHI
- ▁EFFECT
- J
- ▁FREE
- ▁ACROSS
- ▁BEND
- ▁SAFE
- ▁SLIDE
- ▁PROBLEM
- ▁BLOCK
- ▁PAN
- ▁NATURAL
- ▁TOUCH
- ▁CHILD
- LINE
- ▁CROSS
- ▁REASON
- '4'
- ▁POWER
- ▁APPLY
- ▁FOLLOW
- ▁DESIGN
- ▁SPACE
- ▁ORDER
- ▁WOOD
- ▁RID
- '3'
- ▁COOK
- ▁BEGIN
- ▁WATCH
- ▁STYLE
- QUA
- ▁PRODUCT
- ▁TAKING
- ▁PUTTING
- ▁EXHALE
- ▁THOUGH
- ▁DEEP
- IAN
- ▁REACH
- ▁FOOD
- ▁ALMOST
- ▁COOL
- ▁SECTION
- ▁SAID
- ▁ANGLE
- ▁MUSIC
- ▁RELAX
- ▁CORNER
- ▁DARK
- ▁CHORD
- ▁ESPECIALLY
- ▁SCALE
- ▁WARM
- ▁WITHOUT
- ▁WHEEL
- ▁SEGMENT
- ▁TABLE
- ▁BOOK
- ▁PASS
- ▁ELBOW
- ▁ROUND
- ▁INHALE
- ▁SMOOTH
- ▁ROOM
- /
- ▁NINE
- ▁SHORT
- ▁MEASURE
- ▁LESS
- ▁TWIST
- ▁BALANCE
- ▁PROCESS
- ▁SWITCH
- ▁GENERAL
- ▁CLAY
- ▁CERTAIN
- ▁NEVER
- ▁BLUE
- ▁CUP
- ▁HOUSE
- ▁EXTRA
- ▁MOTION
- ▁PRESSURE
- ▁FIRE
- ▁SIMPLY
- ▁DOUBLE
- ▁TWENTY
- ▁CATCH
- ▁BECOME
- ▁BUILD
- ▁SPEED
- ▁TRANS
- ▁DRUM
- ▁CHEST
- ▁PICTURE
- ▁LENGTH
- ▁CONTINUE
- ▁COMFORTABLE
- ▁FISH
- ▁PHOTO
- ▁LOOSE
- ▁SKI
- ▁LIFE
- ▁DEGREE
- ▁OPTION
- ▁WORD
- ▁SHARP
- ▁SHOOT
- ▁FOUND
- ▁STRONG
- ▁QUITE
- ▁THIRD
- ▁GLUE
- ▁MIND
- ▁DEFINITELY
- ▁EASIER
- GRAPH
- ▁HOOK
- ▁CLEAR
- ▁POSE
- ▁BUTTON
- ▁CHOOSE
- ▁THICK
- ▁SYSTEM
- ▁PERFECT
- ▁BEAUTIFUL
- ▁SPOT
- ▁GROW
- ▁SIGN
- ▁ELSE
- ▁CONNECT
- ▁SELECT
- ▁PUNCH
- ▁DIRECTION
- ▁WRAP
- ▁RELEASE
- QUI
- SIDE
- ▁CAREFUL
- ▁VIDEO
- ▁INSTEAD
- ▁CIRCLE
- ▁WIRE
- ▁NOSE
- ▁AMOUNT
- ▁FOCUS
- ▁NORMAL
- ▁MAJOR
- ▁WHETHER
- ▁SURFACE
- ▁THUMB
- ▁DRIVE
- ▁SCREW
- ▁POSSIBLE
- ▁OBVIOUSLY
- ▁COMMON
- ▁REGULAR
- ▁ADJUST
- ▁WIDE
- ▁BLADE
- ▁FRET
- ▁RECOMMEND
- ▁BOWL
- BOARD
- ▁IMAGE
- ▁DEPENDING
- ▁PROTECT
- ▁CLOTH
- ▁HEALTH
- ▁WRIST
- ▁CLUB
- ▁DRINK
- ▁SINCE
- ▁FRIEND
- '00'
- ▁RUNNING
- ▁ITSELF
- ▁RECORD
- ▁SWING
- ▁DIRECT
- ▁MATERIAL
- ▁YO
- ▁LEAST
- ▁EXACTLY
- ▁BEGINNING
- ▁SLIGHTLY
- ▁TREAT
- ▁CAMERA
- ▁QUARTER
- ▁WINDOW
- '8'
- ▁SOMEBODY
- ▁BURN
- ▁DEMONSTRATE
- ▁DIFFERENCE
- ▁COMPUTER
- IBLE
- ▁SHOE
- ▁PERFORM
- ▁SQUARE
- ▁CONSIDER
- ▁DRILL
- ▁TEXT
- ▁FILE
- ▁RUB
- ▁FABRIC
- ▁HUNDRED
- ▁GRIP
- ▁CHARACTER
- ▁SPECIFIC
- ▁KNOT
- ▁CURL
- ▁STITCH
- ▁BLEND
- ▁FRAME
- ▁THIRTY
- '1'
- ▁HORSE
- ▁ATTACH
- ▁GROUP
- ▁STROKE
- ▁GUITAR
- ▁APART
- ▁MACHINE
- ▁CLASS
- ▁COMB
- ▁ROOT
- ▁HELLO
- ▁ENERGY
- ▁ATTACK
- ▁CORRECT
- ▁EXTEND
- ▁MINOR
- ▁PROFESSIONAL
- ▁MONEY
- ▁STRIP
- ▁FLAVOR
- ▁EVERYBODY
- ▁RULE
- ▁DIFFICULT
- ▁PROJECT
- ▁DISCUSS
- ▁FIGURE
- ▁HOWEVER
- ▁FINAL
- ▁STRENGTH
- ▁ENTIRE
- ▁FIELD
- ▁CONTACT
- ▁SUPPORT
- ▁PALM
- ▁SERIES
- ▁ENJOY
- '6'
- ▁WORLD
- ▁DECIDE
- ▁SPEAK
- ▁SEVERAL
- ▁WRITE
- ▁PROGRAM
- ABILITY
- ▁KNIFE
- ▁PLASTIC
- ▁ORGAN
- '7'
- ▁UNDERSTAND
- ▁FIFTEEN
- ▁FLEX
- ▁INFORMATION
- ▁TWELVE
- ▁DETAIL
- ▁STRIKE
- ▁ACTUAL
- ▁SPRAY
- ▁LOCAL
- ▁MOUTH
- ▁NIGHT
- ▁VEHICLE
- ▁OPPOSITE
- ▁SCHOOL
- '9'
- ▁QUESTION
- ▁SPECIAL
- ▁BIGGER
- ▁DEVELOP
- ▁PEPPER
- ▁PREFER
- Q
- '%'
- ']'
- '['
- '&'
- ','
- _
- '#'
- '='
- '@'
- +
- '*'
- $
- '~'
- <sos/eos>
init: null
input_size: null
ctc_conf:
ignore_nan_grad: true
model_conf:
ctc_weight: 0.0
lsm_weight: 0.15
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram1000/bpe.model
non_linguistic_symbols: data/nlsyms
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 256
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_vid_sum/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: abs_pos
selfattention_layer_type: lf_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
attention_windows:
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
- 40
attention_dilation:
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
- 1
attention_mode: tvm
decoder: transformer
decoder_conf:
attention_heads: 4
linear_units: 512
num_blocks: 6
dropout_rate: 0.15
positional_dropout_rate: 0.15
self_attention_dropout_rate: 0.15
src_attention_dropout_rate: 0.15
required:
- output_dir
- token_list
version: 0.10.0
distributed: true
```
</details>
Please cite the following paper if you use this recipe:
```BibTex
@misc{sharma2022speech,
title={Speech Summarization using Restricted Self-Attention},
author={Roshan Sharma and Shruti Palaskar and Alan W Black and Florian Metze},
year={2022},
eprint={2110.06263},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
      title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```
|
solozorro/tianchi
|
solozorro
| 2022-02-15T17:27:07Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
Xibanya/sunset_city
|
Xibanya
| 2022-02-15T16:31:37Z | 0 | 3 | null |
[
"PyTorch",
"Transformers",
"text-to-image",
"ru",
"en",
"license:cc-by-sa-4.0",
"region:us"
] |
text-to-image
| 2022-03-02T23:29:05Z |
---
license: cc-by-sa-4.0
language:
- ru
- en
pipeline_tag: text-to-image
tags:
- PyTorch
- Transformers
---
# Sunset Cities
This is the [Malevich](https://huggingface.co/sberbank-ai/rudalle-Malevich) ruDALL-E model finetuned on anime screenshots of big cities at sunset.
<img style="text-align:center; display:block;" src="https://huggingface.co/Xibanya/sunset_city/resolve/main/citysunset.png" width="256">
### installation
```
pip install rudalle translate
```
### How to use
Basic implementation to get a list of image data objects.
```python
import torch
from translate import Translator
from rudalle import get_rudalle_model, get_tokenizer, get_vae
from rudalle.pipelines import generate_images
model = get_rudalle_model('Malevich', pretrained=True, fp16=True, device='cuda')
# CHECKPOINT_PATH should point to the finetuned checkpoint downloaded from this repository
model.load_state_dict(torch.load(CHECKPOINT_PATH))
vae = get_vae().to('cuda')
tokenizer = get_tokenizer()
input_text = Translator(to_lang='ru').translate('city at sunset')
images, _ = generate_images(
text=input_text,
tokenizer=tokenizer, dalle=model, vae=vae,
images_num=1,
top_k=2048,
top_p=0.95,
temperature=1.0
)
```
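If the returned objects are PIL images, as in recent `rudalle` releases, they can be written straight to disk:
```python
# Assumes generate_images returned a list of PIL images (recent rudalle releases do this).
for i, img in enumerate(images):
    img.save(f'sunset_city_{i}.png')
```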
The Malevich model only recognizes input in Russian. If you're going to paste Cyrillic directly into the code rather than filter an English prompt through the translate API, you will need to put this at the top of the file:
```python
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
```
|
AKulk/wav2vec2-base-timit-epochs15
|
AKulk
| 2022-02-15T14:26:13Z | 21 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-epochs15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-epochs15
This model is a fine-tuned version of [AKulk/wav2vec2-base-timit-epochs10](https://huggingface.co/AKulk/wav2vec2-base-timit-epochs10) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 80
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
xxr/bert-base-uncased-issues-128
|
xxr
| 2022-02-15T14:09:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- null
model_index:
- name: bert-base-uncased-issues-128
results:
- task:
name: Masked Language Modeling
type: fill-mask
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2109
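As a masked-language model it can be tried directly with the `fill-mask` pipeline (a minimal sketch; the repo id is the one from this card):
```python
from transformers import pipeline

# load the fine-tuned masked-language model from the Hub
fill_mask = pipeline("fill-mask", model="xxr/bert-base-uncased-issues-128")

# rank candidate tokens for the [MASK] position
print(fill_mask("This issue is related to the [MASK] module."))
```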
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.9845 | 1.0 | 1163 | 1.6403 |
| 1.5695 | 2.0 | 2326 | 1.4212 |
| 1.4221 | 3.0 | 3489 | 1.3714 |
| 1.3302 | 4.0 | 4652 | 1.3592 |
| 1.2734 | 5.0 | 5815 | 1.2781 |
| 1.2143 | 6.0 | 6978 | 1.2286 |
| 1.1704 | 7.0 | 8141 | 1.2492 |
| 1.1261 | 8.0 | 9304 | 1.2044 |
| 1.0812 | 9.0 | 10467 | 1.1878 |
| 1.0657 | 10.0 | 11630 | 1.2177 |
| 1.0319 | 11.0 | 12793 | 1.1428 |
| 1.0063 | 12.0 | 13956 | 1.0910 |
| 0.9731 | 13.0 | 15119 | 1.1111 |
| 0.9674 | 14.0 | 16282 | 1.1699 |
| 0.9391 | 15.0 | 17445 | 1.0805 |
| 0.9381 | 16.0 | 18608 | 1.2109 |
### Framework versions
- Transformers 4.8.0
- Pytorch 1.9.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
joe5campbell/BERT_Tweet_Sentiment_10k
|
joe5campbell
| 2022-02-15T12:42:41Z | 9 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: BERT_Tweet_Sentiment_10k
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# BERT_Tweet_Sentiment_10k
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3891
- Train Accuracy: 0.8273
- Validation Loss: 0.4749
- Validation Accuracy: 0.8073
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3891 | 0.8273 | 0.4749 | 0.8073 | 0 |
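A minimal TensorFlow usage sketch (it assumes the tokenizer was pushed with the model; if not, the `bert-base-uncased` tokenizer it was fine-tuned from should be compatible):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

model_id = "joe5campbell/BERT_Tweet_Sentiment_10k"
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")  # assumption: base tokenizer matches
model = TFAutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(["I love this!", "This is awful."], padding=True, truncation=True, return_tensors="tf")
probs = tf.nn.softmax(model(**inputs).logits, axis=-1)
print(probs)
```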
### Framework versions
- Transformers 4.16.2
- TensorFlow 2.8.0
- Tokenizers 0.11.0
|
xujiacheng127/anchi-bert
|
xujiacheng127
| 2022-02-15T12:01:06Z | 0 | 2 | null |
[
"pytorch",
"region:us"
] | null | 2022-03-02T23:29:05Z |
```python
import json
import requests

headers = {"Authorization": f"Bearer {API_TOKEN}"}  # API_TOKEN: your Hugging Face API token
# note: this URL queries bert-base-uncased; for this repo it would presumably be
# https://api-inference.huggingface.co/models/xujiacheng127/anchi-bert
API_URL = "https://api-inference.huggingface.co/models/bert-base-uncased"

def query(payload):
    data = json.dumps(payload)
    response = requests.request("POST", API_URL, headers=headers, data=data)
    return json.loads(response.content.decode("utf-8"))

data = query({"inputs": "The answer to the universe is [MASK]."})
```
|
MhF/distilbert-base-uncased-finetuned-emotion
|
MhF
| 2022-02-15T05:38:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9217985126397109
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2232
- Accuracy: 0.9215
- F1: 0.9218
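A minimal usage sketch with the `text-classification` pipeline (label names come from the emotion dataset config, not from this card):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="MhF/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
```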
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8098 | 1.0 | 250 | 0.3138 | 0.9025 | 0.9001 |
| 0.2429 | 2.0 | 500 | 0.2232 | 0.9215 | 0.9218 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jatinshah/bert-finetuned-squad
|
jatinshah
| 2022-02-15T02:37:28Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: bert-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
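A minimal extractive question-answering sketch with the `question-answering` pipeline (the question and context below are only examples):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="jatinshah/bert-finetuned-squad")

result = qa(
    question="What dataset was the model fine-tuned on?",
    context="The model was fine-tuned on the SQuAD dataset for extractive question answering.",
)
print(result["answer"], result["score"])
```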
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0a0+0aef44c
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Rafat/wav2vec2-base-timit-demo-colab
|
Rafat
| 2022-02-15T01:18:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4229
- Wer: 0.2386
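A minimal transcription sketch (it assumes the processor was saved alongside the model, as the demo notebook does, and that the input is 16 kHz mono audio, e.g. loaded with `soundfile`):
```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "Rafat/wav2vec2-base-timit-demo-colab"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

speech, sample_rate = sf.read("sample.wav")  # hypothetical 16 kHz mono file
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids))
```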
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.5486 | 4.0 | 500 | 2.1672 | 0.9876 |
| 0.6819 | 8.0 | 1000 | 0.4502 | 0.3301 |
| 0.2353 | 12.0 | 1500 | 0.4352 | 0.2841 |
| 0.1427 | 16.0 | 2000 | 0.4237 | 0.2584 |
| 0.0945 | 20.0 | 2500 | 0.4409 | 0.2545 |
| 0.0671 | 24.0 | 3000 | 0.4257 | 0.2413 |
| 0.0492 | 28.0 | 3500 | 0.4229 | 0.2386 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
speech-seq2seq/wav2vec2-2-bert-large-no-adapter-frozen-enc
|
speech-seq2seq
| 2022-02-15T00:30:50Z | 15 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 11.7664
- Wer: 2.0133
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.171 | 0.28 | 500 | 8.6956 | 2.0055 |
| 5.307 | 0.56 | 1000 | 8.5958 | 2.0096 |
| 5.1449 | 0.84 | 1500 | 10.4208 | 2.0115 |
| 6.1351 | 1.12 | 2000 | 10.2950 | 2.0059 |
| 6.2997 | 1.4 | 2500 | 10.6762 | 2.0115 |
| 6.1394 | 1.68 | 3000 | 10.9190 | 2.0110 |
| 6.1868 | 1.96 | 3500 | 11.0166 | 2.0112 |
| 5.9647 | 2.24 | 4000 | 11.4154 | 2.0141 |
| 6.2202 | 2.52 | 4500 | 11.5837 | 2.0152 |
| 5.9612 | 2.8 | 5000 | 11.7664 | 2.0133 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
Arnold/wav2vec2-large-xlsr-hausa2-demo-colab
|
Arnold
| 2022-02-14T23:42:35Z | 9 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-hausa2-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-hausa2-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2993
- Wer: 0.4826
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.6e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 13
- gradient_accumulation_steps: 3
- total_train_batch_size: 36
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.1549 | 12.5 | 400 | 2.7289 | 1.0 |
| 2.0566 | 25.0 | 800 | 0.4582 | 0.6768 |
| 0.4423 | 37.5 | 1200 | 0.3037 | 0.5138 |
| 0.2991 | 50.0 | 1600 | 0.2993 | 0.4826 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_100_Epochs
|
jfarray
| 2022-02-14T22:15:16Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
NicoGrageda/wav2vec2-base-timit-demo-colab
|
NicoGrageda
| 2022-02-14T21:18:23Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-timit-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4519
- Wer: 0.3375
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4351 | 4.0 | 500 | 1.2740 | 0.8259 |
| 0.5828 | 8.0 | 1000 | 0.4276 | 0.4403 |
| 0.2274 | 12.0 | 1500 | 0.4646 | 0.3739 |
| 0.135 | 16.0 | 2000 | 0.4320 | 0.3662 |
| 0.0962 | 20.0 | 2500 | 0.4831 | 0.3607 |
| 0.0719 | 24.0 | 3000 | 0.4506 | 0.3463 |
| 0.0556 | 28.0 | 3500 | 0.4519 | 0.3375 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
jfarray/Model_dccuchile_bert-base-spanish-wwm-uncased_10_Epochs
|
jfarray
| 2022-02-14T21:06:23Z | 2 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_bert-base-multilingual-uncased_100_Epochs
|
jfarray
| 2022-02-14T20:23:54Z | 8 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_bert-base-multilingual-uncased_50_Epochs
|
jfarray
| 2022-02-14T19:44:38Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
huggingtweets/magicrealismbot
|
huggingtweets
| 2022-02-14T18:15:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/668872745329885184/67TNOs2A_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Magic Realism Bot</div>
<div style="text-align: center; font-size: 14px;">@magicrealismbot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Magic Realism Bot.
| Data | Magic Realism Bot |
| --- | --- |
| Tweets downloaded | 3250 |
| Retweets | 0 |
| Short tweets | 0 |
| Tweets kept | 3250 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1nx0qvg7/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @magicrealismbot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/9vq0074d) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/9vq0074d/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/magicrealismbot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
akshaychaudhary/distilbert-base-uncased-finetuned-cloud2-ner
|
akshaychaudhary
| 2022-02-14T17:33:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-cloud2-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud2-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8866
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.8453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 162 | 0.7804 | 0.0 | 0.0 | 0.0 | 0.8447 |
| No log | 2.0 | 324 | 0.8303 | 0.0 | 0.0 | 0.0 | 0.8465 |
| No log | 3.0 | 486 | 0.8866 | 0.0 | 0.0 | 0.0 | 0.8453 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-small-sh
|
NewT5SharedHeadsSharedKeyValues
| 2022-02-14T16:23:08Z | 6 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- t5-new-failed
---
# Test
Hf T5: -146.39734268188477
MTF T5: -72.12132263183594
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-tiny-sh
|
NewT5SharedHeadsSharedKeyValues
| 2022-02-14T16:22:51Z | 5 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- t5-new-failed
---
# Test
Hf T5: -149.6728801727295
MTF T5: -74.4166259765625
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-large-sh
|
NewT5SharedHeadsSharedKeyValues
| 2022-02-14T16:22:44Z | 6 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- t5-new-failed
---
# Test
Hf T5: -110.35000801086426
MTF T5: -57.58127975463867
|
NewT5SharedHeadsSharedKeyValues/t5-efficient-base-sh
|
NewT5SharedHeadsSharedKeyValues
| 2022-02-14T16:22:41Z | 4 | 0 |
transformers
|
[
"transformers",
"t5",
"text2text-generation",
"t5-new-failed",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-02T23:29:04Z |
---
tags:
- t5-new-failed
---
# Test
Hf T5: -95.86687088012695
MTF T5: -67.8558578491211
|
groar/gpt-neo-1.3B-finetuned-escape3
|
groar
| 2022-02-14T15:17:25Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-1.3B-finetuned-escape3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-1.3B-finetuned-escape3
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
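A minimal generation sketch with the `text-generation` pipeline (note the 1.3B checkpoint needs several GB of memory):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="groar/gpt-neo-1.3B-finetuned-escape3")
print(generator("The door was locked, so", max_length=60)[0]["generated_text"])
```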
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
leonadase/distilbert-base-uncased-finetuned-ner
|
leonadase
| 2022-02-14T13:51:21Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.9210439378923027
- name: Recall
type: recall
value: 0.9356751314464705
- name: F1
type: f1
value: 0.9283018867924528
- name: Accuracy
type: accuracy
value: 0.983176322938345
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0611
- Precision: 0.9210
- Recall: 0.9357
- F1: 0.9283
- Accuracy: 0.9832
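A minimal usage sketch with the token-classification pipeline, grouping word pieces into entity spans:
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="leonadase/distilbert-base-uncased-finetuned-ner",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```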
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2341 | 1.0 | 878 | 0.0734 | 0.9118 | 0.9206 | 0.9162 | 0.9799 |
| 0.0546 | 2.0 | 1756 | 0.0591 | 0.9210 | 0.9350 | 0.9279 | 0.9829 |
| 0.0297 | 3.0 | 2634 | 0.0611 | 0.9210 | 0.9357 | 0.9283 | 0.9832 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
reach-vb/wav2vec2-large-xls-r-1B-common_voice7-lt-ft
|
reach-vb
| 2022-02-14T13:39:07Z | 4 | 1 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-1B-common_voice7-lt-ft
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-1B-common_voice7-lt-ft
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5101
- Wer: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 36
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 72
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 900
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:---:|
| 2.3491 | 31.24 | 500 | 3.9827 | 1.0 |
| 0.0421 | 62.48 | 1000 | 2.9544 | 1.0 |
| 0.0163 | 93.73 | 1500 | 2.5101 | 1.0 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.10.3
|
huggingartists/bill-wurtz
|
huggingartists
| 2022-02-14T08:56:26Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"huggingartists",
"lyrics",
"lm-head",
"causal-lm",
"en",
"dataset:huggingartists/bill-wurtz",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
datasets:
- huggingartists/bill-wurtz
tags:
- huggingartists
- lyrics
- lm-head
- causal-lm
widget:
- text: "I am"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:DISPLAY_1; margin-left: auto; margin-right: auto; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://images.genius.com/0d4b35ed37091d5f6fd59806810e14ca.1000x1000x1.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 HuggingArtists Model 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Bill Wurtz</div>
<a href="https://genius.com/artists/bill-wurtz">
<div style="text-align: center; font-size: 14px;">@bill-wurtz</div>
</a>
</div>
I was made with [huggingartists](https://github.com/AlekseyKorshuk/huggingartists).
Create your own bot based on your favorite artist with [the demo](https://colab.research.google.com/github/AlekseyKorshuk/huggingartists/blob/master/huggingartists-demo.ipynb)!
## How does it work?
To understand how the model was developed, check the [W&B report](https://wandb.ai/huggingartists/huggingartists/reportlist).
## Training data
The model was trained on lyrics from Bill Wurtz.
Dataset is available [here](https://huggingface.co/datasets/huggingartists/bill-wurtz).
And can be used with:
```python
from datasets import load_dataset
dataset = load_dataset("huggingartists/bill-wurtz")
```
[Explore the data](https://wandb.ai/huggingartists/huggingartists/runs/27ysbe74/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on Bill Wurtz's lyrics.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/huggingartists/huggingartists/runs/2f8oa51l) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/huggingartists/huggingartists/runs/2f8oa51l/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingartists/bill-wurtz')
generator("I am", num_return_sequences=5)
```
Or with Transformers library:
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("huggingartists/bill-wurtz")
model = AutoModelWithLMHead.from_pretrained("huggingartists/bill-wurtz")
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Aleksey Korshuk*
[](https://github.com/AlekseyKorshuk)
[](https://twitter.com/intent/follow?screen_name=alekseykorshuk)
[](https://t.me/joinchat/_CQ04KjcJ-4yZTky)
For more details, visit the project repository.
[](https://github.com/AlekseyKorshuk/huggingartists)
|
sshasnain/wav2vec2-xls-r-300m-bangla-command-synthetic
|
sshasnain
| 2022-02-14T08:39:07Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-bangla-command-synthetic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-bangla-command-synthetic
This model is a fine-tuned version of [sshasnain/wav2vec2-xls-r-300m-bangla-command](https://huggingface.co/sshasnain/wav2vec2-xls-r-300m-bangla-command) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.0254
- eval_wer: 0.4311
- eval_runtime: 2.5036
- eval_samples_per_second: 76.689
- eval_steps_per_second: 9.586
- epoch: 35.71
- step: 1000
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
reatiny/distilbert-base-uncased-finetuned-emotion
|
reatiny
| 2022-02-14T07:44:51Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9217811693486851
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2226
- Accuracy: 0.9215
- F1: 0.9218
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8235 | 1.0 | 250 | 0.3190 | 0.901 | 0.8979 |
| 0.2497 | 2.0 | 500 | 0.2226 | 0.9215 | 0.9218 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0
- Datasets 1.15.1
- Tokenizers 0.11.0
|
DeltaHub/lora_t5-base_mrpc
|
DeltaHub
| 2022-02-14T06:32:18Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"endpoints_compatible",
"region:us"
] | null | 2022-03-02T23:29:04Z |
This delta checkpoint needs to be used together with OpenDelta:
```python
from transformers import AutoModelForSeq2SeqLM
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
from opendelta import AutoDeltaModel
delta = AutoDeltaModel.from_finetuned("DeltaHub/lora_t5-base_mrpc", backbone_model=t5)
delta.log()
```
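Once the delta is attached, the backbone is used like any other T5 checkpoint. A minimal sketch for MRPC-style paraphrase scoring (the `mrpc sentence1: ... sentence2: ...` prompt format is the one from the original T5 setup, which this LoRA delta is assumed to follow):
```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("t5-base")
prompt = (
    "mrpc sentence1: The company posted higher profits this quarter. "
    "sentence2: Quarterly profits at the company rose."
)
inputs = tokenizer(prompt, return_tensors="pt")
# `t5` is the backbone from the snippet above, with the delta already applied
outputs = t5.generate(**inputs, max_length=5)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))  # expected: "equivalent" or "not_equivalent"
```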
|
jatinshah/marian-finetuned-kde4-en-to-fr
|
jatinshah
| 2022-02-14T05:47:21Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
translation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8815
- Score: 52.2204
- Counts: [166010, 120787, 91973, 70929]
- Totals: [228361, 207343, 189354, 173335]
- Precisions: [72.69630103213771, 58.254679444205976, 48.57198686058916, 40.92018345977443]
- Bp: 0.9695
- Sys Len: 228361
- Ref Len: 235434
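A minimal usage sketch with the translation pipeline:
```python
from transformers import pipeline

translator = pipeline("translation", model="jatinshah/marian-finetuned-kde4-en-to-fr")
print(translator("Default to expanded threads"))
```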
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0a0+0aef44c
- Datasets 1.18.3
- Tokenizers 0.11.0
|
fastai/fastbook_06_multicat_PASCAL
|
fastai
| 2022-02-14T04:40:16Z | 2 | 0 |
fastai
|
[
"fastai",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
tags:
- fastai
---
# Amazing!
Congratulations on hosting your fastai model on the Hugging Face Hub!
# Some next steps
1. Fill out this model card with more information (template below and [documentation here](https://huggingface.co/docs/hub/model-repos))!
2. Create a demo in Gradio or Streamlit using the 🤗Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)).
3. Join our fastai community on the Hugging Face Discord!
Greetings fellow fastlearner 🤝!
---
# Model card
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
|
stellaathena/test-med
|
stellaathena
| 2022-02-14T02:28:29Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-02T23:29:05Z |
---
license: apache-2.0
---
|
jfarray/Model_bert-base-multilingual-uncased_10_Epochs
|
jfarray
| 2022-02-13T23:21:43Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 256 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 256, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 256, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
groar/gpt-neo-1.3B-finetuned-escape2
|
groar
| 2022-02-13T20:59:30Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: gpt-neo-1.3B-finetuned-escape2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt-neo-1.3B-finetuned-escape2
This model is a fine-tuned version of [EleutherAI/gpt-neo-1.3B](https://huggingface.co/EleutherAI/gpt-neo-1.3B) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jfarray/Model_all-distilroberta-v1_100_Epochs
|
jfarray
| 2022-02-13T20:50:24Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_all-distilroberta-v1_50_Epochs
|
jfarray
| 2022-02-13T20:18:37Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_all-distilroberta-v1_10_Epochs
|
jfarray
| 2022-02-13T19:47:38Z | 10 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
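The embeddings can then be compared directly; for example, a cosine-similarity score between the two example sentences (a sketch, assuming a sentence-transformers version that provides `util.cos_sim`; replace `{MODEL_NAME}` with this repository id):
```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('{MODEL_NAME}')  # replace {MODEL_NAME} with this repository id
embeddings = model.encode(
    ["This is an example sentence", "Each sentence is converted"],
    convert_to_tensor=True,
)
print(util.cos_sim(embeddings[0], embeddings[1]))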
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_all-distilroberta-v1_5_Epochs
|
jfarray
| 2022-02-13T19:40:19Z | 10 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_all-distilroberta-v1_1_Epochs
|
jfarray
| 2022-02-13T19:34:14Z | 9 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"roberta",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: RobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Normalize()
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
castorini/dkrr-dpr-nq-retriever
|
castorini
| 2022-02-13T17:46:38Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"feature-extraction",
"arxiv:2012.04584",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
This model is converted from the original DKRR [repo](https://github.com/facebookresearch/FiD) and ported into Pyserini. The underlying work is described in:
```
@misc{izacard2020distilling,
title={Distilling Knowledge from Reader to Retriever for Question Answering},
author={Gautier Izacard and Edouard Grave},
year={2020},
eprint={2012.04584},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
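A minimal sketch of loading the checkpoint as a plain BERT encoder with 🤗 Transformers is shown below; the CLS pooling and the example question are assumptions for illustration, and retrieval at scale is intended to go through Pyserini rather than this snippet:
```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("castorini/dkrr-dpr-nq-retriever")
model = AutoModel.from_pretrained("castorini/dkrr-dpr-nq-retriever")

# Encode a sample question; CLS pooling is an assumption made for this sketch.
inputs = tokenizer("who wrote the declaration of independence?", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
query_embedding = outputs.last_hidden_state[:, 0, :]
print(query_embedding.shape)
```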
|
timtarusov/distilbert-base-uncased-finetuned-emotion
|
timtarusov
| 2022-02-13T08:48:03Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9211076096482195
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2274
- Accuracy: 0.921
- F1: 0.9211
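A quick way to try the model is through the `text-classification` pipeline (a minimal sketch; the example sentence is arbitrary):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="timtarusov/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled with how this turned out!"))
```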
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8308 | 1.0 | 250 | 0.3319 | 0.8955 | 0.8897 |
| 0.2516 | 2.0 | 500 | 0.2274 | 0.921 | 0.9211 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
mujeensung/albert-base-v2_mnli_bc
|
mujeensung
| 2022-02-13T05:23:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"albert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
language:
- en
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: albert-base-v2_mnli_bc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE MNLI
type: glue
args: mnli
metrics:
- name: Accuracy
type: accuracy
value: 0.9398776667163956
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2_mnli_bc
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2952
- Accuracy: 0.9399
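A minimal sketch of scoring a premise–hypothesis pair with this checkpoint is shown below; the sentences are illustrative, and the meaning of each output class should be checked against the model's `id2label` mapping:
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("mujeensung/albert-base-v2_mnli_bc")
model = AutoModelForSequenceClassification.from_pretrained("mujeensung/albert-base-v2_mnli_bc")

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    probs = torch.softmax(model(**inputs).logits, dim=-1)
print(probs)  # class meanings depend on the model's id2label mapping
```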
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2159 | 1.0 | 16363 | 0.2268 | 0.9248 |
| 0.1817 | 2.0 | 32726 | 0.2335 | 0.9347 |
| 0.0863 | 3.0 | 49089 | 0.3014 | 0.9401 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
thyagosme/wav2vec2-base-demo-colab
|
thyagosme
| 2022-02-13T02:14:29Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-base-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4657
- Wer: 0.3422
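For a quick transcription test, the checkpoint can be loaded through the `automatic-speech-recognition` pipeline (a sketch; the audio path is a placeholder and should point to a 16 kHz mono recording, which is what wav2vec2-base expects):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="thyagosme/wav2vec2-base-demo-colab")
print(asr("path/to/audio.wav"))  # placeholder path to a local 16 kHz mono audio file
```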
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.4477 | 4.0 | 500 | 1.3352 | 0.9039 |
| 0.5972 | 8.0 | 1000 | 0.4752 | 0.4509 |
| 0.2224 | 12.0 | 1500 | 0.4604 | 0.4052 |
| 0.1308 | 16.0 | 2000 | 0.4542 | 0.3866 |
| 0.0889 | 20.0 | 2500 | 0.4730 | 0.3589 |
| 0.0628 | 24.0 | 3000 | 0.4984 | 0.3657 |
| 0.0479 | 28.0 | 3500 | 0.4657 | 0.3422 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
Arnold/wav2vec2-hausa2-demo-colab
|
Arnold
| 2022-02-13T01:24:29Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-hausa2-demo-colab
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-hausa2-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2032
- Wer: 0.7237
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1683 | 12.49 | 400 | 1.0279 | 0.7211 |
| 0.0995 | 24.98 | 800 | 1.2032 | 0.7237 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_5_Epochs
|
jfarray
| 2022-02-12T22:09:20Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-mpnet-base-v2_1_Epochs
|
jfarray
| 2022-02-12T21:48:20Z | 5 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"xlm-roberta",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: XLMRobertaModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_100_Epochs
|
jfarray
| 2022-02-12T21:38:44Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 100,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 110,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_10_Epochs
|
jfarray
| 2022-02-12T20:47:55Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_5_Epochs
|
jfarray
| 2022-02-12T20:37:59Z | 3 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_paraphrase-multilingual-MiniLM-L12-v2_1_Epochs
|
jfarray
| 2022-02-12T20:28:53Z | 6 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 2,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ArBert/roberta-base-finetuned-ner-kmeans
|
ArBert
| 2022-02-12T16:54:18Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"dataset:conll2003",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- conll2003
metrics:
- precision
- recall
- f1
model-index:
- name: roberta-base-finetuned-ner-kmeans
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: conll2003
type: conll2003
args: conll2003
metrics:
- name: Precision
type: precision
value: 0.955868544600939
- name: Recall
type: recall
value: 0.9614658103513412
- name: F1
type: f1
value: 0.9586590074394953
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner-kmeans
This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0592
- Precision: 0.9559
- Recall: 0.9615
- F1: 0.9587
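Entities can be extracted with the `token-classification` pipeline (a sketch; `aggregation_strategy="simple"` groups sub-word pieces into whole entity spans, and the example sentence is arbitrary):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="ArBert/roberta-base-finetuned-ner-kmeans",
    aggregation_strategy="simple",
)
print(ner("Hugging Face is based in New York City."))
```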
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| 0.0248 | 1.0 | 878 | 0.0609 | 0.9507 | 0.9561 | 0.9534 |
| 0.0163 | 2.0 | 1756 | 0.0640 | 0.9515 | 0.9578 | 0.9546 |
| 0.0089 | 3.0 | 2634 | 0.0592 | 0.9559 | 0.9615 | 0.9587 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jfarray/Model_distiluse-base-multilingual-cased-v1_50_Epochs
|
jfarray
| 2022-02-12T14:26:35Z | 132 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 50,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 55,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_distiluse-base-multilingual-cased-v1_10_Epochs
|
jfarray
| 2022-02-12T13:53:59Z | 140 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 10,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 11,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
jfarray/Model_distiluse-base-multilingual-cased-v1_5_Epochs
|
jfarray
| 2022-02-12T13:43:01Z | 131 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 512 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 11 with parameters:
```
{'batch_size': 15, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 5,
"evaluation_steps": 1,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 6,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Dense({'in_features': 768, 'out_features': 512, 'bias': True, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
ArBert/roberta-base-finetuned-ner-agglo-twitter
|
ArBert
| 2022-02-12T11:40:08Z | 8 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
model-index:
- name: roberta-base-finetuned-ner-agglo-twitter
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-ner-agglo-twitter
This model is a fine-tuned version of [ArBert/roberta-base-finetuned-ner](https://huggingface.co/ArBert/roberta-base-finetuned-ner) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6645
- Precision: 0.6885
- Recall: 0.7665
- F1: 0.7254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|
| No log | 1.0 | 245 | 0.2820 | 0.6027 | 0.7543 | 0.6700 |
| No log | 2.0 | 490 | 0.2744 | 0.6308 | 0.7864 | 0.7000 |
| 0.2301 | 3.0 | 735 | 0.2788 | 0.6433 | 0.7637 | 0.6984 |
| 0.2301 | 4.0 | 980 | 0.3255 | 0.6834 | 0.7221 | 0.7022 |
| 0.1153 | 5.0 | 1225 | 0.3453 | 0.6686 | 0.7439 | 0.7043 |
| 0.1153 | 6.0 | 1470 | 0.3988 | 0.6797 | 0.7420 | 0.7094 |
| 0.0617 | 7.0 | 1715 | 0.4711 | 0.6702 | 0.7259 | 0.6969 |
| 0.0617 | 8.0 | 1960 | 0.4904 | 0.6904 | 0.7505 | 0.7192 |
| 0.0328 | 9.0 | 2205 | 0.5088 | 0.6591 | 0.7713 | 0.7108 |
| 0.0328 | 10.0 | 2450 | 0.5709 | 0.6468 | 0.7788 | 0.7067 |
| 0.019 | 11.0 | 2695 | 0.5570 | 0.6642 | 0.7533 | 0.7059 |
| 0.019 | 12.0 | 2940 | 0.5574 | 0.6899 | 0.7656 | 0.7258 |
| 0.0131 | 13.0 | 3185 | 0.5858 | 0.6952 | 0.7609 | 0.7265 |
| 0.0131 | 14.0 | 3430 | 0.6239 | 0.6556 | 0.7826 | 0.7135 |
| 0.0074 | 15.0 | 3675 | 0.5931 | 0.6825 | 0.7599 | 0.7191 |
| 0.0074 | 16.0 | 3920 | 0.6364 | 0.6785 | 0.7580 | 0.7161 |
| 0.005 | 17.0 | 4165 | 0.6437 | 0.6855 | 0.7580 | 0.7199 |
| 0.005 | 18.0 | 4410 | 0.6610 | 0.6779 | 0.7599 | 0.7166 |
| 0.0029 | 19.0 | 4655 | 0.6625 | 0.6853 | 0.7656 | 0.7232 |
| 0.0029 | 20.0 | 4900 | 0.6645 | 0.6885 | 0.7665 | 0.7254 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sylviachency/distilbert-base-uncased-finetuned-cola
|
sylviachency
| 2022-02-12T06:48:04Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5235221651747541
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9155
- Matthews Correlation: 0.5235
## Model description
More information needed
## Intended uses & limitations
More information needed
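A minimal inference sketch, assuming the checkpoint exposes the standard `transformers` text-classification interface (the label names, typically LABEL_0/LABEL_1, are whatever the fine-tuning run stored):
```python
from transformers import pipeline

# Sketch: score sentences for linguistic acceptability (the CoLA task).
classifier = pipeline(
    "text-classification",
    model="sylviachency/distilbert-base-uncased-finetuned-cola",
)
print(classifier("The book was written by the author."))   # likely acceptable
print(classifier("The book was written the author by."))   # likely unacceptable
```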
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5275 | 1.0 | 535 | 0.5174 | 0.4181 |
| 0.3496 | 2.0 | 1070 | 0.5617 | 0.4857 |
| 0.2359 | 3.0 | 1605 | 0.6661 | 0.5029 |
| 0.1701 | 4.0 | 2140 | 0.8052 | 0.5091 |
| 0.1266 | 5.0 | 2675 | 0.9155 | 0.5235 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
HHousen/household-rooms
|
HHousen
| 2022-02-12T06:21:05Z | 77 | 5 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:04Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: household-rooms
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8482142686843872
---
# household-rooms
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### bathroom

#### bedroom

#### dining room

#### kitchen

#### living room

|
jgammack/multi-qa-MTL-distilbert-base-uncased
|
jgammack
| 2022-02-12T03:52:06Z | 144 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-MTL-distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-MTL-distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased')
model = AutoModel.from_pretrained('jgammack/multi-qa-MTL-distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-MTL-distilbert-base-uncased)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
thyagosme/bert-base-uncased-finetuned-swag
|
thyagosme
| 2022-02-12T02:13:46Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"multiple-choice",
"generated_from_trainer",
"dataset:swag",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
multiple-choice
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- swag
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-swag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-swag
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the swag dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0438
- Accuracy: 0.7915
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.7708 | 1.0 | 4597 | 0.6025 | 0.7659 |
| 0.4015 | 2.0 | 9194 | 0.6287 | 0.7841 |
| 0.1501 | 3.0 | 13791 | 1.0438 | 0.7915 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
jgammack/multi-qa-distilbert-base-uncased
|
jgammack
| 2022-02-11T23:40:41Z | 141 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-02T23:29:05Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# jgammack/multi-qa-distilbert-base-uncased
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('jgammack/multi-qa-distilbert-base-uncased')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
model = AutoModel.from_pretrained('jgammack/multi-qa-distilbert-base-uncased')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=jgammack/multi-qa-distilbert-base-uncased)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
microsoft/codebert-base
|
microsoft
| 2022-02-11T19:59:44Z | 574,944 | 236 |
transformers
|
[
"transformers",
"pytorch",
"tf",
"jax",
"rust",
"roberta",
"feature-extraction",
"arxiv:2002.08155",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
## CodeBERT-base
Pretrained weights for [CodeBERT: A Pre-Trained Model for Programming and Natural Languages](https://arxiv.org/abs/2002.08155).
### Training Data
The model is trained on bi-modal data (documents & code) of [CodeSearchNet](https://github.com/github/CodeSearchNet).
### Training Objective
This model is initialized with Roberta-base and trained with MLM+RTD objective (cf. the paper).
### Usage
Please see [the official repository](https://github.com/microsoft/CodeBERT) for scripts that support "code search" and "code-to-document generation".
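Independently of those scripts, the checkpoint can be loaded as a plain RoBERTa encoder for feature extraction with `transformers`. A minimal sketch (the natural-language/code pair below is illustrative only):
```python
import torch
from transformers import AutoTokenizer, AutoModel

# Sketch: embed a natural-language / code pair with CodeBERT.
tokenizer = AutoTokenizer.from_pretrained("microsoft/codebert-base")
model = AutoModel.from_pretrained("microsoft/codebert-base")

nl = "return the maximum of two numbers"
code = "def maximum(a, b): return a if a > b else b"

inputs = tokenizer(nl, code, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# Contextual embeddings for every token; index [0, 0] is the <s> (CLS-like) vector.
print(outputs.last_hidden_state.shape)
```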
### Reference
1. [CodeBERT trained with Masked LM objective](https://huggingface.co/microsoft/codebert-base-mlm) (suitable for code completion)
2. 🤗 [Hugging Face's CodeBERTa](https://huggingface.co/huggingface/CodeBERTa-small-v1) (small size, 6 layers)
### Citation
```bibtex
@misc{feng2020codebert,
title={CodeBERT: A Pre-Trained Model for Programming and Natural Languages},
author={Zhangyin Feng and Daya Guo and Duyu Tang and Nan Duan and Xiaocheng Feng and Ming Gong and Linjun Shou and Bing Qin and Ting Liu and Daxin Jiang and Ming Zhou},
year={2020},
eprint={2002.08155},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ArBert/bert-base-uncased-finetuned-ner-kmeans
|
ArBert
| 2022-02-11T16:45:09Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:04Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-uncased-finetuned-ner-kmeans
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-ner-kmeans
This model is a fine-tuned version of [ArBert/bert-base-uncased-finetuned-ner](https://huggingface.co/ArBert/bert-base-uncased-finetuned-ner) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1169
- Precision: 0.9084
- Recall: 0.9245
- F1: 0.9164
- Accuracy: 0.9792
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.036 | 1.0 | 1123 | 0.1010 | 0.9086 | 0.9117 | 0.9101 | 0.9779 |
| 0.0214 | 2.0 | 2246 | 0.1094 | 0.9033 | 0.9199 | 0.9115 | 0.9784 |
| 0.014 | 3.0 | 3369 | 0.1169 | 0.9084 | 0.9245 | 0.9164 | 0.9792 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
emre/wav2vec2-xls-r-300m-hy-AM-CV8-v1
|
emre
| 2022-02-11T15:29:46Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-hy-AM-CV8-v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-hy-AM-CV8-v1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9145
- Wer: 0.9598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 170
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 5.7132 | 83.31 | 500 | 1.9274 | 1.0523 |
| 1.017 | 166.62 | 1000 | 0.9145 | 0.9598 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
akshaychaudhary/distilbert-base-uncased-finetuned-cloud-ner
|
akshaychaudhary
| 2022-02-11T15:00:36Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-cloud-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cloud-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0812
- Precision: 0.8975
- Recall: 0.9080
- F1: 0.9027
- Accuracy: 0.9703
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 166 | 0.1326 | 0.7990 | 0.8043 | 0.8017 | 0.9338 |
| No log | 2.0 | 332 | 0.0925 | 0.8770 | 0.8946 | 0.8858 | 0.9618 |
| No log | 3.0 | 498 | 0.0812 | 0.8975 | 0.9080 | 0.9027 | 0.9703 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sshasnain/wav2vec2-xls-r-300m-bangla-command-word-combination-synthetic
|
sshasnain
| 2022-02-11T13:25:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-xls-r-300m-bangla-command-word-combination-synthetic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-bangla-command-word-combination-synthetic
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0068
- Wer: 0.4111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 100
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.2982 | 17.86 | 500 | 2.4580 | 1.1089 |
| 0.9644 | 35.71 | 1000 | 0.1250 | 0.5156 |
| 0.1767 | 53.57 | 1500 | 0.0310 | 0.4267 |
| 0.0912 | 71.43 | 2000 | 0.0149 | 0.4178 |
| 0.0505 | 89.29 | 2500 | 0.0068 | 0.4111 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.0
|
sshasnain/wav2vec2-xls-r-300m-bangla-command
|
sshasnain
| 2022-02-11T13:10:44Z | 7 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"bn",
"audio",
"speech",
"dataset:custom",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language: Bengali
datasets:
- custom
metrics:
- wer
tags:
- bn
- audio
- automatic-speech-recognition
- speech
license: apache-2.0
model-index:
- name: wav2vec2-xls-r-300m-bangla-command
results:
- task:
name: Speech Recognition
type: automatic-speech-recognition
dataset:
name: custom
type: custom
args: ben
metrics:
- name: Test WER
type: wer
value: 0.006
---
# wav2vec2-xls-r-300m-bangla-command
***
## Usage
The model is intended to recognize the following spoken Bengali commands (a minimal inference sketch follows the list):
- '৫ টা কলম দেন'
- 'চেয়ারটা কোথায় রেখেছেন'
- 'ডানের বালতিটার প্রাইজ কেমন'
- 'দশ কেজি আলু কত'
- 'বাজুসের ল্যাপটপটা এসেছে'
- 'বাসার জন্য দরজা আছে'
- 'ম্যাম মোবাইলটা কি আছে'
- 'হ্যালো শ্যাম্পুর দাম বল'
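A minimal transcription sketch, assuming the checkpoint follows the standard Wav2Vec2 CTC interface and that the input clip is mono 16 kHz (the file name is hypothetical):
```python
import torch
import torchaudio
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

model_id = "sshasnain/wav2vec2-xls-r-300m-bangla-command"
processor = Wav2Vec2Processor.from_pretrained(model_id)
model = Wav2Vec2ForCTC.from_pretrained(model_id)

# Load a single command recording and resample it to 16 kHz.
speech, sample_rate = torchaudio.load("command.wav")  # hypothetical input file
speech = torchaudio.functional.resample(speech, sample_rate, 16_000).squeeze(0)

inputs = processor(speech.numpy(), sampling_rate=16_000, return_tensors="pt")
with torch.no_grad():
    logits = model(inputs.input_values).logits

predicted_ids = torch.argmax(logits, dim=-1)
print(processor.batch_decode(predicted_ids)[0])
```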
|
huggingtweets/albinkurti
|
huggingtweets
| 2022-02-11T11:38:45Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/albinkurti/1644579521299/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1425007522067386368/k0GygSdD_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Albin Kurti</div>
<div style="text-align: center; font-size: 14px;">@albinkurti</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Albin Kurti.
| Data | Albin Kurti |
| --- | --- |
| Tweets downloaded | 741 |
| Retweets | 32 |
| Short tweets | 11 |
| Tweets kept | 698 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1yhql26z/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @albinkurti's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/txe5baun) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/txe5baun/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/albinkurti')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
mvip/wav2vec2-large-xls-r-300m-tr
|
mvip
| 2022-02-11T10:58:45Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-tr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-tr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4074
- Wer: 0.4227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.9399 | 4.21 | 400 | 0.7252 | 0.7387 |
| 0.4147 | 8.42 | 800 | 0.4693 | 0.5201 |
| 0.1855 | 12.63 | 1200 | 0.4584 | 0.4848 |
| 0.1256 | 16.84 | 1600 | 0.4464 | 0.4708 |
| 0.0948 | 21.05 | 2000 | 0.4261 | 0.4389 |
| 0.0714 | 25.26 | 2400 | 0.4331 | 0.4349 |
| 0.0532 | 29.47 | 2800 | 0.4074 | 0.4227 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
lgris/wav2vec2-large-xlsr-coraa-portuguese-cv8
|
lgris
| 2022-02-10T23:23:59Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:mozilla-foundation/common_voice_8_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_8_0
model-index:
- name: wav2vec2-large-xlsr-coraa-portuguese-cv8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-coraa-portuguese-cv8
This model is a fine-tuned version of [Edresson/wav2vec2-large-xlsr-coraa-portuguese](https://huggingface.co/Edresson/wav2vec2-large-xlsr-coraa-portuguese) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1626
- Wer: 0.1365
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5614 | 0.1 | 100 | 0.2542 | 0.1986 |
| 0.5181 | 0.19 | 200 | 0.2740 | 0.2146 |
| 0.5056 | 0.29 | 300 | 0.2472 | 0.2068 |
| 0.4747 | 0.39 | 400 | 0.2464 | 0.2166 |
| 0.4627 | 0.48 | 500 | 0.2277 | 0.2041 |
| 0.4403 | 0.58 | 600 | 0.2245 | 0.1977 |
| 0.4413 | 0.68 | 700 | 0.2156 | 0.1968 |
| 0.437 | 0.77 | 800 | 0.2102 | 0.1919 |
| 0.4305 | 0.87 | 900 | 0.2130 | 0.1864 |
| 0.4324 | 0.97 | 1000 | 0.2144 | 0.1902 |
| 0.4217 | 1.06 | 1100 | 0.2230 | 0.1891 |
| 0.3823 | 1.16 | 1200 | 0.2033 | 0.1774 |
| 0.3641 | 1.25 | 1300 | 0.2143 | 0.1830 |
| 0.3707 | 1.35 | 1400 | 0.2034 | 0.1793 |
| 0.3767 | 1.45 | 1500 | 0.2029 | 0.1823 |
| 0.3483 | 1.54 | 1600 | 0.1999 | 0.1740 |
| 0.3577 | 1.64 | 1700 | 0.1928 | 0.1728 |
| 0.3667 | 1.74 | 1800 | 0.1898 | 0.1726 |
| 0.3283 | 1.83 | 1900 | 0.1920 | 0.1688 |
| 0.3571 | 1.93 | 2000 | 0.1904 | 0.1649 |
| 0.3467 | 2.03 | 2100 | 0.1994 | 0.1648 |
| 0.3145 | 2.12 | 2200 | 0.1940 | 0.1682 |
| 0.3186 | 2.22 | 2300 | 0.1879 | 0.1571 |
| 0.3058 | 2.32 | 2400 | 0.1975 | 0.1678 |
| 0.3096 | 2.41 | 2500 | 0.1877 | 0.1589 |
| 0.2964 | 2.51 | 2600 | 0.1862 | 0.1568 |
| 0.3068 | 2.61 | 2700 | 0.1809 | 0.1588 |
| 0.3036 | 2.7 | 2800 | 0.1769 | 0.1573 |
| 0.3084 | 2.8 | 2900 | 0.1836 | 0.1524 |
| 0.3109 | 2.9 | 3000 | 0.1807 | 0.1519 |
| 0.2969 | 2.99 | 3100 | 0.1851 | 0.1516 |
| 0.2698 | 3.09 | 3200 | 0.1737 | 0.1490 |
| 0.2703 | 3.19 | 3300 | 0.1759 | 0.1457 |
| 0.2759 | 3.28 | 3400 | 0.1778 | 0.1471 |
| 0.2728 | 3.38 | 3500 | 0.1717 | 0.1462 |
| 0.2398 | 3.47 | 3600 | 0.1767 | 0.1451 |
| 0.256 | 3.57 | 3700 | 0.1742 | 0.1410 |
| 0.2712 | 3.67 | 3800 | 0.1674 | 0.1414 |
| 0.2648 | 3.76 | 3900 | 0.1717 | 0.1423 |
| 0.2576 | 3.86 | 4000 | 0.1672 | 0.1403 |
| 0.2504 | 3.96 | 4100 | 0.1683 | 0.1381 |
| 0.2406 | 4.05 | 4200 | 0.1685 | 0.1399 |
| 0.2403 | 4.15 | 4300 | 0.1656 | 0.1381 |
| 0.2233 | 4.25 | 4400 | 0.1687 | 0.1371 |
| 0.2546 | 4.34 | 4500 | 0.1642 | 0.1377 |
| 0.2431 | 4.44 | 4600 | 0.1655 | 0.1372 |
| 0.2337 | 4.54 | 4700 | 0.1625 | 0.1370 |
| 0.2607 | 4.63 | 4800 | 0.1618 | 0.1363 |
| 0.2292 | 4.73 | 4900 | 0.1622 | 0.1366 |
| 0.2232 | 4.83 | 5000 | 0.1626 | 0.1365 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
lgris/wavlm-large-CORAA-pt-cv7
|
lgris
| 2022-02-10T23:16:09Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wavlm",
"automatic-speech-recognition",
"mozilla-foundation/common_voice_7_0",
"generated_from_trainer",
"pt",
"dataset:mozilla-foundation/common_voice_7_0",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
language:
- pt
license: apache-2.0
tags:
- automatic-speech-recognition
- mozilla-foundation/common_voice_7_0
- generated_from_trainer
- pt
datasets:
- mozilla-foundation/common_voice_7_0
model-index:
- name: wavlm-large-CORAA-pt-cv7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wavlm-large-CORAA-pt-cv7
This model is a fine-tuned version of [lgris/WavLM-large-CORAA-pt](https://huggingface.co/lgris/WavLM-large-CORAA-pt) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2546
- Wer: 0.2261
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6029 | 0.13 | 100 | 0.3679 | 0.3347 |
| 0.5297 | 0.26 | 200 | 0.3516 | 0.3227 |
| 0.5134 | 0.39 | 300 | 0.3327 | 0.3167 |
| 0.4941 | 0.52 | 400 | 0.3281 | 0.3122 |
| 0.4816 | 0.65 | 500 | 0.3154 | 0.3102 |
| 0.4649 | 0.78 | 600 | 0.3199 | 0.3058 |
| 0.461 | 0.91 | 700 | 0.3047 | 0.2974 |
| 0.4613 | 1.04 | 800 | 0.3006 | 0.2900 |
| 0.4198 | 1.17 | 900 | 0.2951 | 0.2891 |
| 0.3864 | 1.3 | 1000 | 0.2989 | 0.2862 |
| 0.3963 | 1.43 | 1100 | 0.2932 | 0.2830 |
| 0.3953 | 1.56 | 1200 | 0.2936 | 0.2829 |
| 0.3962 | 1.69 | 1300 | 0.2952 | 0.2773 |
| 0.3811 | 1.82 | 1400 | 0.2915 | 0.2748 |
| 0.3736 | 1.95 | 1500 | 0.2839 | 0.2684 |
| 0.3507 | 2.08 | 1600 | 0.2914 | 0.2678 |
| 0.3277 | 2.21 | 1700 | 0.2895 | 0.2652 |
| 0.3344 | 2.34 | 1800 | 0.2843 | 0.2673 |
| 0.335 | 2.47 | 1900 | 0.2821 | 0.2635 |
| 0.3559 | 2.6 | 2000 | 0.2830 | 0.2599 |
| 0.3254 | 2.73 | 2100 | 0.2711 | 0.2577 |
| 0.3263 | 2.86 | 2200 | 0.2685 | 0.2546 |
| 0.3266 | 2.99 | 2300 | 0.2679 | 0.2521 |
| 0.3066 | 3.12 | 2400 | 0.2727 | 0.2526 |
| 0.2998 | 3.25 | 2500 | 0.2648 | 0.2537 |
| 0.2961 | 3.38 | 2600 | 0.2630 | 0.2519 |
| 0.3046 | 3.51 | 2700 | 0.2684 | 0.2506 |
| 0.3006 | 3.64 | 2800 | 0.2604 | 0.2492 |
| 0.2992 | 3.77 | 2900 | 0.2682 | 0.2508 |
| 0.2775 | 3.9 | 3000 | 0.2732 | 0.2440 |
| 0.2903 | 4.03 | 3100 | 0.2659 | 0.2427 |
| 0.2535 | 4.16 | 3200 | 0.2650 | 0.2433 |
| 0.2714 | 4.29 | 3300 | 0.2588 | 0.2394 |
| 0.2636 | 4.42 | 3400 | 0.2652 | 0.2434 |
| 0.2647 | 4.55 | 3500 | 0.2624 | 0.2371 |
| 0.2796 | 4.67 | 3600 | 0.2611 | 0.2373 |
| 0.2644 | 4.8 | 3700 | 0.2604 | 0.2341 |
| 0.2657 | 4.93 | 3800 | 0.2567 | 0.2331 |
| 0.2423 | 5.06 | 3900 | 0.2594 | 0.2322 |
| 0.2556 | 5.19 | 4000 | 0.2587 | 0.2323 |
| 0.2327 | 5.32 | 4100 | 0.2639 | 0.2299 |
| 0.2613 | 5.45 | 4200 | 0.2569 | 0.2310 |
| 0.2382 | 5.58 | 4300 | 0.2585 | 0.2298 |
| 0.2404 | 5.71 | 4400 | 0.2543 | 0.2287 |
| 0.2368 | 5.84 | 4500 | 0.2553 | 0.2286 |
| 0.2514 | 5.97 | 4600 | 0.2517 | 0.2279 |
| 0.2415 | 6.1 | 4700 | 0.2524 | 0.2270 |
| 0.2338 | 6.23 | 4800 | 0.2540 | 0.2265 |
| 0.219 | 6.36 | 4900 | 0.2549 | 0.2263 |
| 0.2428 | 6.49 | 5000 | 0.2546 | 0.2261 |
### Framework versions
- Transformers 4.16.1
- Pytorch 1.10.0+cu111
- Datasets 1.18.2
- Tokenizers 0.11.0
|
emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8
|
emre
| 2022-02-10T22:57:52Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Tr-med-CommonVoice8-Tr-med-CommonVoice8
This model is a fine-tuned version of [emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8](https://huggingface.co/emre/wav2vec2-xls-r-300m-Tr-med-CommonVoice8) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2708
- Wer: 0.5010
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 300
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.0402 | 0.67 | 500 | 0.3354 | 0.5681 |
| 0.7265 | 1.33 | 1000 | 0.3181 | 0.5444 |
| 0.6858 | 2.0 | 1500 | 0.3044 | 0.5322 |
| 0.6537 | 2.66 | 2000 | 0.2911 | 0.5217 |
| 0.6337 | 3.33 | 2500 | 0.2874 | 0.5164 |
| 0.6111 | 3.99 | 3000 | 0.2758 | 0.5059 |
| 0.5815 | 4.66 | 3500 | 0.2708 | 0.5010 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.1
- Tokenizers 0.10.3
|
emre/wav2vec2-xls-r-300m-Turkish-Tr-med
|
emre
| 2022-02-10T22:56:56Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Turkish-Tr-med
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-med
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4727
- Wer: 0.4677
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8093 | 4.21 | 400 | 2.7831 | 1.0 |
| 0.9881 | 8.42 | 800 | 0.5088 | 0.6681 |
| 0.3519 | 12.63 | 1200 | 0.4496 | 0.6007 |
| 0.2436 | 16.84 | 1600 | 0.4993 | 0.5654 |
| 0.1874 | 21.05 | 2000 | 0.4793 | 0.5530 |
| 0.1561 | 25.26 | 2400 | 0.5187 | 0.5589 |
| 0.1336 | 29.47 | 2800 | 0.5135 | 0.5311 |
| 0.1163 | 33.68 | 3200 | 0.4960 | 0.5143 |
| 0.1056 | 37.89 | 3600 | 0.4795 | 0.5045 |
| 0.0959 | 42.11 | 4000 | 0.4883 | 0.4987 |
| 0.0819 | 46.32 | 4400 | 0.4799 | 0.4903 |
| 0.0756 | 50.53 | 4800 | 0.4822 | 0.4831 |
| 0.0692 | 54.74 | 5200 | 0.4621 | 0.4762 |
| 0.062 | 58.95 | 5600 | 0.4727 | 0.4677 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
emre/wav2vec2-xls-r-300m-Turkish-Tr-small
|
emre
| 2022-02-10T22:55:52Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-xls-r-300m-Turkish-Tr-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-Turkish-Tr-small
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4375
- Wer: 0.5050
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.8735 | 4.21 | 400 | 2.8173 | 1.0002 |
| 1.0073 | 8.42 | 800 | 0.4981 | 0.6717 |
| 0.3395 | 12.63 | 1200 | 0.4470 | 0.5866 |
| 0.2254 | 16.84 | 1600 | 0.4349 | 0.5491 |
| 0.1648 | 21.05 | 2000 | 0.4454 | 0.5284 |
| 0.1325 | 25.26 | 2400 | 0.4552 | 0.5131 |
| 0.1102 | 29.47 | 2800 | 0.4375 | 0.5050 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
emre/wav2vec2-large-xlsr-53-W2V2-TR-MED
|
emre
| 2022-02-10T22:55:21Z | 14 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"robust-speech-event",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
- robust-speech-event
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xlsr-53-W2V2-TR-MED
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53-W2V2-TR-MED
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4467
- Wer: 0.4598
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 60
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.1343 | 4.21 | 400 | 2.3674 | 1.0372 |
| 0.8075 | 8.42 | 800 | 0.4583 | 0.6308 |
| 0.3209 | 12.63 | 1200 | 0.4291 | 0.5531 |
| 0.2273 | 16.84 | 1600 | 0.4348 | 0.5378 |
| 0.1764 | 21.05 | 2000 | 0.4550 | 0.5326 |
| 0.148 | 25.26 | 2400 | 0.4839 | 0.5319 |
| 0.1268 | 29.47 | 2800 | 0.4515 | 0.5070 |
| 0.1113 | 33.68 | 3200 | 0.4590 | 0.4930 |
| 0.1025 | 37.89 | 3600 | 0.4546 | 0.4888 |
| 0.0922 | 42.11 | 4000 | 0.4782 | 0.4852 |
| 0.082 | 46.32 | 4400 | 0.4605 | 0.4752 |
| 0.0751 | 50.53 | 4800 | 0.4358 | 0.4689 |
| 0.0699 | 54.74 | 5200 | 0.4359 | 0.4629 |
| 0.0633 | 58.95 | 5600 | 0.4467 | 0.4598 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.14.0
- Tokenizers 0.10.3
|
ibombonato/vit-age-classifier
|
ibombonato
| 2022-02-10T22:06:51Z | 76 | 6 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: vit-age-classifier
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.8364999890327454
---
# vit-age-classifier
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
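## Example usage
A minimal inference sketch, assuming the checkpoint works with the standard `transformers` image-classification pipeline (the image path is hypothetical):
```python
from transformers import pipeline

# Sketch: classify the age group shown in a portrait photo.
classifier = pipeline("image-classification", model="ibombonato/vit-age-classifier")
print(classifier("portrait.jpg"))  # hypothetical local image file
```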
|
squish/BertHarmon
|
squish
| 2022-02-10T21:28:51Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
thumbnail: "https://en.memesrandom.com/wp-content/uploads/2020/11/juega-ajedrez.jpeg"
widget:
- text: "rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 White <MOVE_SEP> [MASK]"
- example_title: Empty Board
- text: "6Q1/5k2/3P4/1R3p2/P4P2/7Q/6RK/8 b - - 2 60 Black <MOVE_SEP> [MASK]"
- example_title: Late Game Board
---
# BertHarmon
Research done at Johns Hopkins University by Michael DeLeo
Contact: mdeleo2@jh.edu

## Introduction
BertHarmon is a BERT model trained for the task of chess move prediction.

## Sample Usage
```python
from transformers import pipeline
task = pipeline('fill-mask', model='squish/BertHarmon')
task("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 White <MOVE_SEP> [MASK]")
```
The base string consists of the FEN position followed by the player color, a move separator (`<MOVE_SEP>`), and finally the [MASK] token. The mask token stands for the chess move, in algebraic notation, to be played given the current board state in FEN notation.
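The fill-mask pipeline returns candidates ranked by score, so the top suggestion can be read off directly. A small follow-up sketch (the `token_str` field holds the predicted move string):
```python
from transformers import pipeline

# Sketch: keep only the highest-scoring candidate move for the starting position.
task = pipeline('fill-mask', model='squish/BertHarmon')
predictions = task("rnbqkbnr/pppppppp/8/8/8/8/PPPPPPPP/RNBQKBNR w KQkq - 0 1 White <MOVE_SEP> [MASK]")
best_move = predictions[0]["token_str"]  # move in algebraic notation
print(best_move)
```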
## Links
[Github](https://github.com/deleomike/NLP-Chess)
[HuggingFace](https://huggingface.co/squish/BertHarmon)
|
FuriouslyAsleep/markuplm-large-finetuned-qa
|
FuriouslyAsleep
| 2022-02-10T20:30:55Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"markuplm",
"question-answering",
"arxiv:2110.08518",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:04Z |
# MarkupLM Large fine-tuned on WebSRC to allow Question Answering.
This model is adapted from Microsoft's MarkupLM. This fine-tuned model is the result of partially following the instructions in the MarkupLM git repo (with adjustments described further below under the Fine-tuning args section). This version is not endorsed by Microsoft.
Test the question answering out in the [Markup QA space here](https://huggingface.co/spaces/FuriouslyAsleep/markupQAdemo)
\---------------------------------------------------------------------------------
**Fine-tuned Multimodal (text + markup language) pre-training for [Document AI](https://www.microsoft.com/en-us/research/project/document-ai/)**
## Introduction (From Microsoft MarkupLM Large Model Card)
MarkupLM is a simple but effective multi-modal pre-training method of text and markup language for visually-rich document understanding and information extraction tasks, such as webpage QA and webpage information extraction. MarkupLM achieves the SOTA results on multiple datasets. For more details, please refer to our paper:
[MarkupLM: Pre-training of Text and Markup Language for Visually-rich Document Understanding](https://arxiv.org/abs/2110.08518) Junlong Li, Yiheng Xu, Lei Cui, Furu Wei
\---------------------------------------------------------------------------------
Fine-tuning args:
`--per_gpu_train_batch_size 4 --warmup_ratio 0.1 --num_train_epochs 4`
## Training was performed on only a small subset of the WebSRC dataset
- The number of total websites is 60
- The train websites list is ['ga09']
- The test websites list is []
- The dev websites list is ['ga12', 'ph04', 'au08', 'ga10', 'au01', 'bo17', 'mo02', 'jo11', 'sp09', 'sp10', 'ph03', 'ph01', 'un09', 'sp14', 'jo03', 'sp07', 'un07', 'bo07', 'mo04', 'bo09', 'jo10', 'un12', 're02', 'bo01', 'ca01', 'sp15', 'au12', 'un03', 're03', 'jo13', 'ph02', 'un10', 'au09', 'au10', 'un02', 'mo07', 'sp13', 'bo08', 'sp03', 're05', 'sp06', 'ca02', 'sp02', 'sp01', 'au03', 'sp11', 'mo06', 'bo10', 'un11', 'un06', 'ga01', 'un04', 'ph05', 'au11', 'sp12', 'jo05', 'sp04', 'jo12', 'sp08']
- The number of processed websites is 60
\---------------------------------------------------------------------------------
The hosted inference test here may not work. Use the markuplm branch of transformers from [NielsRogge's transformers fork](https://github.com/NielsRogge/transformers/tree/modeling_markuplm).
After installing from there, try the following model and tokenizer assignments (consider loading the tags dict from a file):
```python
# MarkupLMForQuestionAnswering and MarkupLMTokenizer are provided by the markuplm branch referenced above.
from transformers import MarkupLMForQuestionAnswering, MarkupLMTokenizer

model = MarkupLMForQuestionAnswering.from_pretrained("FuriouslyAsleep/markuplm-large-finetuned-qa")

tokenizer = MarkupLMTokenizer(
vocab_file="vocab.json",
merges_file="merges.txt",
tags_dict= {"a": 0, "abbr": 1, "acronym": 2, "address": 3, "altGlyph": 4, "altGlyphDef": 5, "altGlyphItem": 6, "animate": 7, "animateColor": 8, "animateMotion": 9, "animateTransform": 10, "applet": 11, "area": 12, "article": 13, "aside": 14, "audio": 15, "b": 16, "base": 17, "basefont": 18, "bdi": 19, "bdo": 20, "bgsound": 21, "big": 22, "blink": 23, "blockquote": 24, "body": 25, "br": 26, "button": 27, "canvas": 28, "caption": 29, "center": 30, "circle": 31, "cite": 32, "clipPath": 33, "code": 34, "col": 35, "colgroup": 36, "color-profile": 37, "content": 38, "cursor": 39, "data": 40, "datalist": 41, "dd": 42, "defs": 43, "del": 44, "desc": 45, "details": 46, "dfn": 47, "dialog": 48, "dir": 49, "div": 50, "dl": 51, "dt": 52, "ellipse": 53, "em": 54, "embed": 55, "feBlend": 56, "feColorMatrix": 57, "feComponentTransfer": 58, "feComposite": 59, "feConvolveMatrix": 60, "feDiffuseLighting": 61, "feDisplacementMap": 62, "feDistantLight": 63, "feFlood": 64, "feFuncA": 65, "feFuncB": 66, "feFuncG": 67, "feFuncR": 68, "feGaussianBlur": 69, "feImage": 70, "feMerge": 71, "feMergeNode": 72, "feMorphology": 73, "feOffset": 74, "fePointLight": 75, "feSpecularLighting": 76, "feSpotLight": 77, "feTile": 78, "feTurbulence": 79, "fieldset": 80, "figcaption": 81, "figure": 82, "filter": 83, "font-face-format": 84, "font-face-name": 85, "font-face-src": 86, "font-face-uri": 87, "font-face": 88, "font": 89, "footer": 90, "foreignObject": 91, "form": 92, "frame": 93, "frameset": 94, "g": 95, "glyph": 96, "glyphRef": 97, "h1": 98, "h2": 99, "h3": 100, "h4": 101, "h5": 102, "h6": 103, "head": 104, "header": 105, "hgroup": 106, "hkern": 107, "hr": 108, "html": 109, "i": 110, "iframe": 111, "image": 112, "img": 113, "input": 114, "ins": 115, "kbd": 116, "keygen": 117, "label": 118, "legend": 119, "li": 120, "line": 121, "linearGradient": 122, "link": 123, "main": 124, "map": 125, "mark": 126, "marker": 127, "marquee": 128, "mask": 129, "math": 130, "menu": 131, "menuitem": 132, "meta": 133, "metadata": 134, "meter": 135, "missing-glyph": 136, "mpath": 137, "nav": 138, "nobr": 139, "noembed": 140, "noframes": 141, "noscript": 142, "object": 143, "ol": 144, "optgroup": 145, "option": 146, "output": 147, "p": 148, "param": 149, "path": 150, "pattern": 151, "picture": 152, "plaintext": 153, "polygon": 154, "polyline": 155, "portal": 156, "pre": 157, "progress": 158, "q": 159, "radialGradient": 160, "rb": 161, "rect": 162, "rp": 163, "rt": 164, "rtc": 165, "ruby": 166, "s": 167, "samp": 168, "script": 169, "section": 170, "select": 171, "set": 172, "shadow": 173, "slot": 174, "small": 175, "source": 176, "spacer": 177, "span": 178, "stop": 179, "strike": 180, "strong": 181, "style": 182, "sub": 183, "summary": 184, "sup": 185, "svg": 186, "switch": 187, "symbol": 188, "table": 189, "tbody": 190, "td": 191, "template": 192, "text": 193, "textPath": 194, "textarea": 195, "tfoot": 196, "th": 197, "thead": 198, "time": 199, "title": 200, "tr": 201, "track": 202, "tref": 203, "tspan": 204, "tt": 205, "u": 206, "ul": 207, "use": 208, "var": 209, "video": 210, "view": 211, "vkern": 212, "wbr": 213, "xmp": 214},
add_prefix_space=True,
)
```
Go to [https://github.com/uwts/ProjectRisk](https://github.com/uwts/ProjectRisk) for a sample script.
|
huggingtweets/realsophiarobot
|
huggingtweets
| 2022-02-10T20:03:13Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/realsophiarobot/1644523350998/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1489664916508524545/ePAeH8lT_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Sophia the Robot</div>
<div style="text-align: center; font-size: 14px;">@realsophiarobot</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Sophia the Robot.
| Data | Sophia the Robot |
| --- | --- |
| Tweets downloaded | 2341 |
| Retweets | 313 |
| Short tweets | 99 |
| Tweets kept | 1929 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/rfk5yso3/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @realsophiarobot's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/32n5oiz0) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/32n5oiz0/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/realsophiarobot')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/jpbrammer
|
huggingtweets
| 2022-02-10T15:50:29Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/jpbrammer/1644508224660/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1190049285842329600/qwCL5mdU_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">JP</div>
<div style="text-align: center; font-size: 14px;">@jpbrammer</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from JP.
| Data | JP |
| --- | --- |
| Tweets downloaded | 3206 |
| Retweets | 938 |
| Short tweets | 345 |
| Tweets kept | 1923 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/13lk57y6/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @jpbrammer's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/3umvc7qg) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/3umvc7qg/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/jpbrammer')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
satyaalmasian/temporal_tagger_German_GELECTRA
|
satyaalmasian
| 2022-02-10T15:23:51Z | 61 | 1 |
transformers
|
[
"transformers",
"pytorch",
"electra",
"token-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
# BERT-based temporal tagger
Token classifier for temporal tagging of plain text using the German GELECTRA model.
# Model description
GELECTRA is a transformer (ELECTRA) model pretrained on a large corpus of German data in a self-supervised fashion. We use GELECTRA for token classification to tag the tokens in text with the following classes (the tags follow the English TIMEX3 format):
```
O -- outside of a tag
I-TIME -- inside tag of time
B-TIME -- beginning tag of time
I-DATE -- inside tag of date
B-DATE -- beginning tag of date
I-DURATION -- inside tag of duration
B-DURATION -- beginning tag of duration
I-SET -- inside tag of the set
B-SET -- beginning tag of the set
```
# Intended uses & limitations
This model is best used together with the code from the [repository](https://github.com/satya77/Transformer_Temporal_Tagger). Especially for inference, the direct output can be noisy and hard to decipher; the repository provides alignment functions and voting strategies for the final output. The repository examples use the English models, but the German model can be used in the same way.
# How to use
You can load the model as follows:
```python
from transformers import AutoTokenizer, BertForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("satyaalmasian/temporal_tagger_German_GELECTRA", use_fast=False)
model = BertForTokenClassification.from_pretrained("satyaalmasian/temporal_tagger_German_GELECTRA")
```
For inference, use:
```python
processed_text = tokenizer(input_text, return_tensors="pt")
result = model(**processed_text)
classification = result[0]  # per-token logits over the tag classes
```
For an example with post-processing, refer to the [repository](https://github.com/satya77/Transformer_Temporal_Tagger).
We provide a `merge_tokens` function in the repository to decipher the output.
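If you do not use the repository's post-processing, a rough, generic way to inspect the raw predictions is to take the argmax over the logits and map the ids back to tag names via the model config. This is only a sketch reusing the `model`, `tokenizer`, and `processed_text` defined above, not the repository's `merge_tokens` function, and it assumes the checkpoint's `id2label` mapping carries the TIMEX3 tag names:
```python
import torch

with torch.no_grad():
    logits = model(**processed_text)[0]          # (batch, seq_len, num_labels)

pred_ids = logits.argmax(dim=-1)[0]              # most likely tag id per sub-token
tokens = tokenizer.convert_ids_to_tokens(processed_text["input_ids"][0].tolist())
for token, tag_id in zip(tokens, pred_ids.tolist()):
    print(token, model.config.id2label[tag_id])  # id2label assumed to hold the BIO tags
```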
To further fine-tune, use the `Trainer` from Hugging Face. An example of a similar fine-tuning can be found [here](https://github.com/satya77/Transformer_Temporal_Tagger/blob/master/run_token_classifier.py).
# Training data
For pre-training, we use a large corpus of news articles automatically annotated with HeidelTime.
We use two data sources for fine-tuning:
[Tempeval-3](https://www.cs.york.ac.uk/semeval-2013/task1/index.php%3Fid=data.html), automatically translated to German, and the
[KRAUTS dataset](https://github.com/JannikStroetgen/KRAUTS).
# Training procedure
The model is trained from a publicly available checkpoint on the Hugging Face Hub (`deepset/gelectra-large`), with a batch size of 192. We use a learning rate of 1e-07 with an Adam optimizer and linear weight decay for pre-training.
For fine-tuning, we use a batch size of 16 and a learning rate of 5e-05 with an Adam optimizer and linear weight decay.
We fine-tune with 3 different random seeds; this version of the model uses seed 7.
For training, we use 2 NVIDIA A100 GPUs with 40GB of memory.
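For reference, a minimal fine-tuning setup along these lines might look as follows. This is only a sketch: `train_dataset` and `eval_dataset` are hypothetical placeholders for tokenized token-classification datasets aligned with the model's tag ids, `output_dir` and `num_train_epochs` are assumptions, and the exact script used is the one linked in the repository above.
```python
from transformers import Trainer, TrainingArguments

training_args = TrainingArguments(
    output_dir="gelectra-temporal-tagger-finetuned",  # assumption, not from the card
    learning_rate=5e-5,               # fine-tuning learning rate stated above
    per_device_train_batch_size=16,   # fine-tuning batch size stated above
    num_train_epochs=3,               # assumption; not stated in this card
    seed=7,                           # seed of this released checkpoint
)
trainer = Trainer(
    model=model,                      # the model loaded in "How to use" above
    args=training_args,
    train_dataset=train_dataset,      # hypothetical placeholder dataset
    eval_dataset=eval_dataset,        # hypothetical placeholder dataset
    tokenizer=tokenizer,
)
trainer.train()
```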
|
ajaiswal1008/wav2vec2-large-xls-r-300m-hi-colab_new
|
ajaiswal1008
| 2022-02-10T15:11:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2-large-xls-r-300m-hi-colab_new
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-hi-colab_new
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
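The card itself leaves usage undocumented, but given the `automatic-speech-recognition` pipeline tag, the checkpoint can presumably be used for Hindi speech recognition as sketched below. The audio file name is a hypothetical placeholder, and 16 kHz mono input is assumed:
```python
from transformers import pipeline

# "sample_hi.wav" is a placeholder for a 16 kHz mono Hindi recording.
asr = pipeline("automatic-speech-recognition",
               model="ajaiswal1008/wav2vec2-large-xls-r-300m-hi-colab_new")
print(asr("sample_hi.wav")["text"])
```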
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the `TrainingArguments` sketch after this list):
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
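As a rough illustration, the list above maps onto a `TrainingArguments` configuration approximately as follows. This is a sketch only: `output_dir` is an assumption, the data pipeline and collator are omitted, and Adam betas/epsilon are the library defaults already matching the values above.
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-large-xls-r-300m-hi-colab_new",  # assumption
    learning_rate=3e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    gradient_accumulation_steps=2,   # gives the effective batch size of 32
    lr_scheduler_type="linear",
    warmup_steps=500,
    num_train_epochs=30,
    fp16=True,                       # "Native AMP" mixed precision
)
```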
### Training results
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.10.3
|
am-shb/bert-base-multilingual-uncased-pretrained
|
am-shb
| 2022-02-10T14:49:27Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-uncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-uncased
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2198
## Model description
More information needed
## Intended uses & limitations
More information needed
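Usage is not documented in this card, but given the `fill-mask` pipeline tag, the checkpoint can presumably be queried like any masked language model. A minimal sketch (the example sentence is arbitrary):
```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="am-shb/bert-base-multilingual-uncased-pretrained")
# [MASK] is the standard BERT mask token; predictions come back ranked by score.
for prediction in fill_mask("paris is the capital of [MASK]."):
    print(prediction["token_str"], round(prediction["score"], 3))
```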
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1337
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.11.2
- Pytorch 1.10.0
- Datasets 1.8.0
- Tokenizers 0.10.3
|
SetFit/deberta-v3-large__sst2__train-8-9
|
SetFit
| 2022-02-10T10:10:14Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-02T23:29:04Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: deberta-v3-large__sst2__train-8-9
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-v3-large__sst2__train-8-9
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6013
- Accuracy: 0.7210
## Model description
More information needed
## Intended uses & limitations
More information needed
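Usage is not documented in this card, but given the `text-classification` pipeline tag and the SST-2 setup, the checkpoint can presumably be used as sketched below. The example sentences are arbitrary, and the label names returned depend on the checkpoint's config:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="SetFit/deberta-v3-large__sst2__train-8-9")
print(classifier(["a gripping, beautifully shot film",
                  "a tedious, overlong mess"]))
```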
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6757 | 1.0 | 3 | 0.7810 | 0.25 |
| 0.6506 | 2.0 | 6 | 0.8102 | 0.25 |
| 0.6463 | 3.0 | 9 | 0.8313 | 0.25 |
| 0.5813 | 4.0 | 12 | 0.8858 | 0.25 |
| 0.4635 | 5.0 | 15 | 0.8220 | 0.25 |
| 0.3992 | 6.0 | 18 | 0.7226 | 0.5 |
| 0.3281 | 7.0 | 21 | 0.6707 | 0.75 |
| 0.2276 | 8.0 | 24 | 0.7515 | 0.75 |
| 0.1674 | 9.0 | 27 | 0.6971 | 0.75 |
| 0.0873 | 10.0 | 30 | 0.5419 | 0.75 |
| 0.0525 | 11.0 | 33 | 0.5025 | 0.75 |
| 0.0286 | 12.0 | 36 | 0.5229 | 0.75 |
| 0.0149 | 13.0 | 39 | 0.5660 | 0.75 |
| 0.0082 | 14.0 | 42 | 0.6954 | 0.75 |
| 0.006 | 15.0 | 45 | 0.8649 | 0.75 |
| 0.0043 | 16.0 | 48 | 1.0011 | 0.75 |
| 0.0035 | 17.0 | 51 | 1.0909 | 0.75 |
| 0.0021 | 18.0 | 54 | 1.1615 | 0.75 |
| 0.0017 | 19.0 | 57 | 1.2147 | 0.75 |
| 0.0013 | 20.0 | 60 | 1.2585 | 0.75 |
| 0.0016 | 21.0 | 63 | 1.2917 | 0.75 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|