Dataset columns (each record below lists its values in this order):

| Column | Dtype | Length / range |
|---|---|---|
| repo_id | string | 4–110 chars |
| author | string (nullable) | 2–27 chars |
| model_type | string (nullable) | 2–29 chars |
| files_per_repo | int64 | 2–15.4k |
| downloads_30d | int64 | 0–19.9M |
| library | string (nullable) | 2–37 chars |
| likes | int64 | 0–4.34k |
| pipeline | string (nullable) | 5–30 chars |
| pytorch | bool | 2 classes |
| tensorflow | bool | 2 classes |
| jax | bool | 2 classes |
| license | string | 2–30 chars |
| languages | string (nullable) | 4–1.63k chars |
| datasets | string (nullable) | 2–2.58k chars |
| co2 | string | 29 classes |
| prs_count | int64 | 0–125 |
| prs_open | int64 | 0–120 |
| prs_merged | int64 | 0–15 |
| prs_closed | int64 | 0–28 |
| discussions_count | int64 | 0–218 |
| discussions_open | int64 | 0–148 |
| discussions_closed | int64 | 0–70 |
| tags | string | 2–513 chars |
| has_model_index | bool | 2 classes |
| has_metadata | bool | 1 class |
| has_text | bool | 1 class |
| text_length | int64 | 401–598k |
| is_nc | bool | 1 class |
| readme | string | 0–598k chars |
| hash | string | 32 chars |
minsloth/finetuning-sentiment-model
|
minsloth
|
distilbert
| 13 | 4 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,042 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2309
- Accuracy: 0.9319
- F1: 0.9323
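As a quick way to try the checkpoint, a minimal sketch assuming the standard `transformers` text-classification pipeline applies to this repository:
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="minsloth/finetuning-sentiment-model")
# Label names depend on the exported config (often LABEL_0 / LABEL_1 for auto-generated cards).
print(classifier("A surprisingly touching film with a great cast."))
```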
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
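For reference, these settings correspond roughly to the following `TrainingArguments`; this is a sketch for illustration, and the author's exact Trainer setup is not part of the card:
```python
from transformers import TrainingArguments

# Adam betas=(0.9, 0.999) and epsilon=1e-08 are the transformers defaults.
training_args = TrainingArguments(
    output_dir="finetuning-sentiment-model",  # hypothetical output directory
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
)
```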
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
352f645b31909f2bddfd083a7cc8ab49
|
jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-0_female-10_s169
|
jonatasgrosman
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'en']
| false | true | true | 499 | false |
# exp_w2v2r_en_vp-100k_gender_male-0_female-10_s169
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
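A minimal transcription sketch, assuming the standard `transformers` ASR pipeline applies; the audio path is a placeholder and the file must be sampled at 16 kHz:
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jonatasgrosman/exp_w2v2r_en_vp-100k_gender_male-0_female-10_s169",
)
# "sample.wav" is a placeholder; resample your audio to 16 kHz before transcribing.
print(asr("sample.wav")["text"])
```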
|
3542d36ed106be0098e2d51410e9721e
|
studio-ousia/luke-large-finetuned-conll-2003
|
studio-ousia
|
luke
| 10 | 3,650 |
transformers
| 2 | null | true | false | false |
apache-2.0
| null | null | null | 2 | 0 | 1 | 1 | 0 | 0 | 0 |
[]
| false | true | true | 6,198 | false |
# Model Card for luke-large-finetuned-conll-2003
# Model Details
## Model Description
LUKE (Language Understanding with Knowledge-based Embeddings) is a new pretrained contextualized representation of words and entities based on the transformer architecture.
- **Developed by:** Studio Ousia
- **Shared by [Optional]:** More information needed
- **Model type:** EntitySpanClassification
- **Language(s) (NLP):** More information needed
- **License:** Apache-2.0
- **Related Models:** [Luke-large](https://huggingface.co/studio-ousia/luke-large?text=Paris+is+the+%3Cmask%3E+of+France.)
- **Parent Model:** Luke
- **Resources for more information:**
- [GitHub Repo](https://github.com/studio-ousia/luke)
- [Associated Paper](https://arxiv.org/abs/2010.01057)
# Uses
## Direct Use
More information needed
## Downstream Use [Optional]
This model can also be used for tasks such as named entity recognition, cloze-style question answering, fine-grained entity typing, and extractive question answering.
## Out-of-Scope Use
The model should not be used to intentionally create hostile or alienating environments for people.
# Bias, Risks, and Limitations
Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Predictions generated by the model may include disturbing and harmful stereotypes across protected classes; identity characteristics; and sensitive, social, and occupational groups.
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Training Details
## Training Data
More information needed
## Training Procedure
### Preprocessing
More information needed
### Speeds, Sizes, Times
More information needed
# Evaluation
## Testing Data, Factors & Metrics
### Testing Data
More information needed
### Factors
### Metrics
LUKE achieves state-of-the-art results on five popular NLP benchmarks including
* **[SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/)** (extractive
question answering),
* **[CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/)** (named entity
recognition), **[ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/)**
(cloze-style question answering),
* **[TACRED](https://nlp.stanford.edu/projects/tacred/)** (relation
classification), and
* **[Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html)** (entity typing).
## Results
The experimental results are provided as follows:
| Task | Dataset | Metric | LUKE-large | luke-base | Previous SOTA |
| ------------------------------ | ---------------------------------------------------------------------------- | ------ | ----------------- | --------- | ------------------------------------------------------------------------- |
| Extractive Question Answering | [SQuAD v1.1](https://rajpurkar.github.io/SQuAD-explorer/) | EM/F1 | **90.2**/**95.4** | 86.1/92.3 | 89.9/95.1 ([Yang et al., 2019](https://arxiv.org/abs/1906.08237)) |
| Named Entity Recognition | [CoNLL-2003](https://www.clips.uantwerpen.be/conll2003/ner/) | F1 | **94.3** | 93.3 | 93.5 ([Baevski et al., 2019](https://arxiv.org/abs/1903.07785)) |
| Cloze-style Question Answering | [ReCoRD](https://sheng-z.github.io/ReCoRD-explorer/) | EM/F1 | **90.6**/**91.2** | - | 83.1/83.7 ([Li et al., 2019](https://www.aclweb.org/anthology/D19-6011/)) |
| Relation Classification | [TACRED](https://nlp.stanford.edu/projects/tacred/) | F1 | **72.7** | - | 72.0 ([Wang et al. , 2020](https://arxiv.org/abs/2002.01808)) |
| Fine-grained Entity Typing | [Open Entity](https://www.cs.utexas.edu/~eunsol/html_pages/open_entity.html) | F1 | **78.2** | - | 77.6 ([Wang et al. , 2020](https://arxiv.org/abs/2002.01808)) |
Please check the [Github repository](https://github.com/studio-ousia/luke) for more details and updates.
# Model Examination
More information needed
# Environmental Impact
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** More information needed
- **Hours used:** More information needed
- **Cloud Provider:** More information needed
- **Compute Region:** More information needed
- **Carbon Emitted:** More information needed
# Technical Specifications [optional]
## Model Architecture and Objective
More information needed
## Compute Infrastructure
More information needed
### Hardware
More information needed
### Software
* transformers_version: 4.6.0.dev0
# Citation
**BibTeX:**
```
@inproceedings{yamada2020luke,
title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
booktitle={EMNLP},
year={2020}
}
```
# Glossary [optional]
More information needed
# More Information [optional]
More information needed
# Model Card Authors [optional]
Studio Ousia in collaboration with Ezi Ozoani and the Hugging Face team
# Model Card Contact
More information needed
# How to Get Started with the Model
Use the code below to get started with the model.
<details>
<summary> Click to expand </summary>
```python
from transformers import AutoTokenizer, LukeForEntitySpanClassification
tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
```
</details>
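A minimal inference sketch following the `LukeForEntitySpanClassification` usage documented upstream; the example sentence and the span-enumeration details are illustrative assumptions:
```python
import torch
from transformers import AutoTokenizer, LukeForEntitySpanClassification

tokenizer = AutoTokenizer.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")
model = LukeForEntitySpanClassification.from_pretrained("studio-ousia/luke-large-finetuned-conll-2003")

text = "Beyoncé lives in Los Angeles"

# Enumerate all candidate word-level spans as (char_start, char_end) pairs.
word_starts = [0, 8, 14, 17, 21]
word_ends = [7, 13, 16, 20, 28]
entity_spans = [(s, e) for i, s in enumerate(word_starts) for e in word_ends[i:]]

inputs = tokenizer(text, entity_spans=entity_spans, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Index 0 is assumed to be the non-entity ("O") label, as in the upstream example.
for span, idx in zip(entity_spans, logits.argmax(-1).squeeze(0).tolist()):
    if idx != 0:
        print(text[span[0]:span[1]], model.config.id2label[idx])
```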
|
ae61a8e2d9809e50b07194829fa269c5
|
Kagerage/broken-mirror-style
|
Kagerage
| null | 20 | 3 |
diffusers
| 1 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['text-to-image', 'safetensors', 'stable-diffusion']
| false | true | true | 2,076 | false |
### Broken-Mirror-Style Dreambooth model trained by Kagerage with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
In my experience, the best prompting style is "brkmror cracked and shattered mirror, professional photo, descriptor of location, portrait of person, bokeh", potentially with a negative prompt of "closed eyes"
Sampler DPM++ SDE Karras with at least 25 steps gives slightly better faces from my testing, CFG 7 seems fine, and since this is trained on top of 2.1, the minimum resolution is 768x768.
Of course, feel free to experiment!
(NOTE: This 2.1-based model is FP16, so it may not render outputs properly without xFormers.)
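A minimal `diffusers` sketch, assuming the repository hosts a standard Stable Diffusion 2.1 pipeline; the prompt simply follows the template suggested above with an illustrative location and subject:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "Kagerage/broken-mirror-style", torch_dtype=torch.float16
).to("cuda")

prompt = "brkmror cracked and shattered mirror, professional photo, city street at night, portrait of a woman, bokeh"
image = pipe(
    prompt,
    negative_prompt="closed eyes",
    height=768, width=768,          # minimum resolution for this 2.1-based model
    num_inference_steps=25,
    guidance_scale=7,
).images[0]
image.save("broken_mirror.png")
```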
Sample pictures of this concept:




|
062ad91156825765eb74b9f046b6b98d
|
adrianccy/donut-base-sroie-fine-tuned
|
adrianccy
|
vision-encoder-decoder
| 16 | 1 |
transformers
| 0 | null | true | false | false |
mit
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 981 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-base-sroie-fine-tuned
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base) on the imagefolder dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0
- Datasets 2.7.0
- Tokenizers 0.13.2
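For inference, something like the following should work; this is a sketch that assumes the repository ships the usual `DonutProcessor` files, and both the task start token and the image path are placeholders since the card does not state them:
```python
import torch
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

processor = DonutProcessor.from_pretrained("adrianccy/donut-base-sroie-fine-tuned")
model = VisionEncoderDecoderModel.from_pretrained("adrianccy/donut-base-sroie-fine-tuned")

image = Image.open("receipt.png").convert("RGB")  # placeholder image path
pixel_values = processor(image, return_tensors="pt").pixel_values

# Placeholder task start token; use whatever prompt the training run was configured with.
task_prompt = "<s>"
decoder_input_ids = processor.tokenizer(
    task_prompt, add_special_tokens=False, return_tensors="pt"
).input_ids

with torch.no_grad():
    outputs = model.generate(pixel_values, decoder_input_ids=decoder_input_ids, max_length=512)

print(processor.batch_decode(outputs, skip_special_tokens=True)[0])
```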
|
60a9a642cb3720015f589b13ef347d67
|
lct-rug-2022/edos-2023-baseline-microsoft-deberta-v3-base-label_vector
|
lct-rug-2022
|
deberta-v2
| 11 | 12 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,791 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# edos-2023-baseline-microsoft-deberta-v3-base-label_vector
This model is a fine-tuned version of [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5524
- F1: 0.3162
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 5
- num_epochs: 12
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.1209 | 1.18 | 100 | 1.9990 | 0.0801 |
| 1.7997 | 2.35 | 200 | 1.7293 | 0.1349 |
| 1.5749 | 3.53 | 300 | 1.6080 | 0.2431 |
| 1.3674 | 4.71 | 400 | 1.5411 | 0.2793 |
| 1.2214 | 5.88 | 500 | 1.5285 | 0.2980 |
| 1.0752 | 7.06 | 600 | 1.5165 | 0.3054 |
| 0.9899 | 8.24 | 700 | 1.5210 | 0.3186 |
| 0.8733 | 9.41 | 800 | 1.5385 | 0.3134 |
| 0.8578 | 10.59 | 900 | 1.5524 | 0.3162 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
f8f4d6ec8e337419527b92066e3749a3
|
duja1/franck
|
duja1
| null | 21 | 3 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['text-to-image']
| false | true | true | 541 | false |
### Franck Dreambooth model trained by duja1 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
f123ranck (use that in your prompt)
|
046c60d145f6920755dd2c61389345f1
|
Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically
|
Zayn
|
vision-encoder-decoder
| 11 | 3,600 |
transformers
| 4 |
image-to-text
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 2 | 2 | 0 |
['image-to-text', 'image-captioning']
| false | true | true | 1,261 | false |
This is an image captioning model trained by Zayn.
```python
import torch
from PIL import Image
from transformers import VisionEncoderDecoderModel, ViTFeatureExtractor, AutoTokenizer

model = VisionEncoderDecoderModel.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically")
feature_extractor = ViTFeatureExtractor.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically")
tokenizer = AutoTokenizer.from_pretrained("Zayn/AICVTG_What_if_a_machine_could_create_captions_automatically")

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

max_length = 20
num_beams = 8
gen_kwargs = {"max_length": max_length, "num_beams": num_beams}

def predict_step(image_paths):
    # Load the images and make sure they are RGB.
    images = []
    for image_path in image_paths:
        i_image = Image.open(image_path)
        if i_image.mode != "RGB":
            i_image = i_image.convert(mode="RGB")
        images.append(i_image)

    # Encode the images and generate captions with beam search.
    pixel_values = feature_extractor(images=images, return_tensors="pt").pixel_values
    pixel_values = pixel_values.to(device)
    output_ids = model.generate(pixel_values, **gen_kwargs)
    preds = tokenizer.batch_decode(output_ids, skip_special_tokens=True)
    preds = [pred.strip() for pred in preds]
    return preds

predict_step(['Image URL.jpg'])
```
|
6267cfb35174cc85e58bf2d3213809af
|
StonyBrookNLP/teabreac-preasm-large-drop
|
StonyBrookNLP
|
t5
| 8 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['question-answering, multi-step-reasoning, multi-hop-reasoning']
| false | true | true | 2,612 | false |
# What's this?
This is one of the models reported in the paper ["Teaching Broad Reasoning Skills for Multi-Step QA by Generating Hard Contexts"](https://arxiv.org/abs/2205.12496).
This paper proposes a procedure to synthetically generate a QA dataset, TeaBReaC, for pretraining language models for robust multi-step reasoning. Pretraining plain LMs like Bart, T5 and numerate LMs like NT5, PReasM, POET on TeaBReaC leads to improved downstream performance on several multi-step QA datasets. Please check out the paper for the details.
We release the following models:
- **A:** Base Models finetuned on target datasets: `{base_model}-{target_dataset}`
- **B:** Base models pretrained on TeaBReaC: `teabreac-{base_model}`
- **C:** Base models pretrained on TeaBReaC and then finetuned on target datasets: `teabreac-{base_model}-{target_dataset}`
The `base_model` above can be from: `bart-large`, `t5-large`, `t5-3b`, `nt5-small`, `preasm-large`.
The `target_dataset` above can be from: `drop`, `tatqa`, `iirc-gold`, `iirc-retrieved`, `numglue`.
The **A** models are only released for completeness / reproducibility. In your end application you probably just want to use either **B** or **C**.
# How to use it?
Please check out the details in our [github repository](https://github.com/stonybrooknlp/teabreac), but in a nutshell:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
from digit_tokenization import enable_digit_tokenization # digit_tokenization.py from https://github.com/stonybrooknlp/teabreac
model_name = "StonyBrookNLP/teabreac-preasm-large-drop"
tokenizer = AutoTokenizer.from_pretrained(model_name, use_fast=False) # Fast doesn't work with digit tokenization
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
enable_digit_tokenization(tokenizer)
input_texts = [
"Who scored the first touchdown of the game?\n" +
"... Oakland would get the early lead in the first quarter as quarterback JaMarcus Russell completed a 20-yard touchdown pass to rookie wide receiver Chaz Schilens..."
# Note: some models have slightly different qn/ctxt format. See the github repo.
]
input_ids = tokenizer(
input_texts, return_tensors="pt",
truncation=True, max_length=800,
add_special_tokens=True, padding=True,
)["input_ids"]
generated_ids = model.generate(input_ids, min_length=1, max_length=50)
generated_predictions = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)
generated_predictions = [
tokenizer.fix_decoded_text(generated_prediction) for generated_prediction in generated_predictions
]
# => ["Chaz Schilens"]
```
|
278a160ab0a3e2ea8229a1db23bc50a7
|
mujeensung/albert-base-v2_mnli_bc
|
mujeensung
|
albert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,368 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert-base-v2_mnli_bc
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2952
- Accuracy: 0.9399
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.2159 | 1.0 | 16363 | 0.2268 | 0.9248 |
| 0.1817 | 2.0 | 32726 | 0.2335 | 0.9347 |
| 0.0863 | 3.0 | 49089 | 0.3014 | 0.9401 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.1+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
74fca6012f5507199585466140794944
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_stsb_256
|
gokuls
|
distilbert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,066 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_stsb_256
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1268
- Pearson: nan
- Spearmanr: nan
- Combined Score: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:|
| 3.1622 | 1.0 | 23 | 1.7502 | -0.0248 | -0.0193 | -0.0221 |
| 1.8579 | 2.0 | 46 | 1.3087 | -0.0465 | -0.0476 | -0.0470 |
| 1.3508 | 3.0 | 69 | 1.1268 | nan | nan | nan |
| 1.1078 | 4.0 | 92 | 1.1974 | 0.0294 | 0.0287 | 0.0290 |
| 1.0747 | 5.0 | 115 | 1.1797 | 0.0509 | 0.0597 | 0.0553 |
| 1.024 | 6.0 | 138 | 1.2292 | 0.0554 | 0.0782 | 0.0668 |
| 0.944 | 7.0 | 161 | 1.2819 | 0.1274 | 0.1441 | 0.1358 |
| 0.795 | 8.0 | 184 | 1.2143 | 0.1987 | 0.2082 | 0.2035 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
218d171197ae6f3bc247d32e2159515c
|
yhavinga/t5-small-24L-ccmatrix-multi
|
yhavinga
|
t5
| 403 | 99 |
transformers
| 0 |
translation
| true | false | true |
apache-2.0
|
['nl', 'en']
|
['yhavinga/mc4_nl_cleaned', 'yhavinga/ccmatrix']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['t5', 'translation', 'seq2seq']
| false | true | true | 26,934 | false |
# t5-small-24L-ccmatrix-multi
A [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) model finetuned for Dutch to English and English to Dutch translation on the CCMatrix dataset.
Evaluation metrics of this model are listed in the **Translation models** section below.
You can use this model directly with a pipeline for text translation:
```python
model_name = "yhavinga/t5-small-24L-ccmatrix-multi"
from transformers import AutoTokenizer
from transformers import AutoModelForSeq2SeqLM
from transformers import pipeline
import torch
device_num = 0 if torch.cuda.is_available() else -1
device = "cpu" if device_num < 0 else f"cuda:{device_num}"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
params = {"max_length": 128, "num_beams": 4, "early_stopping": True}
en_to_nl = pipeline("translation_en_to_nl", tokenizer=tokenizer, model=model, device=device_num)
print(en_to_nl("""Young Wehling was hunched in his chair, his head in his hand. He was so rumpled, so still and colorless as to be virtually invisible.""",
**params)[0]['translation_text'])
nl_to_en = pipeline("translation_nl_to_en", tokenizer=tokenizer, model=model, device=device_num)
print(nl_to_en("""De jonge Wehling zat gebogen in zijn stoel, zijn hoofd in zijn hand. Hij was zo stoffig, zo stil en kleurloos dat hij vrijwel onzichtbaar was.""",
**params)[0]['translation_text'])
```
This **t5 eff** model has **249M** parameters.
It was pre-trained with masked language modeling (denoise token span corruption) objective on the dataset
`mc4_nl_cleaned` config `large_en_nl` for **1** epoch(s) and a duration of **4d10h**,
with a sequence length of **512**, batch size **128** and **851852** total steps (**56B** tokens).
Pre-training evaluation loss and accuracy are **1,18** and **0,74**.
Refer to the evaluation section below for a comparison of the pre-trained models on summarization and translation.
## Tokenizer
The model uses a cased SentencePiece tokenizer configured with the `Nmt, NFKC, Replace multi-space to single-space` normalizers
and has 32003 tokens.
It was trained on Dutch and English with scripts from the Huggingface Transformers [Flax examples](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modeling).
See [./raw/main/tokenizer.json](tokenizer.json) for details.
## Dataset(s)
All models listed below are pre-trained on
[cleaned Dutch mC4](https://huggingface.co/datasets/yhavinga/mc4_nl_cleaned),
which is the original mC4, except
* Documents that contained words from a selection of the Dutch and English [List of Dirty Naughty Obscene and Otherwise Bad Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words) are removed
* Sentences with less than 3 words are removed
* Sentences with a word of more than 1000 characters are removed
* Documents with less than 5 sentences are removed
* Documents with "javascript", "lorum ipsum", "terms of use", "privacy policy", "cookie policy", "uses cookies",
"use of cookies", "use cookies", "elementen ontbreken", "deze printversie" are removed.
The Dutch and English models are pre-trained on a 50/50% mix of Dutch mC4 and English C4.
The translation models are fine-tuned on [CCMatrix](https://huggingface.co/datasets/yhavinga/ccmatrix).
## Dutch T5 Models
Three types of [Dutch T5 models have been trained (blog)](https://huggingface.co/spaces/yhavinga/pre-training-dutch-t5-models).
`t5-base-dutch` is the only model with an original T5 config.
The other model types t5-v1.1 and t5-eff have `gated-relu` instead of `relu` as activation function,
and trained with a drop-out of `0.0` unless training would diverge (`t5-v1.1-large-dutch-cased`).
The t5-eff models differ in their number of layers. The table below lists the dimensions of these models. Not all t5-eff models are efficient; the clearest example is the inefficient
`t5-xl-4L-dutch-english-cased`.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-xl-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-xl-8l-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) |
|:------------------|:----------------|:-----------------------------|:---------------------------|:----------------------------|:-----------------------------------|:----------------------------------------|:-----------------------------|:-------------------------------|:----------------------------------|:-----------------------------------|:--------------------------------------|
| *type* | t5 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5-v1.1 | t5 eff | t5 eff | t5 eff | t5 eff | t5 eff |
| *d_model* | 768 | 768 | 768 | 1024 | 768 | 768 | 512 | 2048 | 768 | 1024 | 1024 |
| *d_ff* | 3072 | 2048 | 2048 | 2816 | 2048 | 2048 | 1920 | 5120 | 2560 | 16384 | 4096 |
| *num_heads* | 12 | 12 | 12 | 16 | 12 | 12 | 8 | 32 | 12 | 32 | 16 |
| *d_kv* | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 64 | 128 | 64 |
| *num_layers* | 12 | 12 | 12 | 24 | 12 | 12 | 24 | 4 | 36 | 8 | 8 |
| *num parameters* | 223M | 248M | 248M | 783M | 248M | 248M | 250M | 585M | 729M | 1241M | 335M |
| *feed_forward_proj* | relu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu | gated-gelu |
| *dropout* | 0.1 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 | 0.1 | 0.0 | 0.0 | 0.0 |
| *dataset* | mc4_nl_cleaned | mc4_nl_cleaned full | mc4_nl_cleaned full | mc4_nl_cleaned | mc4_nl_cleaned small_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl | mc4_nl_cleaned large_en_nl |
| *tr. seq len* | 512 | 1024 | 1024 | 512 | 512 | 1024 | 512 | 512 | 512 | 512 | 512 |
| *batch size* | 128 | 64 | 64 | 64 | 128 | 64 | 128 | 512 | 512 | 64 | 128 |
| *total steps* | 527500 | 1014525 | 1210154 | 1120k/2427498 | 2839630 | 1520k/3397024 | 851852 | 212963 | 212963 | 538k/1703705 | 851850 |
| *epochs* | 1 | 2 | 2 | 2 | 10 | 4 | 1 | 1 | 1 | 1 | 1 |
| *duration* | 2d9h | 5d5h | 6d6h | 8d13h | 11d18h | 9d1h | 4d10h | 6d1h | 17d15h | 4d 19h | 3d 23h |
| *optimizer* | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor | adafactor |
| *lr* | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.005 | 0.009 | 0.005 | 0.005 |
| *warmup* | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 10000.0 | 5000.0 | 20000.0 | 2500.0 | 1000.0 | 1500.0 | 1500.0 |
| *eval loss* | 1,38 | 1,20 | 0,96 | 1,07 | 1,11 | 1,13 | 1,18 | 1,27 | 1,05 | 1,3019 | 1,15 |
| *eval acc* | 0,70 | 0,73 | 0,78 | 0,76 | 0,75 | 0,74 | 0,74 | 0,72 | 0,76 | 0,71 | 0,74 |
## Evaluation
Most models from the list above have been fine-tuned for summarization and translation.
The figure below shows the evaluation scores, where the x-axis shows the translation Bleu score (higher is better)
and y-axis the summarization Rouge1 translation score (higher is better).
Point size is proportional to the model size. Models with faster inference speed are plotted in green, slower ones in blue.

Evaluation was run on fine-tuned models trained with the following settings:
| | Summarization | Translation |
|---------------:|------------------|-------------------|
| Dataset | CNN Dailymail NL | CCMatrix en -> nl |
| #train samples | 50K | 50K |
| Optimizer | Adam | Adam |
| learning rate | 0.001 | 0.0005 |
| source length | 1024 | 128 |
| target length | 142 | 128 |
|label smoothing | 0.05 | 0.1 |
| #eval samples | 1000 | 1000 |
Note that the amount of training data is limited to a fraction of the total dataset sizes, therefore the scores
below can only be used to compare the 'transfer-learning' strength. The fine-tuned checkpoints for this evaluation
are not saved, since they were trained for comparison of pre-trained models only.
The numbers for summarization are the Rouge scores on 1000 documents from the test split.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:------------------------|----------------:|-----------------------------:|---------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *rouge1* | 33.38 | 33.97 | 34.39 | 33.38 | 34.97 | 34.38 | 30.35 | **35.04** | 34.04 | 33.25 |
| *rouge2* | 13.32 | 13.85 | 13.98 | 13.47 | 14.01 | 13.89 | 11.57 | **14.23** | 13.76 | 12.74 |
| *rougeL* | 24.22 | 24.72 | 25.1 | 24.34 | 24.99 | **25.25** | 22.69 | 25.05 | 24.75 | 23.5 |
| *rougeLsum* | 30.23 | 30.9 | 31.44 | 30.51 | 32.01 | 31.38 | 27.5 | **32.12** | 31.12 | 30.15 |
| *samples_per_second* | 3.18 | 3.02 | 2.99 | 3.22 | 2.97 | 1.57 | 2.8 | 0.61 | **3.27** | 1.22 |
The models below have been evaluated for English to Dutch translation.
Note that the first four models are pre-trained on Dutch only. That they still perform adequately is probably because
the translation direction is English to Dutch.
The numbers reported are the Bleu scores on 1000 documents from the test split.
| | [t5-base-dutch](https://huggingface.co/yhavinga/t5-base-dutch) | [t5-v1.1-base-dutch-uncased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-uncased) | [t5-v1.1-base-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-base-dutch-cased) | [t5-v1.1-large-dutch-cased](https://huggingface.co/yhavinga/t5-v1.1-large-dutch-cased) | [t5-v1_1-base-dutch-english-cased](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased) | [t5-v1_1-base-dutch-english-cased-1024](https://huggingface.co/yhavinga/t5-v1_1-base-dutch-english-cased-1024) | [t5-small-24L-dutch-english](https://huggingface.co/yhavinga/t5-small-24L-dutch-english) | [t5-xl-4L-dutch-english-cased](https://huggingface.co/yhavinga/t5-xl-4L-dutch-english-cased) | [t5-base-36L-dutch-english-cased](https://huggingface.co/yhavinga/t5-base-36L-dutch-english-cased) | [t5-eff-large-8l-dutch-english-cased](https://huggingface.co/yhavinga/t5-eff-large-8l-dutch-english-cased) | mt5-base |
|:-------------------------------|----------------:|-----------------------------:|---------------------------:|----------------------------:|-----------------------------------:|----------------------------------------:|-----------------------------:|-------------------------------:|----------------------------------:|--------------------------------------:|-----------:|
| *precision_ng1* | 74.17 | 78.09 | 77.08 | 72.12 | 77.19 | 78.76 | 78.59 | 77.3 | **79.75** | 78.88 | 73.47 |
| *precision_ng2* | 52.42 | 57.52 | 55.31 | 48.7 | 55.39 | 58.01 | 57.83 | 55.27 | **59.89** | 58.27 | 50.12 |
| *precision_ng3* | 39.55 | 45.2 | 42.54 | 35.54 | 42.25 | 45.13 | 45.02 | 42.06 | **47.4** | 45.95 | 36.59 |
| *precision_ng4* | 30.23 | 36.04 | 33.26 | 26.27 | 32.74 | 35.72 | 35.41 | 32.61 | **38.1** | 36.91 | 27.26 |
| *bp* | 0.99 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 | 0.98 | 0.97 | 0.98 | 0.98 | 0.98 |
| *score* | 45.88 | 51.21 | 48.31 | 41.59 | 48.17 | 51.31 | 50.82 | 47.83 | **53** | 51.79 | 42.74 |
| *samples_per_second* | **45.19** | 45.05 | 38.67 | 10.12 | 42.19 | 42.61 | 12.85 | 33.74 | 9.07 | 37.86 | 9.03 |
## Translation models
The models `t5-small-24L-dutch-english` and `t5-base-36L-dutch-english` have been fine-tuned for both language
directions on the first 25M samples from CCMatrix, giving a total of 50M training samples.
Evaluation is performed on out-of-sample CCMatrix and also on Tatoeba and Opus Books.
The `_bp` columns list the *brevity penalty*. The `avg_bleu` score is the bleu score
averaged over all three evaluation datasets. The best scores are shown in bold for both translation directions.
| | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-base-36L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-base-36L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) | [t5-small-24L-ccmatrix-multi](https://huggingface.co/yhavinga/t5-small-24L-ccmatrix-multi) |
|:-----------------------|:-----------------------------|:-----------------------------|:------------------------------|:------------------------------|
| *source_lang* | en | nl | en | nl |
| *target_lang* | nl | en | nl | en |
| *source_prefix* | translate English to Dutch: | translate Dutch to English: | translate English to Dutch: | translate Dutch to English: |
| *ccmatrix_bleu* | **56.8** | 62.8 | 57.4 | **63.1** |
| *tatoeba_bleu* | **46.6** | **52.8** | 46.4 | 51.7 |
| *opus_books_bleu* | **13.5** | **24.9** | 12.9 | 23.4 |
| *ccmatrix_bp* | 0.95 | 0.96 | 0.95 | 0.96 |
| *tatoeba_bp* | 0.97 | 0.94 | 0.98 | 0.94 |
| *opus_books_bp* | 0.8 | 0.94 | 0.77 | 0.89 |
| *avg_bleu* | **38.96** | **46.86** | 38.92 | 46.06 |
| *max_source_length* | 128 | 128 | 128 | 128 |
| *max_target_length* | 128 | 128 | 128 | 128 |
| *adam_beta1* | 0.9 | 0.9 | 0.9 | 0.9 |
| *adam_beta2* | 0.997 | 0.997 | 0.997 | 0.997 |
| *weight_decay* | 0.05 | 0.05 | 0.002 | 0.002 |
| *lr* | 5e-05 | 5e-05 | 0.0005 | 0.0005 |
| *label_smoothing_factor* | 0.15 | 0.15 | 0.1 | 0.1 |
| *train_batch_size* | 128 | 128 | 128 | 128 |
| *warmup_steps* | 2000 | 2000 | 2000 | 2000 |
| *total steps* | 390625 | 390625 | 390625 | 390625 |
| *duration* | 4d 5h | 4d 5h | 3d 2h | 3d 2h |
| *num parameters* | 729M | 729M | 250M | 250M |
## Acknowledgements
This project would not have been possible without compute generously provided by Google through the
[TPU Research Cloud](https://sites.research.google/trc/). The HuggingFace 🤗 ecosystem was instrumental in all parts
of the training. Weights & Biases made it possible to keep track of many training sessions
and orchestrate hyper-parameter sweeps with insightful visualizations.
The following repositories were helpful in setting up the TPU-VM
and getting an idea of what sensible hyper-parameters are for training gpt2 from scratch:
* [Gsarti's Pretrain and Fine-tune a T5 model with Flax on GCP](https://github.com/gsarti/t5-flax-gcp)
* [Flax/Jax Community week t5-base-dutch](https://huggingface.co/flax-community/t5-base-dutch)
Created by [Yeb Havinga](https://www.linkedin.com/in/yeb-havinga-86530825/)
|
41f47e6d86f130eea048fdeebc6dc01c
|
Zekunli/t5-base-extraction-cnndm_fs0.01-all
|
Zekunli
|
t5
| 10 | 8 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,539 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-extraction-cnndm_fs0.01-all
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8747
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.3573 | 2.25 | 200 | 1.9379 |
| 1.9645 | 4.49 | 400 | 1.9068 |
| 1.862 | 6.74 | 600 | 1.8823 |
| 1.7958 | 8.99 | 800 | 1.8796 |
| 1.7493 | 11.24 | 1000 | 1.8759 |
| 1.7053 | 13.48 | 1200 | 1.8747 |
| 1.6773 | 15.73 | 1400 | 1.8786 |
| 1.6631 | 17.98 | 1600 | 1.8796 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
|
828fde276497830f6aeb2ddb716ea3c5
|
jonatasgrosman/exp_w2v2t_id_hubert_s246
|
jonatasgrosman
|
hubert
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['id']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'id']
| false | true | true | 452 | false |
# exp_w2v2t_id_hubert_s246
Fine-tuned [facebook/hubert-large-ll60k](https://huggingface.co/facebook/hubert-large-ll60k) for speech recognition using the train split of [Common Voice 7.0 (id)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
88647f52cc881be1fb4c98e047731dbe
|
gokuls/distilbert_sa_GLUE_Experiment_logit_kd_mnli_96
|
gokuls
|
distilbert
| 17 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,075 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_logit_kd_mnli_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5438
- Accuracy: 0.5431
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.6023 | 1.0 | 1534 | 0.5718 | 0.4960 |
| 0.5673 | 2.0 | 3068 | 0.5547 | 0.5184 |
| 0.5555 | 3.0 | 4602 | 0.5505 | 0.5278 |
| 0.5481 | 4.0 | 6136 | 0.5466 | 0.5381 |
| 0.5426 | 5.0 | 7670 | 0.5454 | 0.5403 |
| 0.5382 | 6.0 | 9204 | 0.5454 | 0.5354 |
| 0.5341 | 7.0 | 10738 | 0.5452 | 0.5344 |
| 0.5308 | 8.0 | 12272 | 0.5428 | 0.5410 |
| 0.5271 | 9.0 | 13806 | 0.5460 | 0.5451 |
| 0.5239 | 10.0 | 15340 | 0.5450 | 0.5462 |
| 0.5209 | 11.0 | 16874 | 0.5447 | 0.5449 |
| 0.5179 | 12.0 | 18408 | 0.5452 | 0.5475 |
| 0.5152 | 13.0 | 19942 | 0.5495 | 0.5454 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
b9501014228fe8c8e8c0e6447de19a95
|
jonatasgrosman/exp_w2v2t_zh-cn_no-pretraining_s730
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['zh-CN']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'zh-CN']
| false | true | true | 420 | false |
# exp_w2v2t_zh-cn_no-pretraining_s730
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
5d74fbf955b44ac4445de275d74f3325
|
newtonkwan/gpt2-ft-with-non-challenging
|
newtonkwan
|
gpt2
| 10 | 2 |
transformers
| 0 |
text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,374 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-ft-with-non-challenging
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9906
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 32
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100.0
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 1 | 4.0984 |
| No log | 2.0 | 2 | 4.0802 |
| No log | 3.0 | 3 | 4.0443 |
| No log | 4.0 | 4 | 3.9906 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0
- Datasets 1.18.4
- Tokenizers 0.11.6
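To sample from the checkpoint, a minimal sketch assuming the standard `transformers` text-generation pipeline applies; the prompt is just an illustration:
```python
from transformers import pipeline

generator = pipeline("text-generation", model="newtonkwan/gpt2-ft-with-non-challenging")
print(generator("Once upon a time", max_length=60)[0]["generated_text"])
```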
|
b2f82f76405a1dbdfa1557bf5224e82b
|
fathyshalab/massive_calendar-roberta-large-v1-3-93
|
fathyshalab
|
roberta
| 14 | 2 |
sentence-transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['setfit', 'sentence-transformers', 'text-classification']
| false | true | true | 1,466 | false |
# fathyshalab/massive_calendar-roberta-large-v1-3-93
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("fathyshalab/massive_calendar-roberta-large-v1-3-93")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
|
33d0f11a773fae936d84082a21a394c8
|
DogeAI/finetuning-sentiment-model-3000-samples
|
DogeAI
|
distilbert
| 13 | 12 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,055 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3163
- Accuracy: 0.8667
- F1: 0.8693
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.1
|
a6ef8f12122c68c0f2d440f11acdee0f
|
SetFit/distilbert-base-uncased__sst2__train-16-4
|
SetFit
|
distilbert
| 10 | 14 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,067 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-4
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1501
- Accuracy: 0.6387
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.7043 | 1.0 | 7 | 0.7139 | 0.2857 |
| 0.68 | 2.0 | 14 | 0.7398 | 0.2857 |
| 0.641 | 3.0 | 21 | 0.7723 | 0.2857 |
| 0.5424 | 4.0 | 28 | 0.8391 | 0.2857 |
| 0.5988 | 5.0 | 35 | 0.7761 | 0.2857 |
| 0.3698 | 6.0 | 42 | 0.7707 | 0.4286 |
| 0.3204 | 7.0 | 49 | 0.8290 | 0.4286 |
| 0.2882 | 8.0 | 56 | 0.6551 | 0.5714 |
| 0.1512 | 9.0 | 63 | 0.5652 | 0.5714 |
| 0.1302 | 10.0 | 70 | 0.5278 | 0.5714 |
| 0.1043 | 11.0 | 77 | 0.4987 | 0.7143 |
| 0.0272 | 12.0 | 84 | 0.5278 | 0.5714 |
| 0.0201 | 13.0 | 91 | 0.5307 | 0.5714 |
| 0.0129 | 14.0 | 98 | 0.5382 | 0.5714 |
| 0.0117 | 15.0 | 105 | 0.5227 | 0.5714 |
| 0.0094 | 16.0 | 112 | 0.5066 | 0.7143 |
| 0.0104 | 17.0 | 119 | 0.4869 | 0.7143 |
| 0.0069 | 18.0 | 126 | 0.4786 | 0.7143 |
| 0.0062 | 19.0 | 133 | 0.4707 | 0.7143 |
| 0.0065 | 20.0 | 140 | 0.4669 | 0.7143 |
| 0.0051 | 21.0 | 147 | 0.4686 | 0.7143 |
| 0.0049 | 22.0 | 154 | 0.4784 | 0.7143 |
| 0.0046 | 23.0 | 161 | 0.4839 | 0.7143 |
| 0.0039 | 24.0 | 168 | 0.4823 | 0.7143 |
| 0.0044 | 25.0 | 175 | 0.4791 | 0.7143 |
| 0.0037 | 26.0 | 182 | 0.4778 | 0.7143 |
| 0.0038 | 27.0 | 189 | 0.4770 | 0.7143 |
| 0.0036 | 28.0 | 196 | 0.4750 | 0.7143 |
| 0.0031 | 29.0 | 203 | 0.4766 | 0.7143 |
| 0.0031 | 30.0 | 210 | 0.4754 | 0.7143 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
|
b89e19b56e3c514667e817029a5ffdae
|
aXhyra/presentation_emotion_1234567
|
aXhyra
|
distilbert
| 10 | 5 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['tweet_eval']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,403 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_emotion_1234567
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0237
- F1: 0.7273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.18796906442746e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 1234567
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1189 | 1.0 | 408 | 0.6827 | 0.7164 |
| 1.0678 | 2.0 | 816 | 0.6916 | 0.7396 |
| 0.6582 | 3.0 | 1224 | 0.9281 | 0.7276 |
| 0.0024 | 4.0 | 1632 | 1.0237 | 0.7273 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
|
0a6b4906cf97bcc8091e59763dad556b
|
amberoad/bert-multilingual-passage-reranking-msmarco
|
amberoad
|
bert
| 9 | 1,561 |
transformers
| 13 |
text-classification
| true | true | true |
apache-2.0
|
['multilingual', 'af', 'sq', 'ar', 'an', 'hy', 'ast', 'az', 'ba', 'eu', 'bar', 'be', 'bn', 'inc', 'bs', 'br', 'bg', 'my', 'ca', 'ceb', 'ce', 'zh', 'cv', 'hr', 'cs', 'da', 'nl', 'en', 'et', 'fi', 'fr', 'gl', 'ka', 'de', 'el', 'gu', 'ht', 'he', 'hi', 'hu', 'is', 'io', 'id', 'ga', 'it', 'ja', 'jv', 'kn', 'kk', 'ky', 'ko', 'la', 'lv', 'lt', 'roa', 'nds', 'lm', 'mk', 'mg', 'ms', 'ml', 'mr', 'min', 'ne', 'new', 'nb', 'nn', 'oc', 'fa', 'pms', 'pl', 'pt', 'pa', 'ro', 'ru', 'sco', 'sr', 'hr', 'scn', 'sk', 'sl', 'aze', 'es', 'su', 'sw', 'sv', 'tl', 'tg', 'ta', 'tt', 'te', 'tr', 'uk', 'ud', 'uz', 'vi', 'vo', 'war', 'cy', 'fry', 'pnb', 'yo']
|
['msmarco']
| null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['msmarco', 'multilingual', 'passage reranking']
| false | true | true | 6,859 | false |
# Passage Reranking Multilingual BERT 🔃 🌍
## Model description
**Input:** Supports over 100 Languages. See [List of supported languages](https://github.com/google-research/bert/blob/master/multilingual.md#list-of-languages) for all available.
**Purpose:** This module takes a search query [1] and a passage [2] and calculates if the passage matches the query.
It can be used to improve Elasticsearch results and boosts relevancy by up to 100%.
**Architecture:** On top of BERT there is a densely connected NN which takes the 768-dimensional [CLS] token as input and provides the output ([Arxiv](https://arxiv.org/abs/1901.04085)).
**Output:** A single value between -10 and 10. Better-matching query/passage pairs tend to have a higher score.
## Intended uses & limitations
Both the query [1] and the passage [2] have to fit within 512 tokens.
Since you normally want to rerank the first few dozen search results, keep in mind the inference time of approximately 300 ms/query.
#### How to use
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
tokenizer = AutoTokenizer.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
model = AutoModelForSequenceClassification.from_pretrained("amberoad/bert-multilingual-passage-reranking-msmarco")
```
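Continuing from the snippet above, scoring a query/passage pair looks roughly like this; treating the second logit as the relevance score is an assumption about the classification head, not something stated in the card:
```python
import torch

query = "How many people live in Berlin?"
passage = "Berlin has a population of roughly 3.7 million registered inhabitants."

# Encode the pair as a single sequence (query and passage must fit within 512 tokens).
inputs = tokenizer(query, passage, return_tensors="pt", truncation=True, max_length=512)
with torch.no_grad():
    logits = model(**inputs).logits

# Assumption: index 1 corresponds to "relevant"; use it (or its softmax) as the ranking score.
score = logits[0, 1].item()
print(score)
```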
This Model can be used as a drop-in replacement in the [Nboost Library](https://github.com/koursaros-ai/nboost)
Through this you can directly improve your Elasticsearch Results without any coding.
## Training data
This model is trained using the [**Microsoft MS Marco Dataset**](https://microsoft.github.io/msmarco/ "Microsoft MS Marco"). This training dataset contains approximately 400M tuples of a query, relevant and non-relevant passages. All datasets used for training and evaluating are listed in this [table](https://github.com/microsoft/MSMARCO-Passage-Ranking#data-information-and-formating). The used dataset for training is called *Train Triples Large*, while the evaluation was made on *Top 1000 Dev*. There are 6,900 queries in total in the development dataset, where each query is mapped to top 1,000 passage retrieved using BM25 from MS MARCO corpus.
## Training procedure
The training is performed the same way as stated in this [README](https://github.com/nyu-dl/dl4marco-bert "NYU Github"). See their excellent Paper on [Arxiv](https://arxiv.org/abs/1901.04085).
We changed the BERT Model from an English only to the default BERT Multilingual uncased Model from [Google](https://huggingface.co/bert-base-multilingual-uncased).
Training was done for 400,000 steps. This took 12 hours on a TPU v3-8.
## Eval results
We see nearly the same performance as the English-only model on the English [Bing Queries Dataset](http://www.msmarco.org/). Although the training data is English only, internal tests on private data showed a far higher accuracy in German than all other available models.
Fine-tuned Models | Dependency | Eval Set | Search Boost<a href='#benchmarks'> | Speed on GPU
----------------------------------------------------------------------------------- | ---------------------------------------------------------------------------- | ------------------------------------------------------------------ | ----------------------------------------------------- | ----------------------------------
**`amberoad/Multilingual-uncased-MSMARCO`** (This Model) | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-blue"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+61%** <sub><sup>(0.29 vs 0.18)</sup></sub> | ~300 ms/query <a href='#footnotes'>
`nboost/pt-tinybert-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+45%** <sub><sup>(0.26 vs 0.18)</sup></sub> | ~50ms/query <a href='#footnotes'>
`nboost/pt-bert-base-uncased-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+62%** <sub><sup>(0.29 vs 0.18)</sup></sub> | ~300 ms/query<a href='#footnotes'>
`nboost/pt-bert-large-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='http://www.msmarco.org/'>bing queries</a> | **+77%** <sub><sup>(0.32 vs 0.18)</sup></sub> | -
`nboost/pt-biobert-base-msmarco` | <img alt="PyTorch" src="https://img.shields.io/badge/PyTorch-red"/> | <a href ='https://github.com/naver/biobert-pretrained'>biomed</a> | **+66%** <sub><sup>(0.17 vs 0.10)</sup></sub> | ~300 ms/query<a href='#footnotes'>
This table is taken from [nboost](https://github.com/koursaros-ai/nboost) and extended by the first line.
## Contact Infos

Amberoad is a company focusing on Search and Business Intelligence.
We provide you:
* Advanced Internal Company Search Engines through NLP
* External Search Engines: Find Competitors, Customers, Suppliers
**Get in Contact now to benefit from our Expertise:**
The training and evaluation were performed by [**Philipp Reissel**](https://reissel.eu/) and [**Igli Manaj**](https://github.com/iglimanaj)
[ Linkedin](https://de.linkedin.com/company/amberoad) | <svg xmlns="http://www.w3.org/2000/svg" x="0px" y="0px"
width="32" height="32"
viewBox="0 0 172 172"
style=" fill:#000000;"><g fill="none" fill-rule="nonzero" stroke="none" stroke-width="1" stroke-linecap="butt" stroke-linejoin="miter" stroke-miterlimit="10" stroke-dasharray="" stroke-dashoffset="0" font-family="none" font-weight="none" font-size="none" text-anchor="none" style="mix-blend-mode: normal"><path d="M0,172v-172h172v172z" fill="none"></path><g fill="#e67e22"><path d="M37.625,21.5v86h96.75v-86h-5.375zM48.375,32.25h10.75v10.75h-10.75zM69.875,32.25h10.75v10.75h-10.75zM91.375,32.25h32.25v10.75h-32.25zM48.375,53.75h75.25v43h-75.25zM80.625,112.875v17.61572c-1.61558,0.93921 -2.94506,2.2687 -3.88428,3.88428h-49.86572v10.75h49.86572c1.8612,3.20153 5.28744,5.375 9.25928,5.375c3.97183,0 7.39808,-2.17347 9.25928,-5.375h49.86572v-10.75h-49.86572c-0.93921,-1.61558 -2.2687,-2.94506 -3.88428,-3.88428v-17.61572z"></path></g></g></svg>[Homepage](https://de.linkedin.com/company/amberoad) | [Email](info@amberoad.de)
|
47987478a1b223a0455dc27f0ede7324
|
fourthbrain-demo/demo
|
fourthbrain-demo
|
distilbert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 918 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# demo
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu102
- Datasets 2.3.2
- Tokenizers 0.12.1
|
f3bb8bfcee58179da586121c8fe83caa
|
OpenAssistant/reward-model-deberta-v3-large-v2
|
OpenAssistant
|
deberta-v2
| 10 | 155 |
transformers
| 6 |
text-classification
| true | false | false |
mit
|
['en']
|
['openai/summarize_from_feedback', 'openai/webgpt_comparisons', 'Dahoas/instruct-synthetic-prompt-responses', 'Anthropic/hh-rlhf']
| null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
['reward-model', 'reward_model', 'RLHF']
| false | true | true | 3,640 | false |
# Reward model trained from human feedback
Reward model (RM) trained to predict which generated answer a human would judge as better, given a question.
RMs are useful in these domains:
- QA model evaluation
- serving as the reward score in RLHF
- detecting potentially toxic responses via ranking
All models are trained on these datasets with the same split seed across datasets (if a validation split wasn't available):
- [webgpt_comparisons](https://huggingface.co/datasets/openai/webgpt_comparisons)
- [summarize_from_feedback](https://huggingface.co/datasets/openai/summarize_from_feedback)
- [synthetic-instruct-gptj-pairwise](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise)
- [anthropic_hh-rlhf](https://huggingface.co/datasets/Anthropic/hh-rlhf)
# How to use
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
question, answer = "Explain nuclear fusion like I am five", "Nuclear fusion is the process by which two or more protons and neutrons combine to form a single nucleus. It is a very important process in the universe, as it is the source of energy for stars and galaxies. Nuclear fusion is also a key process in the production of energy for nuclear power plants."
inputs = tokenizer(question, answer, return_tensors='pt')
score = rank_model(**inputs).logits[0].cpu().detach()
print(score)
```
**Toxic response detection**
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
reward_name = "OpenAssistant/reward-model-deberta-v3-large-v2"
rank_model, tokenizer = AutoModelForSequenceClassification.from_pretrained(reward_name), AutoTokenizer.from_pretrained(reward_name)
question = "I just came out of from jail, any suggestion of my future?"
helpful = "It's great to hear that you have been released from jail."
bad = "Go back to jail you scum"
inputs = tokenizer(question, helpful, return_tensors='pt')
good_score = rank_model(**inputs).logits[0].cpu().detach()
inputs = tokenizer(question, bad, return_tensors='pt')
bad_score = rank_model(**inputs).logits[0].cpu().detach()
print(good_score > bad_score) # tensor([True])
```
# Performance
Validation split accuracy
| Model | [WebGPT](https://huggingface.co/datasets/openai/webgpt_comparisons) | [Summary](https://huggingface.co/datasets/openai/summarize_from_feedback) | [SyntheticGPT](https://huggingface.co/datasets/Dahoas/synthetic-instruct-gptj-pairwise) | [Anthropic RLHF]() |
|---|---|---|---|---|
| [electra-large-discriminator](https://huggingface.co/OpenAssistant/reward-model-electra-large-discriminator) | 59.30 | 68.66 | 99.85 | 54.33 |
| **[deberta-v3-large-v2](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large-v2)** | **61.57** | 71.47 | 99.88 | **69.25** |
| [deberta-v3-large](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-large) | 61.13 | 72.23 | **99.94** | 55.62 |
| [deberta-v3-base](https://huggingface.co/OpenAssistant/reward-model-deberta-v3-base) | 59.07 | 66.84 | 99.85 | 54.51 |
| deberta-v2-xxlarge | 58.67 | **73.27** | 99.77 | 66.74 |
It's likely that SyntheticGPT has some kind of surface pattern in its chosen-rejected pairs which makes it trivial to tell the better answer apart.
# Other
Sincere thanks to [stability.ai](https://stability.ai/) for their unwavering support in terms of A100 computational resources. Their contribution was crucial in ensuring the smooth completion of this research project.
|
3c0d15d71a5a5b4a79e58ad747c9c983
|
IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese
|
IDEA-CCNL
|
bert
| 6 | 962 |
transformers
| 5 |
feature-extraction
| true | false | false |
apache-2.0
|
['zh']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 4,214 | false |
# Erlangshen-SimCSE-110M-Chinese
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
基于simcse无监督版本,用搜集整理的中文NLI数据进行simcse有监督任务的训练。在中文句子对任务上有良好的效果。
**Erlangshen-SimCSE-110M-Chinese** is based on the unsupervised version of SimCSE and further trained on a supervised SimCSE task with collected and curated Chinese NLI data. It performs well on Chinese sentence-pair tasks.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言生成 NLU | 二郎神 Erlangshen | Bert | 110M | 中文 Chinese |
## 模型信息 Model Information
为了获得一个通用句子向量表征的模型,我们基于bert-base模型用了大量的无监督数据和有监督数据进行对比学习,最终获得了一个无需微调就能够利用模型输出的[CLS]进行相似度判断的模型。与用bert模型在针对任务微调后,再进行句子相似度任务不同,我们的模型在预训练完成后直接具备提取句子向量的能力。在一些任务上有如下的测评效果:
In order to obtain a general sentence-embedding model, we applied contrastive learning on top of the Bert-base model with a large amount of unsupervised and supervised data, and finally obtained a model whose [CLS] output can be used to judge similarity without fine-tuning. Unlike a BERT model that must first be fine-tuned on a task before being used for sentence similarity, our model can extract sentence vectors directly after pre-training. On some tasks, the evaluation results are as follows:
|模型 | LCQMC | BQ | PAWSX | ATEC | STS-B |
| :----: | :----: | :----: | :----: | :----: | :----: |
|Bert | 62 |38.62 |17.38 |28.98 |68.27|
|Bert-large| 63.78| 37.51| 18.63| 30.24| 68.87|
|RoBerta| 67.3| 39.89| 16.79| 30.57| 69.36|
|RoBerta large |67.25 |38.39 |19.09 |30.85 |69.36|
|RoFormer| 63.58| 39.9 |17.52| 29.37 |67.32|
|SimBERT| 73.43| 40.98| 15.87| 31.24| 72|
|Erlangshen-SimCSE-110M-Chinese|74.94| 56.97| 21.84| 34.12| 70.5|
*备注:我们的模型是直接用[cls],无whitening;其余模型是last avg + whitening*
*ps: Our model uses [CLS] directly, without whitening; the other models use last-layer averaging with whitening.*
## 使用 Usage
### 加载模型 Loading Models
```python
from transformers import AutoTokenizer,AutoModelForMaskedLM
model = AutoModelForMaskedLM.from_pretrained('IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese')
tokenizer = AutoTokenizer.from_pretrained('IDEA-CCNL/Erlangshen-SimCSE-110M-Chinese')
```
### 使用示例 Usage Examples
```python
import torch
from sklearn.metrics.pairwise import cosine_similarity
texta = '今天天气真不错,我们去散步吧!'
textb = '今天天气真糟糕,还是在宅家里写bug吧!'
inputs_a = tokenizer(texta,return_tensors="pt")
inputs_b = tokenizer(textb,return_tensors="pt")
outputs_a = model(**inputs_a ,output_hidden_states=True)
texta_embedding = outputs_a.hidden_states[-1][:,0,:].squeeze()
outputs_b = model(**inputs_b ,output_hidden_states=True)
textb_embedding = outputs_b.hidden_states[-1][:,0,:].squeeze()
# if you use cuda, the text_embedding should be textb_embedding.cpu().numpy()
# or use torch.no_grad(); detach the embeddings before passing them to sklearn
with torch.no_grad():
    similarity_score = cosine_similarity(texta_embedding.detach().reshape(1, -1), textb_embedding.detach().reshape(1, -1))[0][0]
print(similarity_score)
```
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的[论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite the our [paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
```
|
689bf6c0b58556cdcc9e5953cc34a834
|
jonatasgrosman/exp_w2v2t_en_vp-nl_s980
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['en']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'en']
| false | true | true | 475 | false |
# exp_w2v2t_en_vp-nl_s980
Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition on English using the train split of [Common Voice 7.0](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
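A minimal transcription sketch using the HuggingSound wrapper; the audio paths are placeholders and the input should be sampled at 16kHz:
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_en_vp-nl_s980")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # placeholder paths

# Each result is expected to contain the decoded text under the "transcription" key.
transcriptions = model.transcribe(audio_paths)
print(transcriptions[0]["transcription"])
```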
|
2bfe5b93b56ddfe1f10055c71c82edf6
|
sriiikar/wav2vec2-hindi-3
|
sriiikar
|
wav2vec2
| 12 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,655 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-hindi-3
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0900
- Wer: 0.7281
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 40
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.609 | 6.41 | 1000 | 1.2290 | 0.7497 |
| 0.3754 | 12.82 | 2000 | 1.5350 | 0.7128 |
| 0.1587 | 19.23 | 3000 | 1.8671 | 0.7322 |
| 0.103 | 25.64 | 4000 | 1.9383 | 0.7300 |
| 0.0761 | 32.05 | 5000 | 2.0767 | 0.7306 |
| 0.0616 | 38.46 | 6000 | 2.0900 | 0.7281 |
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.2.3.dev0
- Tokenizers 0.12.1
|
d7506274dd08489e2f5692f2700e518c
|
Helsinki-NLP/opus-mt-nl-eo
|
Helsinki-NLP
|
marian
| 11 | 16 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['nl', 'eo']
| null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 1,995 | false |
### nld-epo
* source group: Dutch
* target group: Esperanto
* OPUS readme: [nld-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md)
* model: transformer-align
* source language(s): nld
* target language(s): epo
* model: transformer-align
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.eval.txt)
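A minimal usage sketch with the `transformers` Marian classes; the Dutch example sentence is illustrative:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-nl-eo"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["Ik hou van talen."]  # Dutch input (illustrative)
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))  # Esperanto output
```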
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.nld.epo | 16.1 | 0.355 |
### System Info:
- hf_name: nld-epo
- source_languages: nld
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/nld-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['nl', 'eo']
- src_constituents: {'nld'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/nld-epo/opus-2020-06-16.test.txt
- src_alpha3: nld
- tgt_alpha3: epo
- short_pair: nl-eo
- chrF2_score: 0.355
- bleu: 16.1
- brevity_penalty: 0.9359999999999999
- ref_len: 72293.0
- src_name: Dutch
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: nl
- tgt_alpha2: eo
- prefer_old: False
- long_pair: nld-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
|
b9b36b0ed8ac6ab8b86b6c08982b39b1
|
lucio/wav2vec2-large-xlsr-luganda
|
lucio
|
wav2vec2
| 10 | 9 |
transformers
| 0 |
automatic-speech-recognition
| true | false | true |
apache-2.0
|
['lg']
|
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
| true | true | true | 4,625 | false |
# Wav2Vec2-Large-XLSR-53-lg
Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Luganda using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset, using train, validation and other (excluding voices that are in the test set), and taking the test data for validation as well as test.
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly (without a language model) as follows:
```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
test_dataset = load_dataset("common_voice", "lg", split="test[:2%]")
processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
inputs = processor(test_dataset[:2]["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits
predicted_ids = torch.argmax(logits, dim=-1)
print("Prediction:", processor.batch_decode(predicted_ids))
print("Reference:", test_dataset["sentence"][:2])
```
## Evaluation
The model can be evaluated as follows on the Luganda test data of Common Voice. (Available in Colab [here](https://colab.research.google.com/drive/1XxZ3mJOEXwIn-QH3C23jD_Qpom9aA1vH?usp=sharing).)
```python
import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import re
import unidecode
test_dataset = load_dataset("common_voice", "lg", split="test")
wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model = Wav2Vec2ForCTC.from_pretrained("lucio/wav2vec2-large-xlsr-luganda")
model.to("cuda")
chars_to_ignore_regex = '[\[\],?.!;:%"“”(){}‟ˮʺ″«»/…‽�–]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = torchaudio.load(batch["path"])
batch["speech"] = resampler(speech_array).squeeze().numpy()
return batch
def remove_special_characters(batch):
# word-internal apostrophes are marking contractions
batch["norm_text"] = re.sub(r'[‘’´`]', r"'", batch["sentence"])
# most other punctuation is ignored
batch["norm_text"] = re.sub(chars_to_ignore_regex, "", batch["norm_text"]).lower().strip()
batch["norm_text"] = re.sub(r"(-|' | '| +)", " ", batch["norm_text"])
# remove accents from a few characters (from loanwords, not tones)
batch["norm_text"] = unidecode.unidecode(batch["norm_text"])
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
test_dataset = test_dataset.map(remove_special_characters)
def evaluate(batch):
inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
with torch.no_grad():
logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
pred_ids = torch.argmax(logits, dim=-1)
batch["pred_strings"] = processor.batch_decode(pred_ids)
return batch
result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["norm_text"])))
```
**Test Result**: 29.52 %
## Training
The Common Voice `train`, `validation` and `other` datasets were used for training, excluding voices that are in both the `other` and `test` datasets. The data was augmented to twice the original size with added noise and manipulated pitch, phase and intensity.
Training proceeded for 60 epochs, on 1 V100 GPU provided by OVHcloud. The `test` data was used for validation.
The [script used for training](https://github.com/serapio/transformers/blob/feature/xlsr-finetune/examples/research_projects/wav2vec2/run_common_voice.py) is adapted from the [example script provided in the transformers repo](https://github.com/huggingface/transformers/blob/master/examples/research_projects/wav2vec2/run_common_voice.py).
|
3cb3b16342a980bc0ec2a8141fed4b50
|
timm/convnext_small.fb_in1k
|
timm
| null | 4 | 408 |
timm
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagenet-1k']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['image-classification', 'timm']
| false | true | true | 21,328 | false |
# Model card for convnext_small.fb_in1k
A ConvNeXt image classification model. Pretrained on ImageNet-1k by paper authors.
## Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
- Params (M): 50.2
- GMACs: 8.7
- Activations (M): 21.6
- Image size: 224 x 224
- **Papers:**
- A ConvNet for the 2020s: https://arxiv.org/abs/2201.03545
- **Original:** https://github.com/facebookresearch/ConvNeXt
- **Dataset:** ImageNet-1k
## Model Usage
### Image Classification
```python
from urllib.request import urlopen
from PIL import Image
import timm
import torch
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model('convnext_small.fb_in1k', pretrained=True)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
top5_probabilities, top5_class_indices = torch.topk(output.softmax(dim=1) * 100, k=5)
```
### Feature Map Extraction
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_small.fb_in1k',
pretrained=True,
features_only=True,
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # unsqueeze single image into batch of 1
for o in output:
# print shape of each feature map in output
# e.g. for convnext_base:
# torch.Size([1, 128, 56, 56])
# torch.Size([1, 256, 28, 28])
# torch.Size([1, 512, 14, 14])
# torch.Size([1, 1024, 7, 7])
print(o.shape)
```
### Image Embeddings
```python
from urllib.request import urlopen
from PIL import Image
import timm
img = Image.open(
urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))
model = timm.create_model(
'convnext_small.fb_in1k',
pretrained=True,
num_classes=0, # remove classifier nn.Linear
)
model = model.eval()
# get model specific transforms (normalization, resize)
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)
output = model(transforms(img).unsqueeze(0)) # output is (batch_size, num_features) shaped tensor
# or equivalently (without needing to set num_classes=0)
output = model.forward_features(transforms(img).unsqueeze(0))
# output is unpooled, i.e. a (batch_size, num_features, H, W) tensor
output = model.forward_head(output, pre_logits=True)
# output is (batch_size, num_features) tensor
```
## Model Comparison
### By Top-1
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
### By Throughput (samples / sec)
All timing numbers from eager model PyTorch 1.13 on RTX 3090 w/ AMP.
|model |top1 |top5 |img_size|param_count|gmacs |macts |samples_per_sec|batch_size|
|----------------------------------------------|------|------|--------|-----------|------|------|---------------|----------|
|[convnext_atto.d2_in1k](https://huggingface.co/timm/convnext_atto.d2_in1k)|75.664|92.9 |224 |3.7 |0.55 |3.81 |8439.22 |256 |
|[convnext_atto_ols.a2_in1k](https://huggingface.co/timm/convnext_atto_ols.a2_in1k)|75.88 |92.846|224 |3.7 |0.58 |4.11 |7963.16 |256 |
|[convnext_femto.d1_in1k](https://huggingface.co/timm/convnext_femto.d1_in1k)|77.454|93.68 |224 |5.22 |0.79 |4.57 |7189.92 |256 |
|[convnext_femto_ols.d1_in1k](https://huggingface.co/timm/convnext_femto_ols.d1_in1k)|77.86 |93.83 |224 |5.23 |0.82 |4.87 |6910.6 |256 |
|[convnext_pico.d1_in1k](https://huggingface.co/timm/convnext_pico.d1_in1k)|79.526|94.558|224 |9.05 |1.37 |6.1 |5686.88 |256 |
|[convnext_pico_ols.d1_in1k](https://huggingface.co/timm/convnext_pico_ols.d1_in1k)|79.522|94.692|224 |9.06 |1.43 |6.5 |5422.46 |256 |
|[convnextv2_atto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_atto.fcmae_ft_in1k)|76.664|93.044|224 |3.71 |0.55 |3.81 |4728.91 |256 |
|[convnextv2_femto.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_femto.fcmae_ft_in1k)|78.488|93.98 |224 |5.23 |0.79 |4.57 |4264.2 |256 |
|[convnext_nano.in12k_ft_in1k](https://huggingface.co/timm/convnext_nano.in12k_ft_in1k)|82.282|96.344|224 |15.59 |2.46 |8.37 |3926.52 |256 |
|[convnext_nano.d1h_in1k](https://huggingface.co/timm/convnext_nano.d1h_in1k)|80.768|95.334|224 |15.59 |2.46 |8.37 |3915.58 |256 |
|[convnext_nano_ols.d1h_in1k](https://huggingface.co/timm/convnext_nano_ols.d1h_in1k)|80.866|95.246|224 |15.65 |2.65 |9.38 |3523.85 |256 |
|[convnextv2_pico.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_pico.fcmae_ft_in1k)|80.304|95.072|224 |9.07 |1.37 |6.1 |3274.57 |256 |
|[convnext_tiny_hnf.a2h_in1k](https://huggingface.co/timm/convnext_tiny_hnf.a2h_in1k)|82.216|95.852|224 |28.59 |4.47 |13.44 |2529.75 |256 |
|[convnext_tiny.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k)|82.898|96.616|224 |28.59 |4.47 |13.44 |2480.88 |256 |
|[convnext_tiny.in12k_ft_in1k](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k)|84.186|97.124|224 |28.59 |4.47 |13.44 |2433.7 |256 |
|[convnext_tiny.fb_in1k](https://huggingface.co/timm/convnext_tiny.fb_in1k)|82.066|95.854|224 |28.59 |4.47 |13.44 |2346.26 |256 |
|[convnextv2_nano.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in1k)|81.83 |95.738|224 |15.62 |2.46 |8.37 |2321.48 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k)|82.03 |96.166|224 |15.62 |2.46 |8.37 |2300.18 |256 |
|[convnext_small.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k)|84.562|97.394|224 |50.22 |8.71 |21.56 |1478.29 |256 |
|[convnext_small.in12k_ft_in1k](https://huggingface.co/timm/convnext_small.in12k_ft_in1k)|85.174|97.506|224 |50.22 |8.71 |21.56 |1474.31 |256 |
|[convnext_small.fb_in1k](https://huggingface.co/timm/convnext_small.fb_in1k)|83.142|96.434|224 |50.22 |8.71 |21.56 |1464.0 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k)|83.894|96.964|224 |28.64 |4.47 |13.44 |1452.72 |256 |
|[convnextv2_tiny.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in1k)|82.92 |96.284|224 |28.64 |4.47 |13.44 |1425.62 |256 |
|[convnext_base.fb_in1k](https://huggingface.co/timm/convnext_base.fb_in1k)|83.82 |96.746|224 |88.59 |15.38 |28.75 |1054.0 |256 |
|[convnext_base.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k)|85.822|97.866|224 |88.59 |15.38 |28.75 |1037.66 |256 |
|[convnext_tiny.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.fb_in22k_ft_in1k_384)|84.084|97.14 |384 |28.59 |13.14 |39.48 |862.95 |256 |
|[convnext_tiny.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_tiny.in12k_ft_in1k_384)|85.118|97.608|384 |28.59 |13.14 |39.48 |856.76 |256 |
|[convnext_base.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_base.clip_laion2b_augreg_ft_in1k)|86.154|97.68 |256 |88.59 |20.09 |37.55 |819.86 |256 |
|[convnextv2_nano.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_nano.fcmae_ft_in22k_in1k_384)|83.37 |96.742|384 |15.62 |7.22 |24.61 |801.72 |256 |
|[convnextv2_base.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in1k)|84.874|97.09 |224 |88.72 |15.38 |28.75 |625.33 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k)|86.74 |98.022|224 |88.72 |15.38 |28.75 |624.23 |256 |
|[convnext_large.fb_in1k](https://huggingface.co/timm/convnext_large.fb_in1k)|84.282|96.892|224 |197.77 |34.4 |43.13 |584.28 |256 |
|[convnext_large.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k)|86.636|98.028|224 |197.77 |34.4 |43.13 |581.43 |256 |
|[convnext_small.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_small.fb_in22k_ft_in1k_384)|85.778|97.886|384 |50.22 |25.58 |63.37 |518.95 |256 |
|[convnext_small.in12k_ft_in1k_384](https://huggingface.co/timm/convnext_small.in12k_ft_in1k_384)|86.182|97.92 |384 |50.22 |25.58 |63.37 |516.19 |256 |
|[convnextv2_tiny.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_tiny.fcmae_ft_in22k_in1k_384)|85.112|97.63 |384 |28.64 |13.14 |39.48 |491.32 |256 |
|[convnext_large_mlp.clip_laion2b_augreg_ft_in1k](https://huggingface.co/timm/convnext_large_mlp.clip_laion2b_augreg_ft_in1k)|87.344|98.218|256 |200.13 |44.94 |56.33 |438.08 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k)|87.26 |98.248|224 |197.96 |34.4 |43.13 |376.84 |256 |
|[convnextv2_large.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in1k)|85.742|97.584|224 |197.96 |34.4 |43.13 |375.23 |256 |
|[convnext_base.clip_laiona_augreg_ft_in1k_384](https://huggingface.co/timm/convnext_base.clip_laiona_augreg_ft_in1k_384)|86.504|97.97 |384 |88.59 |45.21 |84.49 |368.14 |256 |
|[convnext_xlarge.fb_in22k_ft_in1k](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k)|87.002|98.208|224 |350.2 |60.98 |57.5 |368.01 |256 |
|[convnext_base.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_base.fb_in22k_ft_in1k_384)|86.796|98.264|384 |88.59 |45.21 |84.49 |366.54 |256 |
|[convnextv2_base.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_base.fcmae_ft_in22k_in1k_384)|87.646|98.422|384 |88.72 |45.21 |84.49 |209.51 |256 |
|[convnext_large.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_large.fb_in22k_ft_in1k_384)|87.476|98.382|384 |197.77 |101.1 |126.74|194.66 |256 |
|[convnextv2_huge.fcmae_ft_in1k](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in1k)|86.256|97.75 |224 |660.29 |115.0 |79.07 |154.72 |256 |
|[convnextv2_large.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_large.fcmae_ft_in22k_in1k_384)|88.196|98.532|384 |197.96 |101.1 |126.74|128.94 |128 |
|[convnext_xlarge.fb_in22k_ft_in1k_384](https://huggingface.co/timm/convnext_xlarge.fb_in22k_ft_in1k_384)|87.75 |98.556|384 |350.2 |179.2 |168.99|124.85 |192 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_384](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_384)|88.668|98.738|384 |660.29 |337.96|232.35|50.56 |64 |
|[convnextv2_huge.fcmae_ft_in22k_in1k_512](https://huggingface.co/timm/convnextv2_huge.fcmae_ft_in22k_in1k_512)|88.848|98.742|512 |660.29 |600.81|413.07|28.58 |48 |
## Citation
```bibtex
@article{liu2022convnet,
author = {Zhuang Liu and Hanzi Mao and Chao-Yuan Wu and Christoph Feichtenhofer and Trevor Darrell and Saining Xie},
title = {A ConvNet for the 2020s},
journal = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
year = {2022},
}
```
```bibtex
@misc{rw2019timm,
author = {Ross Wightman},
title = {PyTorch Image Models},
year = {2019},
publisher = {GitHub},
journal = {GitHub repository},
doi = {10.5281/zenodo.4414861},
howpublished = {\url{https://github.com/rwightman/pytorch-image-models}}
}
```
|
52e81d130ecf9c6f2031c3c6d6b5dcc0
|
scribis/italian-literature-model-mini
|
scribis
|
gpt2
| 9 | 5 |
transformers
| 0 |
text-generation
| false | true | false |
mit
|
['it']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback', 'text_generator']
| true | true | true | 1,583 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# italian-literature-model-mini
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 5.7067
- Validation Loss: 5.6842
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 15686, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.7065 | 5.6842 | 0 |
| 5.7065 | 5.6842 | 1 |
| 5.7067 | 5.6842 | 2 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
|
02c3cd0f608074cb7e306916d027d0f4
|
steja/whisper-small-shona
|
steja
|
whisper
| 17 | 0 |
transformers
| 1 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['google/fleurs']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,676 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper_small_Shona
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the google/fleurs sn_zw dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1174
- Wer: 50.8563
## Model description
More information needed
## Intended uses & limitations
More information needed
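A minimal inference sketch with the `transformers` ASR pipeline; the audio path is a placeholder and the input is assumed to be 16kHz mono audio:
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="steja/whisper-small-shona")
result = asr("/path/to/shona_audio.wav")  # placeholder path
print(result["text"])
```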
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 2000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:-------:|
| 0.0054 | 33.32 | 400 | 0.9826 | 51.6687 |
| 0.0009 | 66.64 | 800 | 1.0774 | 50.9062 |
| 0.0005 | 99.96 | 1200 | 1.1174 | 50.8563 |
| 0.0003 | 133.32 | 1600 | 1.1388 | 50.875 |
| 0.0003 | 166.64 | 2000 | 1.1461 | 50.925 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
9fd22d2c65d79a5681587746479023ff
|
tingtingyuli/wav2vec2-base-timit-demo-colab
|
tingtingyuli
|
wav2vec2
| 14 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,641 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-timit-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4371
- Wer: 0.3402
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.6515 | 4.0 | 500 | 1.9481 | 0.9825 |
| 0.8007 | 8.0 | 1000 | 0.4364 | 0.4424 |
| 0.2559 | 12.0 | 1500 | 0.4188 | 0.3848 |
| 0.1483 | 16.0 | 2000 | 0.4466 | 0.3524 |
| 0.1151 | 20.0 | 2500 | 0.4492 | 0.3519 |
| 0.0971 | 24.0 | 3000 | 0.4568 | 0.3453 |
| 0.0765 | 28.0 | 3500 | 0.4371 | 0.3402 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.13.3
- Tokenizers 0.10.3
|
4b1257e43e8e747345d2d38d8f59a43c
|
gossminn/predict-perception-bertino-focus-victim
|
gossminn
|
distilbert
| 12 | 3 |
transformers
| 0 |
text-classification
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 3,970 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-bertino-focus-victim
This model is a fine-tuned version of [indigo-ai/BERTino](https://huggingface.co/indigo-ai/BERTino) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2497
- R2: 0.6131
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 47
### Training results
| Training Loss | Epoch | Step | Validation Loss | R2 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.5438 | 1.0 | 14 | 0.4405 | 0.3175 |
| 0.2336 | 2.0 | 28 | 0.2070 | 0.6792 |
| 0.0986 | 3.0 | 42 | 0.2868 | 0.5555 |
| 0.0907 | 4.0 | 56 | 0.2916 | 0.5481 |
| 0.0652 | 5.0 | 70 | 0.2187 | 0.6611 |
| 0.0591 | 6.0 | 84 | 0.2320 | 0.6406 |
| 0.0478 | 7.0 | 98 | 0.2501 | 0.6125 |
| 0.0347 | 8.0 | 112 | 0.2425 | 0.6243 |
| 0.021 | 9.0 | 126 | 0.2670 | 0.5863 |
| 0.0214 | 10.0 | 140 | 0.2853 | 0.5580 |
| 0.0172 | 11.0 | 154 | 0.2726 | 0.5776 |
| 0.0177 | 12.0 | 168 | 0.2629 | 0.5927 |
| 0.0152 | 13.0 | 182 | 0.2396 | 0.6287 |
| 0.012 | 14.0 | 196 | 0.2574 | 0.6012 |
| 0.0119 | 15.0 | 210 | 0.2396 | 0.6288 |
| 0.0128 | 16.0 | 224 | 0.2517 | 0.6100 |
| 0.0109 | 17.0 | 238 | 0.2509 | 0.6112 |
| 0.008 | 18.0 | 252 | 0.2522 | 0.6092 |
| 0.0101 | 19.0 | 266 | 0.2503 | 0.6121 |
| 0.0075 | 20.0 | 280 | 0.2527 | 0.6084 |
| 0.0082 | 21.0 | 294 | 0.2544 | 0.6058 |
| 0.0061 | 22.0 | 308 | 0.2510 | 0.6111 |
| 0.006 | 23.0 | 322 | 0.2402 | 0.6279 |
| 0.005 | 24.0 | 336 | 0.2539 | 0.6066 |
| 0.0058 | 25.0 | 350 | 0.2438 | 0.6222 |
| 0.0051 | 26.0 | 364 | 0.2439 | 0.6221 |
| 0.006 | 27.0 | 378 | 0.2442 | 0.6216 |
| 0.0061 | 28.0 | 392 | 0.2416 | 0.6257 |
| 0.0053 | 29.0 | 406 | 0.2519 | 0.6097 |
| 0.0045 | 30.0 | 420 | 0.2526 | 0.6085 |
| 0.0034 | 31.0 | 434 | 0.2578 | 0.6006 |
| 0.0039 | 32.0 | 448 | 0.2557 | 0.6038 |
| 0.0043 | 33.0 | 462 | 0.2538 | 0.6068 |
| 0.0041 | 34.0 | 476 | 0.2535 | 0.6072 |
| 0.0042 | 35.0 | 490 | 0.2560 | 0.6033 |
| 0.0037 | 36.0 | 504 | 0.2576 | 0.6009 |
| 0.0036 | 37.0 | 518 | 0.2634 | 0.5919 |
| 0.0037 | 38.0 | 532 | 0.2582 | 0.5999 |
| 0.0038 | 39.0 | 546 | 0.2552 | 0.6045 |
| 0.0034 | 40.0 | 560 | 0.2563 | 0.6028 |
| 0.0033 | 41.0 | 574 | 0.2510 | 0.6110 |
| 0.0029 | 42.0 | 588 | 0.2515 | 0.6103 |
| 0.0033 | 43.0 | 602 | 0.2525 | 0.6088 |
| 0.0028 | 44.0 | 616 | 0.2522 | 0.6093 |
| 0.0028 | 45.0 | 630 | 0.2526 | 0.6085 |
| 0.0027 | 46.0 | 644 | 0.2494 | 0.6136 |
| 0.0024 | 47.0 | 658 | 0.2497 | 0.6131 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
ed2d0b79f6095cce8d9c43704e30efc6
|
PlayDev/klue-bert-finetuned-nsmc
|
PlayDev
|
bert
| 18 | 7 |
transformers
| 0 |
text-classification
| true | false | false |
cc-by-sa-4.0
| null |
['nsmc']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,130 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# klue-bert-finetuned-nsmc
This model is a fine-tuned version of [klue/bert-base](https://huggingface.co/klue/bert-base) on the nsmc dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.2402
- eval_accuracy: 0.9029
- eval_f1: 0.9029
- eval_runtime: 343.6707
- eval_samples_per_second: 145.488
- eval_steps_per_second: 4.548
- epoch: 1.0
- step: 4688
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.13.0
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.10.3
|
668992cd236d193efe01b0428315676d
|
skr1125/distilbert-base-uncased-finetuned-emotion
|
skr1125
|
distilbert
| 12 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,344 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2253
- Accuracy: 0.927
- F1: 0.9268
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8507 | 1.0 | 250 | 0.3406 | 0.899 | 0.8954 |
| 0.2546 | 2.0 | 500 | 0.2253 | 0.927 | 0.9268 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.11.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
822952526042091448cd9dddb8a76e2d
|
abdouaziiz/wav2vec2-WOLOF-2.6K-base
|
abdouaziiz
|
wav2vec2
| 10 | 3 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,577 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wolof
This model is a fine-tuned version of [LeBenchmark/wav2vec2-FR-2.6K-base](https://huggingface.co/LeBenchmark/wav2vec2-FR-2.6K-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2816
- Wer: 0.3897
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 20
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.9468 | 1.67 | 1500 | 0.7036 | 0.6418 |
| 0.5506 | 3.33 | 3000 | 0.4129 | 0.5018 |
| 0.3817 | 5.0 | 4500 | 0.3414 | 0.4519 |
| 0.2885 | 6.67 | 6000 | 0.3181 | 0.4305 |
| 0.2275 | 8.33 | 7500 | 0.2920 | 0.4011 |
| 0.1852 | 10.0 | 9000 | 0.2816 | 0.3897 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
|
40ebd75279c2a04366401d1efd284ddb
|
jonatasgrosman/exp_w2v2t_pt_no-pretraining_s34
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['pt']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'pt']
| false | true | true | 413 | false |
# exp_w2v2t_pt_no-pretraining_s34
Fine-tuned randomly initialized wav2vec2 model for speech recognition using the train split of [Common Voice 7.0 (pt)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
4b7a55b0a6f116a42d76e526c2ad0094
|
sd-dreambooth-library/joseph-russel-ammen
|
sd-dreambooth-library
| null | 23 | 2 |
diffusers
| 0 | null | false | false | false |
mit
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,432 | false |
### Joseph Russel Ammen on Stable Diffusion via Dreambooth
#### model by wallowbitz
This is the Stable Diffusion model fine-tuned on the Joseph Russel Ammen concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **Joseph Russel Ammen**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:





|
2ad2e42c5b3cfa111a33bc6e7dba7b7a
|
versae/mdeberta-v3-base-finetuned-recores
|
versae
|
deberta-v2
| 22 | 0 |
transformers
| 0 |
multiple-choice
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,779 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mdeberta-v3-base-finetuned-recores
This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeberta-v3-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6094
- Accuracy: 0.2011
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 3000
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 1.6112 | 1.0 | 1047 | 1.6094 | 0.1901 |
| 1.608 | 2.0 | 2094 | 1.6094 | 0.1873 |
| 1.6127 | 3.0 | 3141 | 1.6095 | 0.1983 |
| 1.6125 | 4.0 | 4188 | 1.6094 | 0.2424 |
| 1.6118 | 5.0 | 5235 | 1.6094 | 0.1956 |
| 1.6181 | 6.0 | 6282 | 1.6094 | 0.2094 |
| 1.6229 | 7.0 | 7329 | 1.6095 | 0.1680 |
| 1.6125 | 8.0 | 8376 | 1.6094 | 0.1736 |
| 1.6134 | 9.0 | 9423 | 1.6094 | 0.2066 |
| 1.6174 | 10.0 | 10470 | 1.6093 | 0.2204 |
| 1.6161 | 11.0 | 11517 | 1.6096 | 0.2121 |
| 1.6198 | 12.0 | 12564 | 1.6094 | 0.2039 |
| 1.6182 | 13.0 | 13611 | 1.6094 | 0.2287 |
| 1.6208 | 14.0 | 14658 | 1.6094 | 0.2287 |
| 1.6436 | 15.0 | 15705 | 1.6092 | 0.2287 |
| 1.6209 | 16.0 | 16752 | 1.6094 | 0.2094 |
| 1.6097 | 17.0 | 17799 | 1.6094 | 0.2094 |
| 1.6115 | 18.0 | 18846 | 1.6094 | 0.2149 |
| 1.6249 | 19.0 | 19893 | 1.6094 | 0.1956 |
| 1.6201 | 20.0 | 20940 | 1.6094 | 0.1763 |
| 1.6217 | 21.0 | 21987 | 1.6094 | 0.1956 |
| 1.6193 | 22.0 | 23034 | 1.6094 | 0.1846 |
| 1.6171 | 23.0 | 24081 | 1.6095 | 0.1983 |
| 1.6123 | 24.0 | 25128 | 1.6095 | 0.1846 |
| 1.6164 | 25.0 | 26175 | 1.6094 | 0.2011 |
### Framework versions
- Transformers 4.19.0
- Pytorch 1.10.1+cu102
- Datasets 2.2.1
- Tokenizers 0.12.1
|
ed67f12c9eeae46841d6163af97260a1
|
muhtasham/tiny-mlm-glue-sst2-from-scratch-custom-tokenizer-expand-vocab
|
muhtasham
|
bert
| 12 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,594 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-sst2-from-scratch-custom-tokenizer-expand-vocab
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.5699
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 9.6351 | 0.4 | 500 | 8.5392 |
| 7.9053 | 0.8 | 1000 | 7.2886 |
| 7.0854 | 1.2 | 1500 | 6.8440 |
| 6.8355 | 1.6 | 2000 | 6.6595 |
| 6.7188 | 2.0 | 2500 | 6.5499 |
| 6.698 | 2.4 | 3000 | 6.6385 |
| 6.6435 | 2.8 | 3500 | 6.6154 |
| 6.6402 | 3.2 | 4000 | 6.5699 |
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.1.dev0
- Tokenizers 0.13.2
|
b851d4be417d01c830077236fe4eef6d
|
csikasote/xls-r-1b-bemba-10hrs
|
csikasote
|
wav2vec2
| 17 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,003 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xls-r-1b-bemba-10hrs
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-1b](https://huggingface.co/facebook/wav2vec2-xls-r-1b) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2350
- Wer: 0.3524
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 400
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 2.2547 | 0.54 | 400 | 0.4199 | 0.5888 |
| 0.5422 | 1.07 | 800 | 0.2689 | 0.4360 |
| 0.4154 | 1.61 | 1200 | 0.2342 | 0.4008 |
| 0.4075 | 2.15 | 1600 | 0.2172 | 0.3579 |
| 0.3326 | 2.68 | 2000 | 0.2151 | 0.3603 |
| 0.2837 | 3.22 | 2400 | 0.2117 | 0.3505 |
| 0.2688 | 3.76 | 2800 | 0.2040 | 0.3559 |
| 0.2401 | 4.3 | 3200 | 0.2099 | 0.3445 |
| 0.2176 | 4.83 | 3600 | 0.1973 | 0.3299 |
| 0.1913 | 5.37 | 4000 | 0.2123 | 0.3432 |
| 0.1683 | 5.91 | 4400 | 0.2032 | 0.3358 |
| 0.1445 | 6.44 | 4800 | 0.2350 | 0.3524 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
734a5c2877b35f613b787e4413634138
|
euphoricpenguin22/3DVaporwave
|
euphoricpenguin22
| null | 6 | 0 | null | 0 | null | false | false | false |
creativeml-openrail-m
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 550 | false |
# 3DVaporwave
A Dreambooth model based on Stable Diffusion 1.5. The keyword for the model is `threedvaporstyle`, which should be sufficient for most generations. Semantically, it can be helpful to treat the keyword as a style descriptor. I also find that using descriptions to indicate that the image is a render can increase the likelihood that it will generate in the style that you want.
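As an illustration only — assuming the checkpoint can be loaded in 🧨 Diffusers format (an assumption, not stated in the card) — usage might look like:
```python
from diffusers import StableDiffusionPipeline

# Assumes the repo loads as a Diffusers pipeline; adjust if only a .ckpt file is provided.
pipe = StableDiffusionPipeline.from_pretrained("euphoricpenguin22/3DVaporwave")
image = pipe("a city street at night, 3d render, threedvaporstyle").images[0]
image.save("vaporwave.png")
```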


|
0bdd4645973c91a2936d50729282672c
|
haritha-bendapudi/test_trainer
|
haritha-bendapudi
|
bert
| 6 | 18 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['yelp_review_full']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,318 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6082
- Accuracy: 0.205
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 1.6262 | 0.23 |
| No log | 2.0 | 250 | 1.6096 | 0.212 |
| No log | 3.0 | 375 | 1.6082 | 0.205 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cpu
- Datasets 2.5.1
- Tokenizers 0.12.1
|
d8ea82a3769965935b24d5f35ba7b939
|
valurank/paraphrase-mpnet-base-v2-offensive
|
valurank
|
mpnet
| 13 | 7 |
sentence-transformers
| 0 |
sentence-similarity
| true | false | false |
other
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
| false | true | true | 3,731 | false |
# valurank/paraphrase-mpnet-base-v2-offensive
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('valurank/paraphrase-mpnet-base-v2-offensive')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('valurank/paraphrase-mpnet-base-v2-offensive')
model = AutoModel.from_pretrained('valurank/paraphrase-mpnet-base-v2-offensive')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=valurank/paraphrase-mpnet-base-v2-offensive)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1280 with parameters:
```
{'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: MPNetModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
6727f3e7d8e63a1069498fe6f3786308
|
emilios/whisper-tn-el-e1
|
emilios
|
whisper
| 34 | 4 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['el']
|
['mozilla-foundation/common_voice_11_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['whisper-event', 'generated_from_trainer']
| true | true | true | 1,571 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper tiny Greek El Greco
This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_11_0 el dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5615
- Wer: 45.6538
## Model description
More information needed
## Intended uses & limitations
More information needed
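That said, a hedged sketch of Greek transcription with the 🤗 `pipeline` API (the audio path is a placeholder, not taken from the original card):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="emilios/whisper-tn-el-e1")
print(asr("sample_greek_audio.wav"))  # placeholder path; expects 16 kHz audio
```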
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.3389 | 8.33 | 1000 | 0.5659 | 49.1363 |
| 0.1562 | 16.67 | 2000 | 0.5615 | 45.6538 |
| 0.0616 | 25.0 | 3000 | 0.6440 | 46.9632 |
| 0.0116 | 33.33 | 4000 | 0.7578 | 47.9569 |
| 0.0046 | 41.67 | 5000 | 0.8374 | 48.6999 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 2.0.0.dev20221216+cu116
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
|
38920f94aebcbfa92574d32820b7f2a6
|
HannahRoseKirk/Hatemoji
|
HannahRoseKirk
|
deberta
| 12 | 6 |
transformers
| 3 |
text-classification
| true | false | false |
cc-by-4.0
|
['en']
|
['HatemojiBuild', 'HatemojiCheck']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-classification', 'pytorch', 'hate-speech-detection']
| false | true | true | 7,325 | false |
# Hatemoji Model
## Model description
This model is a fine-tuned version of the [DeBERTa base model](https://huggingface.co/microsoft/deberta-base). This model is cased. The model was trained on iterative rounds of adversarial data generation with human-and-model-in-the-loop. In each round, annotators are tasked with tricking the model-in-the-loop with emoji-containing statements that it will misclassify. Between each round, the model is retrained. This is the final model from the iterative process, referred to as R8-T in our paper. The intended task is to classify an emoji-containing statement as either non-hateful (LABEL 0.0) or hateful (LABEL 1.0).
- **Github Repository:** https://github.com/HannahKirk/Hatemoji
- **HuggingFace Datasets:** [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild) & [HatemojiCheck](https://huggingface.co/datasets/HannahRoseKirk/HatemojiCheck)
- **Paper:** https://arxiv.org/abs/2108.05921
- **Point of Contact:** hannah.kirk@oii.ox.ac.uk
## Intended uses & limitations
The intended use of the model is to classify English-language, emoji-containing, short-form text documents as a binary task: non-hateful vs hateful. The model has demonstrated strengths compared to commercial and academic models on classifying emoji-based hate, but is also a strong classifier of text-only hate. Because the model was trained on synthetic, adversarially-generated data, it may have some weaknesses when it comes to empirical emoji-based hate 'in-the-wild'.
You can interact with this model on [Dynabench](https://dynabench.org/tasks/hs), and find its limitations. We hope to continue improving the model on new adversarial data to better iron out its remaining weaknesses!
## How to use
The model can be used with the `pipeline` API:
```python
from transformers import pipeline
classifier = pipeline("text-classification",model='HannahRoseKirk/Hatemoji', return_all_scores=True)
prediction = classifier("I 💜💙💚 emoji 😍", )
print(prediction)
"""
Output
[[{'label': 'LABEL_0', 'score': 0.9999157190322876}, {'label': 'LABEL_1', 'score': 8.425049600191414e-05}]]
"""
```
### Training data
The model was trained on:
* The three rounds of emoji-containing, adversarially-generated texts from [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild)
* The four rounds of text-only, adversarially-generated texts from Vidgen et al., (2021). _Learning from the worst: Dynamically generated datasets to improve online hate detection_. Available on [Github](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset) and explained in their [paper](https://arxiv.org/abs/2012.15761).
* A collection of widely available and publicly accessible datasets from [hatespeechdata.com](https://hatespeechdata.com/)
## Train procedure
The model was trained using HuggingFace's [run glue script](https://github.com/huggingface/transformers/blob/main/examples/pytorch/text-classification/run_glue.py), using the following parameters:
```
python3 transformers/examples/pytorch/text-classification/run_glue.py \
--model_name_or_path microsoft/deberta-base \
--validation_file path_to_data/dev.csv \
--train_file path_to_data/train.csv \
--do_train --do_eval --max_seq_length 512 --learning_rate 2e-5 \
--num_train_epochs 3 --evaluation_strategy epoch \
--load_best_model_at_end --output_dir path_to_outdir/deberta123/ \
--seed 123 \
--cache_dir /.cache/huggingface/transformers/ \
--overwrite_output_dir > ./log_deb 2> ./err_deb
```
We experimented with upsampling the train split of each round with increments of [1, 5, 10, 100] to improve performance, and the optimum upsampling ratio was taken
forward to all subsequent rounds. The optimal upsampling ratios for R1-R4 (the text-only rounds from Vidgen et al.) are carried forward. This model is trained on upsampling ratios of `{'R0':1, 'R1':5, 'R2':100, 'R3':1, 'R4':1, 'R5':100, 'R6':1, 'R7':5}`.
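As a rough illustration of the upsampling step (a sketch assuming each training example carries a `round` column; the column name and schema are hypothetical, not taken from the released code):
```python
import pandas as pd

# Per-round upsampling ratios as listed above.
ratios = {"R0": 1, "R1": 5, "R2": 100, "R3": 1, "R4": 1, "R5": 100, "R6": 1, "R7": 5}

def upsample(train_df: pd.DataFrame) -> pd.DataFrame:
    # Repeat each round's rows by its ratio, then shuffle the combined set.
    parts = [
        pd.concat([group] * ratios[name], ignore_index=True)
        for name, group in train_df.groupby("round")
    ]
    return pd.concat(parts, ignore_index=True).sample(frac=1, random_state=123)
```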
## Variables and metrics
We wished to train a model which could effectively encode information about emoji-based hate, without worsening performance on text-only hate. Thus, we evaluate the model on:
* [HatemojiCheck](https://huggingface.co/datasets/HannahRoseKirk/HatemojiCheck), an evaluation checklist with 7 functionalities of emoji-based hate and contrast sets
* [HateCheck](https://huggingface.co/datasets/Paul/hatecheck), an evaluation checklist contains 29 functional tests for hate speech and contrast sets.
* The held-out test sets from [HatemojiBuild](https://huggingface.co/datasets/HannahRoseKirk/HatemojiBuild), the three rounds of adversarially-generated data collection with emoji-containing examples (R5-R7), available on Hugging Face
* The held-out test sets from the four rounds of adversarially-generated data collection with text-only examples (R1-4, from [Vidgen et al.](https://github.com/bvidgen/Dynamically-Generated-Hate-Speech-Dataset))
For the round-specific test sets, we used a weighted F1-score across them to choose the final model in each round. For more details, see our [paper](https://arxiv.org/abs/2108.05921)
## Evaluation results
We compare our model to:
* **P-IA**: the identity attack attribute from Perspective API
* **P-TX**: the toxicity attribute from Perspective API
* **B-D**: A BERT model trained on the [Davidson et al. (2017)](https://github.com/t-davidson/hate-speech-and-offensive-language) dataset
* **B-F**: A BERT model trained on the [Founta et al. (2018)](https://github.com/ENCASEH2020/hatespeech-twitter) dataset
| | **Emoji Test Sets** | | | | **Text Test Sets** | | | | **All Rounds** | |
| :------- | :-----------------: | :--------: | :------------: | :--------: | :----------------: | :--------: | :-----------: | :--------: | :------------: | :--------: |
| | **R5-R7** | | **HmojiCheck** | | **R1-R4** | | **HateCheck** | | **R1-R7** | |
| | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** | **Acc** | **F1** |
| **P-IA** | 0\.508 | 0\.394 | 0\.689 | 0\.754 | 0\.679 | 0\.720 | 0\.765 | 0\.839 | 0\.658 | 0\.689 |
| **P-TX** | 0\.523 | 0\.448 | 0\.650 | 0\.711 | 0\.602 | 0\.659 | 0\.720 | 0\.813 | 0\.592 | 0\.639 |
| **B-D** | 0\.489 | 0\.270 | 0\.578 | 0\.636 | 0\.589 | 0\.607 | 0\.632 | 0\.738 | 0\.591 | 0\.586 |
| **B-F** | 0\.496 | 0\.322 | 0\.552 | 0\.605 | 0\.562 | 0\.562 | 0\.602 | 0\.694 | 0\.557 | 0\.532 |
| **Hatemoji** | **0\.744** | **0\.755** | **0\.871** | **0\.904** | **0\.827** | **0\.844** | **0\.966** | **0\.975** | **0\.814** | **0\.829** |
For full discussion of the model results, see our [paper](https://arxiv.org/abs/2108.05921).
A recent [paper](https://arxiv.org/pdf/2202.11176.pdf) by Lees et al. (2022), _A New Generation of Perspective API: Efficient Multilingual Character-level Transformers_, beats this model on the HatemojiCheck benchmark.
|
c0d7f48dd2edb45c0f834e46dc699a33
|
faisito/xlm-roberta-base-finetuned-panx-de
|
faisito
|
xlm-roberta
| 13 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1372
- F1: 0.8596
## Model description
More information needed
## Intended uses & limitations
More information needed
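That said, a hedged usage sketch for German NER with the 🤗 `pipeline` API (the sentence is illustrative only):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="faisito/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```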
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2549 | 1.0 | 525 | 0.1663 | 0.8164 |
| 0.128 | 2.0 | 1050 | 0.1421 | 0.8460 |
| 0.0821 | 3.0 | 1575 | 0.1372 | 0.8596 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.0
- Tokenizers 0.13.2
|
e2183455ea06821039eed4d733f5260e
|
marwanHug/ddpm-butterflies-128
|
marwanHug
| null | 16 | 0 |
diffusers
| 0 | null | false | false | false |
apache-2.0
|
['en']
|
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,205 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `imagefolder` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/marwanHug/ddpm-butterflies-128/tensorboard?#scalars)
|
b714e1c39d9a6e9b51d030fb999c9a49
|
alkiskoudounas/sd-universe-256px
|
alkiskoudounas
| null | 7 | 2 |
diffusers
| 0 |
unconditional-image-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'unconditional-image-generation', 'diffusion-models-class']
| false | true | true | 643 | false |
# Model Card for Stable Diffusion - Universe, 256px
Model developed for Unit 1 of the [Diffusion Models Class 🧨](https://github.com/huggingface/diffusion-models-class).
This model is a diffusion model for unconditional image generation of the Universe 🪐.
It was trained on a small collection of universe pictures for 50 epochs, with 🤗 Accelerate.
## Usage
```python
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained('alkiskoudounas/sd-universe-256px')
```
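A hedged follow-up showing generation (batch size and step count are illustrative):
```python
images = pipeline(batch_size=8, num_inference_steps=1000).images  # list of PIL images
images[0].save("universe_sample.png")
```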
## Example
Here you can find an example of the output of the model, in a batch of 8 images:

|
2784bd64b504f4b5d9d928f8f837092b
|
gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4
|
gary109
|
wav2vec2
| 14 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'gary109/AI_Light_Dance', 'generated_from_trainer']
| true | true | true | 2,031 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v4
This model is a fine-tuned version of [gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v3](https://huggingface.co/gary109/ai-light-dance_stepmania_ft_wav2vec2-large-xlsr-53-v3) on the GARY109/AI_LIGHT_DANCE - ONSET-STEPMANIA2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.0298
- Wer: 0.6642
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.9218 | 1.0 | 188 | 1.0718 | 0.6958 |
| 0.9194 | 2.0 | 376 | 1.0354 | 0.6937 |
| 0.9077 | 3.0 | 564 | 1.0365 | 0.6730 |
| 0.8956 | 4.0 | 752 | 1.0497 | 0.6727 |
| 0.877 | 5.0 | 940 | 1.0299 | 0.6694 |
| 0.8736 | 6.0 | 1128 | 1.0298 | 0.6642 |
| 0.8769 | 7.0 | 1316 | 1.0348 | 0.6584 |
| 0.8571 | 8.0 | 1504 | 1.0689 | 0.6602 |
| 0.8573 | 9.0 | 1692 | 1.0559 | 0.6549 |
| 0.8458 | 10.0 | 1880 | 1.0706 | 0.6588 |
### Framework versions
- Transformers 4.21.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.3.dev0
- Tokenizers 0.12.1
|
01173a575be7027001fda61eaf00ca4a
|
gokuls/distilbert_sa_GLUE_Experiment_data_aug_mrpc_96
|
gokuls
|
distilbert
| 17 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
|
['en']
|
['glue']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 5,787 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_sa_GLUE_Experiment_data_aug_mrpc_96
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0000
- Accuracy: 1.0
- F1: 1.0
- Combined Score: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:|
| 0.3242 | 1.0 | 980 | 0.0830 | 0.9804 | 0.9857 | 0.9830 |
| 0.0843 | 2.0 | 1960 | 0.0355 | 0.9828 | 0.9875 | 0.9852 |
| 0.0431 | 3.0 | 2940 | 0.0105 | 1.0 | 1.0 | 1.0 |
| 0.0268 | 4.0 | 3920 | 0.0046 | 1.0 | 1.0 | 1.0 |
| 0.019 | 5.0 | 4900 | 0.0015 | 1.0 | 1.0 | 1.0 |
| 0.0141 | 6.0 | 5880 | 0.0011 | 1.0 | 1.0 | 1.0 |
| 0.0115 | 7.0 | 6860 | 0.0007 | 1.0 | 1.0 | 1.0 |
| 0.0094 | 8.0 | 7840 | 0.0004 | 1.0 | 1.0 | 1.0 |
| 0.0078 | 9.0 | 8820 | 0.0004 | 1.0 | 1.0 | 1.0 |
| 0.0056 | 10.0 | 9800 | 0.0006 | 1.0 | 1.0 | 1.0 |
| 0.0056 | 11.0 | 10780 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0039 | 12.0 | 11760 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0038 | 13.0 | 12740 | 0.0001 | 1.0 | 1.0 | 1.0 |
| 0.0029 | 14.0 | 13720 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0026 | 15.0 | 14700 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0025 | 16.0 | 15680 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0019 | 17.0 | 16660 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0017 | 18.0 | 17640 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0015 | 19.0 | 18620 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0013 | 20.0 | 19600 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0013 | 21.0 | 20580 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0013 | 22.0 | 21560 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0012 | 23.0 | 22540 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.001 | 24.0 | 23520 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0008 | 25.0 | 24500 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 26.0 | 25480 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0006 | 27.0 | 26460 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 28.0 | 27440 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0007 | 29.0 | 28420 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0005 | 30.0 | 29400 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0004 | 31.0 | 30380 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0005 | 32.0 | 31360 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0004 | 33.0 | 32340 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0003 | 34.0 | 33320 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0004 | 35.0 | 34300 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 36.0 | 35280 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0003 | 37.0 | 36260 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0003 | 38.0 | 37240 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0003 | 39.0 | 38220 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 40.0 | 39200 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 41.0 | 40180 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0 | 42.0 | 41160 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 43.0 | 42140 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0002 | 44.0 | 43120 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 45.0 | 44100 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 46.0 | 45080 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0 | 47.0 | 46060 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0 | 48.0 | 47040 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0 | 49.0 | 48020 | 0.0000 | 1.0 | 1.0 | 1.0 |
| 0.0001 | 50.0 | 49000 | 0.0000 | 1.0 | 1.0 | 1.0 |
### Framework versions
- Transformers 4.26.0
- Pytorch 1.14.0a0+410ce96
- Datasets 2.9.0
- Tokenizers 0.13.2
|
e31bbbae8899b09c2d508b39f2eb3fc8
|
zambezivoice/wav2vec2-large-xlsr-lozi-test_001
|
zambezivoice
|
wav2vec2
| 13 | 0 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,657 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-lozi-test_001
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7485
- Wer: 0.3943
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0643 | 4.31 | 500 | 0.5803 | 0.5244 |
| 0.3151 | 8.62 | 1000 | 0.5316 | 0.4477 |
| 0.1569 | 12.93 | 1500 | 0.6340 | 0.4308 |
| 0.0845 | 17.24 | 2000 | 0.6819 | 0.3986 |
| 0.0503 | 21.55 | 2500 | 0.7048 | 0.4051 |
| 0.0354 | 25.86 | 3000 | 0.7485 | 0.3943 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
69f0a503fb74b86f37ec2b04c84798c0
|
bigmorning/lektay
|
bigmorning
|
distilbert
| 8 | 3 |
transformers
| 0 |
fill-mask
| false | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_keras_callback']
| true | true | true | 990 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lektay
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.2
- Tokenizers 0.12.1
|
6d74e6e12eb71f51f7fbab9c7b4c0929
|
Sandipan1994/t5-small-entailement-Writer
|
Sandipan1994
|
t5
| 11 | 3 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 8,774 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-entailement-Writer
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5958
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 150
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 42 | 1.8511 |
| No log | 2.0 | 84 | 1.2249 |
| No log | 3.0 | 126 | 0.9976 |
| No log | 4.0 | 168 | 0.9108 |
| No log | 5.0 | 210 | 0.8478 |
| No log | 6.0 | 252 | 0.8186 |
| No log | 7.0 | 294 | 0.7965 |
| No log | 8.0 | 336 | 0.7815 |
| No log | 9.0 | 378 | 0.7634 |
| No log | 10.0 | 420 | 0.7544 |
| No log | 11.0 | 462 | 0.7408 |
| 1.2198 | 12.0 | 504 | 0.7298 |
| 1.2198 | 13.0 | 546 | 0.7240 |
| 1.2198 | 14.0 | 588 | 0.7139 |
| 1.2198 | 15.0 | 630 | 0.7070 |
| 1.2198 | 16.0 | 672 | 0.7028 |
| 1.2198 | 17.0 | 714 | 0.6977 |
| 1.2198 | 18.0 | 756 | 0.6926 |
| 1.2198 | 19.0 | 798 | 0.6906 |
| 1.2198 | 20.0 | 840 | 0.6846 |
| 1.2198 | 21.0 | 882 | 0.6822 |
| 1.2198 | 22.0 | 924 | 0.6760 |
| 1.2198 | 23.0 | 966 | 0.6710 |
| 0.7403 | 24.0 | 1008 | 0.6667 |
| 0.7403 | 25.0 | 1050 | 0.6657 |
| 0.7403 | 26.0 | 1092 | 0.6653 |
| 0.7403 | 27.0 | 1134 | 0.6588 |
| 0.7403 | 28.0 | 1176 | 0.6584 |
| 0.7403 | 29.0 | 1218 | 0.6573 |
| 0.7403 | 30.0 | 1260 | 0.6520 |
| 0.7403 | 31.0 | 1302 | 0.6522 |
| 0.7403 | 32.0 | 1344 | 0.6525 |
| 0.7403 | 33.0 | 1386 | 0.6463 |
| 0.7403 | 34.0 | 1428 | 0.6453 |
| 0.7403 | 35.0 | 1470 | 0.6437 |
| 0.6642 | 36.0 | 1512 | 0.6397 |
| 0.6642 | 37.0 | 1554 | 0.6382 |
| 0.6642 | 38.0 | 1596 | 0.6365 |
| 0.6642 | 39.0 | 1638 | 0.6332 |
| 0.6642 | 40.0 | 1680 | 0.6335 |
| 0.6642 | 41.0 | 1722 | 0.6325 |
| 0.6642 | 42.0 | 1764 | 0.6295 |
| 0.6642 | 43.0 | 1806 | 0.6304 |
| 0.6642 | 44.0 | 1848 | 0.6287 |
| 0.6642 | 45.0 | 1890 | 0.6272 |
| 0.6642 | 46.0 | 1932 | 0.6267 |
| 0.6642 | 47.0 | 1974 | 0.6242 |
| 0.6127 | 48.0 | 2016 | 0.6232 |
| 0.6127 | 49.0 | 2058 | 0.6225 |
| 0.6127 | 50.0 | 2100 | 0.6211 |
| 0.6127 | 51.0 | 2142 | 0.6204 |
| 0.6127 | 52.0 | 2184 | 0.6196 |
| 0.6127 | 53.0 | 2226 | 0.6183 |
| 0.6127 | 54.0 | 2268 | 0.6168 |
| 0.6127 | 55.0 | 2310 | 0.6175 |
| 0.6127 | 56.0 | 2352 | 0.6160 |
| 0.6127 | 57.0 | 2394 | 0.6154 |
| 0.6127 | 58.0 | 2436 | 0.6143 |
| 0.6127 | 59.0 | 2478 | 0.6142 |
| 0.5799 | 60.0 | 2520 | 0.6131 |
| 0.5799 | 61.0 | 2562 | 0.6122 |
| 0.5799 | 62.0 | 2604 | 0.6120 |
| 0.5799 | 63.0 | 2646 | 0.6115 |
| 0.5799 | 64.0 | 2688 | 0.6119 |
| 0.5799 | 65.0 | 2730 | 0.6112 |
| 0.5799 | 66.0 | 2772 | 0.6099 |
| 0.5799 | 67.0 | 2814 | 0.6094 |
| 0.5799 | 68.0 | 2856 | 0.6082 |
| 0.5799 | 69.0 | 2898 | 0.6092 |
| 0.5799 | 70.0 | 2940 | 0.6081 |
| 0.5799 | 71.0 | 2982 | 0.6071 |
| 0.5558 | 72.0 | 3024 | 0.6062 |
| 0.5558 | 73.0 | 3066 | 0.6079 |
| 0.5558 | 74.0 | 3108 | 0.6072 |
| 0.5558 | 75.0 | 3150 | 0.6052 |
| 0.5558 | 76.0 | 3192 | 0.6066 |
| 0.5558 | 77.0 | 3234 | 0.6049 |
| 0.5558 | 78.0 | 3276 | 0.6042 |
| 0.5558 | 79.0 | 3318 | 0.6039 |
| 0.5558 | 80.0 | 3360 | 0.6050 |
| 0.5558 | 81.0 | 3402 | 0.6042 |
| 0.5558 | 82.0 | 3444 | 0.6040 |
| 0.5558 | 83.0 | 3486 | 0.6029 |
| 0.5292 | 84.0 | 3528 | 0.6032 |
| 0.5292 | 85.0 | 3570 | 0.6039 |
| 0.5292 | 86.0 | 3612 | 0.6036 |
| 0.5292 | 87.0 | 3654 | 0.6019 |
| 0.5292 | 88.0 | 3696 | 0.6014 |
| 0.5292 | 89.0 | 3738 | 0.6022 |
| 0.5292 | 90.0 | 3780 | 0.6014 |
| 0.5292 | 91.0 | 3822 | 0.6020 |
| 0.5292 | 92.0 | 3864 | 0.6028 |
| 0.5292 | 93.0 | 3906 | 0.5994 |
| 0.5292 | 94.0 | 3948 | 0.6004 |
| 0.5292 | 95.0 | 3990 | 0.5987 |
| 0.5159 | 96.0 | 4032 | 0.5992 |
| 0.5159 | 97.0 | 4074 | 0.5993 |
| 0.5159 | 98.0 | 4116 | 0.5989 |
| 0.5159 | 99.0 | 4158 | 0.6004 |
| 0.5159 | 100.0 | 4200 | 0.6001 |
| 0.5159 | 101.0 | 4242 | 0.6008 |
| 0.5159 | 102.0 | 4284 | 0.6006 |
| 0.5159 | 103.0 | 4326 | 0.5999 |
| 0.5159 | 104.0 | 4368 | 0.5994 |
| 0.5159 | 105.0 | 4410 | 0.5996 |
| 0.5159 | 106.0 | 4452 | 0.5991 |
| 0.5159 | 107.0 | 4494 | 0.5990 |
| 0.5004 | 108.0 | 4536 | 0.5996 |
| 0.5004 | 109.0 | 4578 | 0.5988 |
| 0.5004 | 110.0 | 4620 | 0.5992 |
| 0.5004 | 111.0 | 4662 | 0.5984 |
| 0.5004 | 112.0 | 4704 | 0.5982 |
| 0.5004 | 113.0 | 4746 | 0.5973 |
| 0.5004 | 114.0 | 4788 | 0.5984 |
| 0.5004 | 115.0 | 4830 | 0.5973 |
| 0.5004 | 116.0 | 4872 | 0.5977 |
| 0.5004 | 117.0 | 4914 | 0.5970 |
| 0.5004 | 118.0 | 4956 | 0.5976 |
| 0.5004 | 119.0 | 4998 | 0.5962 |
| 0.488 | 120.0 | 5040 | 0.5969 |
| 0.488 | 121.0 | 5082 | 0.5965 |
| 0.488 | 122.0 | 5124 | 0.5969 |
| 0.488 | 123.0 | 5166 | 0.5972 |
| 0.488 | 124.0 | 5208 | 0.5966 |
| 0.488 | 125.0 | 5250 | 0.5962 |
| 0.488 | 126.0 | 5292 | 0.5966 |
| 0.488 | 127.0 | 5334 | 0.5960 |
| 0.488 | 128.0 | 5376 | 0.5969 |
| 0.488 | 129.0 | 5418 | 0.5960 |
| 0.488 | 130.0 | 5460 | 0.5960 |
| 0.483 | 131.0 | 5502 | 0.5960 |
| 0.483 | 132.0 | 5544 | 0.5965 |
| 0.483 | 133.0 | 5586 | 0.5965 |
| 0.483 | 134.0 | 5628 | 0.5963 |
| 0.483 | 135.0 | 5670 | 0.5965 |
| 0.483 | 136.0 | 5712 | 0.5962 |
| 0.483 | 137.0 | 5754 | 0.5963 |
| 0.483 | 138.0 | 5796 | 0.5961 |
| 0.483 | 139.0 | 5838 | 0.5963 |
| 0.483 | 140.0 | 5880 | 0.5964 |
| 0.483 | 141.0 | 5922 | 0.5957 |
| 0.483 | 142.0 | 5964 | 0.5957 |
| 0.4809 | 143.0 | 6006 | 0.5957 |
| 0.4809 | 144.0 | 6048 | 0.5956 |
| 0.4809 | 145.0 | 6090 | 0.5958 |
| 0.4809 | 146.0 | 6132 | 0.5958 |
| 0.4809 | 147.0 | 6174 | 0.5959 |
| 0.4809 | 148.0 | 6216 | 0.5958 |
| 0.4809 | 149.0 | 6258 | 0.5958 |
| 0.4809 | 150.0 | 6300 | 0.5958 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
|
19e391ae05af5fcc21ebe82da8f73c58
|
muhtasham/tiny-mlm-glue-qnli-target-glue-mrpc
|
muhtasham
|
bert
| 10 | 1 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,714 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# tiny-mlm-glue-qnli-target-glue-mrpc
This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-qnli](https://huggingface.co/muhtasham/tiny-mlm-glue-qnli) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3118
- Accuracy: 0.7353
- F1: 0.8176
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 200
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5845 | 4.35 | 500 | 0.5424 | 0.7402 | 0.8290 |
| 0.4395 | 8.7 | 1000 | 0.5503 | 0.7525 | 0.8374 |
| 0.2883 | 13.04 | 1500 | 0.6404 | 0.7475 | 0.8280 |
| 0.1828 | 17.39 | 2000 | 0.7736 | 0.7574 | 0.8406 |
| 0.1141 | 21.74 | 2500 | 1.0144 | 0.7255 | 0.8056 |
| 0.0816 | 26.09 | 3000 | 1.1432 | 0.7328 | 0.8180 |
| 0.0616 | 30.43 | 3500 | 1.3118 | 0.7353 | 0.8176 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
|
67952babad1d46fc86254b885530a59d
|
KoichiYasuoka/deberta-small-coptic
|
KoichiYasuoka
|
deberta-v2
| 8 | 2 |
transformers
| 0 |
fill-mask
| true | false | false |
cc-by-sa-4.0
|
['cop']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['coptic', 'masked-lm']
| false | true | true | 545 | false |
# deberta-small-coptic
## Model Description
This is a DeBERTa(V2) model pre-trained on Coptic Scriptorium Corpora. You can fine-tune `deberta-small-coptic` for downstream tasks, such as [POS-tagging](https://huggingface.co/KoichiYasuoka/deberta-small-coptic-upos), dependency-parsing, and so on.
## How to Use
```py
from transformers import AutoTokenizer,AutoModelForMaskedLM
tokenizer=AutoTokenizer.from_pretrained("KoichiYasuoka/deberta-small-coptic")
model=AutoModelForMaskedLM.from_pretrained("KoichiYasuoka/deberta-small-coptic")
```
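A minimal follow-up sketch for masked prediction (the input sentence is a placeholder to be replaced with real Coptic text containing the mask token):
```py
from transformers import pipeline
fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
text = "YOUR COPTIC SENTENCE HERE " + tokenizer.mask_token  # placeholder input
print(fill_mask(text))
```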
|
c636ae36e047ffb9dfdf5b2103bf1625
|
ppsingh/roberta-finetuned-qa-policy
|
ppsingh
|
roberta
| 15 | 12 |
transformers
| 0 |
question-answering
| true | false | false |
cc-by-4.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 984 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-qa-policy
This model is a fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
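That said, a hedged usage sketch with the 🤗 question-answering `pipeline` (the question/context pair is hypothetical):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="ppsingh/roberta-finetuned-qa-policy")
# Hypothetical question/context pair for illustration.
print(qa(
    question="What does the policy target?",
    context="The policy targets a 40% emissions reduction by 2030.",
))
```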
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.26.0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
85374f9779180a1c8e3179c0f30bf3ab
|
Helsinki-NLP/opus-mt-pl-fr
|
Helsinki-NLP
|
marian
| 10 | 1,023 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 770 | false |
### opus-mt-pl-fr
* source languages: pl
* target languages: fr
* OPUS readme: [pl-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/pl-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/pl-fr/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-fr/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/pl-fr/opus-2020-01-16.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba.pl.fr | 49.0 | 0.659 |
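A hedged usage sketch with 🤗 Transformers (the Polish example sentence is illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-pl-fr"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

# Translate a sample Polish sentence into French.
batch = tokenizer(["To jest przykładowe zdanie."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```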
|
b1d75b30693fd1f1aa69e30001bcfd02
|
anas-awadalla/bart-base-finetuned-squad-infilling-lr-1e-5-decay-01
|
anas-awadalla
|
bart
| 20 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 961 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-finetuned-squad-infilling-lr-1e-5-decay-01
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
396b5da0c32711f18f69264b87b25abf
|
rajistics/imdb
|
rajistics
|
bert
| 14 | 8 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,084 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# imdb
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.3268 | 0.876 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
7772133915f3645343e14d1eec78fa03
|
nejox/roberta-base-coffee20230108
|
nejox
|
roberta
| 13 | 4 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,888 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-coffee20230108
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.4032
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 89 | 4.5615 |
| 5.5192 | 2.0 | 178 | 2.8434 |
| 3.2266 | 3.0 | 267 | 2.2547 |
| 2.1833 | 4.0 | 356 | 2.3272 |
| 1.5483 | 5.0 | 445 | 2.3703 |
| 1.148 | 6.0 | 534 | 2.4088 |
| 1.0413 | 7.0 | 623 | 2.6734 |
| 0.6844 | 8.0 | 712 | 2.7058 |
| 0.5396 | 9.0 | 801 | 2.9746 |
| 0.5396 | 10.0 | 890 | 3.6085 |
| 0.3883 | 11.0 | 979 | 3.4980 |
| 0.2854 | 12.0 | 1068 | 4.0556 |
| 0.2021 | 13.0 | 1157 | 4.1024 |
| 0.1797 | 14.0 | 1246 | 4.2926 |
| 0.1425 | 15.0 | 1335 | 4.4032 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.11.0+cu113
- Datasets 2.8.0
- Tokenizers 0.13.2
|
a7ecfedb3d1670773b8d9413de8d24a2
|
crescendonow/pwa_categorical_complaint
|
crescendonow
|
camembert
| 9 | 2 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 500 | false |
This model is fine-tuned from WangchanBERTa ("wangchanberta-base-att-spm-uncased") using only data from the Provincial Waterworks Authority of Thailand.
The model classifies complaints into ten categories, described by the following dictionary:
{'ข้อร้องเรียน-ปริมาณน้ำ':[11,0],
'ข้อร้องเรียน-ท่อแตกรั่ว':[12,1],
'ข้อร้องเรียน-คุณภาพน้ำ':[13,2],
'ข้อร้องเรียน-การบริการ':[14,3],
'ข้อร้องเรียน-บุคลากร':[15,4],
'ข้อสอบถามทั่วไป':[2,5],
'ข้อเสนอแนะ':[3,6],
'ข้อคิดเห็น':[4,7],
'อื่นๆ':[8,8],
'ไม่เกี่ยวข้องกับกปภ.':[9,9]}
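A hedged usage sketch (assuming the fine-tuned head exposes labels as `LABEL_0` … `LABEL_9` in the order of the indices above; the input text is a placeholder):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="crescendonow/pwa_categorical_complaint")
text = "..."                          # replace with a Thai complaint text
pred = classifier(text)[0]            # e.g. {'label': 'LABEL_1', 'score': 0.93}
index = int(pred["label"].split("_")[-1])  # map back to the category indices above
print(index, pred["score"])
```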
|
8a9a31221122deae7deb811a08c93de2
|
jonatasgrosman/exp_w2v2t_nl_vp-100k_s408
|
jonatasgrosman
|
wav2vec2
| 10 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
|
['nl']
|
['mozilla-foundation/common_voice_7_0']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['automatic-speech-recognition', 'nl']
| false | true | true | 475 | false |
# exp_w2v2t_nl_vp-100k_s408
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (nl)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
|
c684d4db0a787bde72af2f1c9dd08cac
|
espnet/slurp_slu_2pass
|
espnet
| null | 20 | 0 |
espnet
| 0 |
automatic-speech-recognition
| false | false | false |
cc-by-4.0
|
['en']
|
['slurp']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['espnet', 'audio', 'automatic-speech-recognition']
| false | true | true | 166,086 | false |
## ESPnet2 ASR model
### `espnet/slurp_slu_2pass`
This model was trained by Siddhant using slurp recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html)
if you haven't done that already.
```bash
cd espnet
git checkout 3b54bfe52a294cdfce668c20d777bfa65f413745
pip install -e .
cd egs2/slurp/slu1
./run.sh --skip_data_prep false --skip_train true --download_model espnet/slurp_slu_2pass
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Jan 10 22:53:19 EST 2022`
- python version: `3.9.5 (default, Jun 4 2021, 12:28:51) [GCC 7.5.0]`
- espnet version: `espnet 0.10.3a2`
- pytorch version: `pytorch 1.8.1+cu102`
- Git hash: `17758ad804fd7c4b6f88ef5601f475a241dc4605`
- Commit date: `Fri Oct 15 16:08:01 2021 -0400`
## asr_train_asr_bert_conformer_deliberation_raw_en_word
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_10best/devel|8690|108484|84.1|10.3|5.6|3.2|19.2|55.1|
|inference_asr_model_valid.acc.ave_10best/test|13078|159666|83.9|10.4|5.7|3.2|19.2|53.2|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|inference_asr_model_valid.acc.ave_10best/devel|8690|512732|92.1|3.9|4.1|3.3|11.2|55.1|
|inference_asr_model_valid.acc.ave_10best/test|13078|757056|92.0|3.9|4.1|3.3|11.3|53.2|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_bert_conformer_deliberation.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_bert_conformer_deliberation_raw_en_word
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param:
- ../../slurp_new/asr1/exp/asr_train_asr_conformer_raw_en_word/valid.acc.ave_10best.pth:encoder:encoder
ignore_init_mismatch: false
freeze_param:
- encoder
- postdecoder.model
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_word/train/speech_shape
- exp/asr_stats_raw_en_word/train/text_shape.word
- exp/asr_stats_raw_en_word/train/transcript_shape.word
valid_shape_file:
- exp/asr_stats_raw_en_word/valid/speech_shape
- exp/asr_stats_raw_en_word/valid/text_shape.word
- exp/asr_stats_raw_en_word/valid/transcript_shape.word
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 150
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train/wav.scp
- speech
- sound
- - dump/raw/train/text
- text
- text
- - dump/raw/train/transcript
- transcript
- text
valid_data_path_and_name_and_type:
- - dump/raw/devel/wav.scp
- speech
- sound
- - dump/raw/devel/text
- text
- text
- - dump/raw/devel/transcript
- transcript
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0002
scheduler: warmuplr
scheduler_conf:
warmup_steps: 25000
token_list:
- <blank>
- <unk>
- ▁the
- s
- ▁to
- ▁i
- ▁me
- ▁you
- ▁what
- ▁a
- ▁is
- ▁my
- ▁please
- a
- ''''
- y
- ▁in
- ing
- ▁s
- e
- ▁for
- i
- ▁on
- d
- t
- o
- u
- er
- p
- ▁of
- es
- re
- l
- ▁it
- ▁p
- le
- ▁f
- ▁m
- ▁email
- ▁d
- m
- ▁c
- st
- r
- n
- ar
- ▁h
- b
- ▁that
- c
- ▁this
- h
- an
- email_query
- ▁play
- ▁re
- ▁b
- ▁do
- ▁can
- at
- ▁have
- g
- ▁from
- ▁and
- en
- email_sendemail
- ▁olly
- 'on'
- ▁new
- it
- qa_factoid
- calendar_set
- ▁any
- or
- ▁g
- ▁how
- ▁t
- ▁tell
- ch
- ▁not
- ▁about
- ▁at
- ate
- general_negate
- f
- ▁today
- ▁e
- ed
- ▁list
- ▁r
- in
- k
- ic
- social_post
- ▁are
- play_music
- general_quirky
- ▁l
- al
- v
- ent
- ▁n
- ▁be
- ▁an
- ▁st
- et
- ▁am
- general_praise
- ▁time
- weather_query
- ▁up
- ▁check
- calendar_query
- ▁w
- om
- ur
- ▁send
- ▁with
- ly
- w
- general_explain
- ad
- ▁th
- news_query
- ▁one
- ▁emails
- day
- ▁sh
- ce
- ▁last
- ve
- ▁he
- z
- ▁ch
- ▁will
- ▁set
- ▁would
- ▁was
- x
- general_repeat
- ▁add
- ou
- ▁again
- ▁ex
- is
- ct
- general_affirm
- general_confirm
- ▁song
- ▁next
- ▁j
- ▁meeting
- um
- ation
- ▁turn
- ▁did
- if
- ▁alarm
- am
- ▁like
- datetime_query
- ter
- ▁remind
- ▁o
- qa_definition
- ▁said
- ▁calendar
- ll
- se
- ers
- th
- ▁get
- our
- ▁need
- ▁all
- ot
- ▁want
- ▁off
- and
- ▁right
- ▁de
- ▁tr
- ut
- general_dontcare
- ▁
- ▁week
- as
- ▁tweet
- ight
- ir
- ▁your
- ▁event
- ▁news
- ▁se
- ay
- ion
- ▁com
- ▁there
- ▁ye
- ▁weather
- un
- ▁confirm
- ld
- calendar_remove
- ▁y
- ▁lights
- ▁more
- ▁v
- play_radio
- ▁does
- ▁po
- ▁now
- id
- email_querycontact
- ▁show
- ▁could
- ery
- op
- ▁day
- ▁pm
- ▁music
- ▁tomorrow
- ▁train
- ▁u
- ine
- ▁or
- ange
- qa_currency
- ice
- ▁contact
- ▁just
- ▁jo
- ▁think
- qa_stock
- end
- ss
- ber
- ▁tw
- ▁command
- ▁make
- ▁no
- ▁mo
- pe
- ▁find
- general_commandstop
- ▁when
- social_query
- ▁so
- ong
- ▁co
- ant
- ow
- ▁much
- ▁where
- ul
- ue
- ri
- ap
- ▁start
- ▁mar
- ▁by
- one
- ▁know
- ▁wor
- oo
- ▁give
- ▁let
- ▁events
- der
- ▁ro
- ▁pr
- ▁pl
- play_podcasts
- art
- us
- ▁work
- ▁current
- ol
- cooking_recipe
- nt
- ▁correct
- transport_query
- ia
- ▁stock
- ▁br
- ive
- ▁app
- ▁two
- ▁latest
- lists_query
- ▁some
- recommendation_events
- ab
- ▁go
- ▁but
- ook
- ke
- alarm_set
- play_audiobook
- ▁k
- ▁response
- ▁wr
- cast
- ▁open
- ▁cle
- ▁done
- ▁got
- ▁ca
- ite
- ase
- ▁thank
- iv
- ah
- ag
- ▁answer
- ie
- ▁five
- ▁book
- ist
- ▁rec
- ore
- ▁john
- ment
- ▁appreci
- ▁fri
- ack
- ▁remove
- ated
- ock
- ree
- j
- ▁good
- ▁many
- orn
- fe
- ▁radio
- ▁we
- int
- ▁facebook
- ▁cl
- ▁sev
- ▁schedule
- ard
- ▁per
- ▁li
- ▁going
- nd
- ain
- recommendation_locations
- ▁post
- lists_createoradd
- ff
- ▁su
- red
- iot_hue_lightoff
- lists_remove
- ▁ar
- een
- ▁say
- ro
- ▁volume
- ▁le
- ▁reply
- ▁complaint
- ▁out
- ▁delete
- ▁ne
- ame
- ▁detail
- ▁if
- im
- ▁happ
- orr
- ich
- em
- ▁ev
- ction
- ▁dollar
- ▁as
- alarm_query
- audio_volume_mute
- ac
- music_query
- ▁mon
- ther
- ▁thanks
- cel
- ▁who
- ave
- ▁service
- ▁mail
- ty
- ▁hear
- de
- ▁si
- ▁wh
- ood
- ell
- ▁con
- ▁once
- ound
- ▁don
- ▁loc
- ▁light
- ▁birthday
- ▁inf
- ort
- ffe
- ▁playlist
- el
- ening
- ▁us
- ▁un
- ▁has
- own
- ▁inc
- ai
- ▁speak
- age
- ▁mess
- ast
- ci
- ver
- ▁ten
- ▁underst
- ▁pro
- ▁q
- enty
- ▁ticket
- gh
- audio_volume_up
- ▁take
- ▁bo
- ally
- ome
- transport_ticket
- ind
- iot_hue_lightchange
- pp
- iot_coffee
- ▁res
- plain
- io
- lar
- takeaway_query
- ge
- takeaway_order
- email_addcontact
- play_game
- ak
- ▁fa
- transport_traffic
- music_likeness
- ▁rep
- act
- ust
- transport_taxi
- iot_hue_lightdim
- ▁mu
- ▁ti
- ick
- ▁ha
- ould
- general_joke
- '1'
- qa_maths
- ▁lo
- iot_cleaning
- q
- ake
- ill
- her
- iot_hue_lightup
- pl
- '2'
- alarm_remove
- orrect
- ▁cont
- mail
- out
- audio_volume_down
- book
- ail
- recommendation_movies
- ck
- ▁man
- ▁mus
- ▁che
- me
- ume
- ▁answ
- datetime_convert
- ▁late
- iot_wemo_on
- ▁twe
- music_settings
- iot_wemo_off
- orre
- ith
- ▁tom
- ▁fr
- ere
- ▁ad
- xt
- ▁ab
- ank
- general_greet
- now
- ▁meet
- ▁curre
- ▁respon
- ▁ag
- ght
- audio_volume_other
- ink
- ▁spe
- iot_hue_lighton
- ▁rem
- lly
- '?'
- urn
- ▁op
- ▁complain
- ▁comm
- let
- music_dislikeness
- ove
- ▁sch
- ather
- ▁rad
- edule
- ▁under
- icket
- lease
- ▁bir
- erv
- ▁birth
- ▁face
- ▁cur
- sw
- ▁serv
- ek
- aid
- '9'
- ▁vol
- edu
- '5'
- cooking_query
- lete
- ▁joh
- ▁det
- firm
- nder
- '0'
- irm
- '8'
- '&'
- _
- list
- pon
- qa_query
- '7'
- '3'
- '-'
- reci
- ▁doll
- <sos/eos>
transcript_token_list:
- <blank>
- <unk>
- the
- to
- i
- me
- you
- is
- what
- please
- my
- a
- for
- 'on'
- in
- of
- email
- this
- it
- have
- from
- and
- play
- olly
- that
- new
- can
- do
- how
- tell
- about
- at
- any
- today
- not
- time
- are
- check
- list
- send
- with
- an
- one
- emails
- last
- will
- am
- again
- set
- next
- would
- was
- up
- like
- turn
- said
- calendar
- meeting
- get
- what's
- right
- all
- did
- be
- need
- want
- song
- tweet
- add
- event
- your
- news
- 'off'
- weather
- there
- lights
- more
- now
- alarm
- pm
- music
- show
- confirm
- train
- could
- think
- does
- make
- command
- just
- find
- when
- tomorrow
- much
- where
- week
- by
- give
- events
- know
- day
- start
- two
- latest
- response
- that's
- remind
- done
- but
- thank
- stock
- some
- you've
- answer
- five
- open
- current
- many
- remove
- radio
- good
- book
- 'no'
- facebook
- going
- it's
- volume
- reply
- work
- delete
- go
- complaint
- contact
- if
- service
- let
- thanks
- so
- hear
- once
- correct
- john
- playlist
- birthday
- got
- post
- ten
- order
- sorry
- has
- date
- hey
- coffee
- who
- rate
- three
- exchange
- further
- light
- twenty
- price
- mail
- reminder
- explain
- podcast
- ticket
- down
- really
- clear
- seven
- schedule
- alarms
- say
- morning
- change
- twitter
- cancel
- number
- dollar
- stop
- out
- appreciated
- hundred
- wrong
- don't
- information
- address
- contacts
- read
- york
- us
- which
- should
- 'yes'
- details
- songs
- between
- nine
- anything
- s1
- received
- playing
- shut
- dot
- mind
- com
- google
- most
- put
- job
- traffic
- four
- best
- six
- create
- recent
- yeah
- happening
- friday
- name
- very
- area
- mom
- or
- take
- appointment
- yeap
- room
- world
- home
- hour
- message
- eight
- clarify
- s2
- party
- episode
- here
- elaborate
- alexa
- appreciate
- customer
- i'd
- sent
- thing
- march
- look
- tonight
- place
- try
- after
- definition
- call
- well
- times
- rock
- phone
- speak
- today's
- whats
- food
- thirty
- see
- joke
- every
- pizza
- write
- lists
- game
- shopping
- weekend
- rephrase
- month
- matter
- s
- update
- station
- vacuum
- great
- detail
- long
- gmail
- old
- repeat
- city
- audiobook
- perfectly
- status
- inbox
- mute
- local
- near
- restaurant
- thousand
- tuesday
- year
- we
- media
- before
- around
- resume
- musch
- her
- house
- taxi
- hours
- didn't
- describe
- answers
- understand
- incorrect
- word
- listen
- first
- item
- d
- trump
- save
- days
- socket
- recipe
- nice
- u
- reminders
- social
- search
- as
- monday
- subject
- location
- movie
- saturday
- euro
- dinner
- them
- ask
- let's
- scheduled
- plug
- i'm
- gotten
- question
- minutes
- friend
- favorite
- meetings
- define
- instructions
- exactly
- cook
- understood
- sentence
- thursday
- grocery
- correcly
- their
- words
- temperature
- person
- amazon
- catch
- company
- mean
- something
- correctly
- living
- fantastic
- help
- following
- dollars
- rain
- speakers
- instruction
- helpful
- increase
- consumer
- evening
- family
- upcoming
- jazz
- saying
- way
- switch
- forecast
- task
- cleaner
- love
- late
- boss
- wednesday
- yesterday
- updates
- lower
- people
- cool
- wonderful
- twelve
- afternoon
- color
- wake
- oh
- lunch
- perfect
- back
- understanding
- useful
- amazing
- his
- dim
- movies
- chicago
- things
- takeaway
- fifty
- unread
- happy
- available
- noon
- wouldn't
- night
- had
- appointments
- idea
- michael
- doing
- over
- doesn't
- select
- hi
- shit
- may
- they
- delivery
- nearest
- buy
- apple
- car
- left
- confirmed
- report
- worth
- robot
- uber
- wemo
- sunday
- excellent
- outside
- blue
- looking
- messages
- top
- wear
- point
- too
- i've
- country
- prices
- bring
- store
- awesome
- unclear
- ok
- mark
- speaker
- app
- sound
- hot
- live
- jackson
- bad
- recently
- currently
- smith
- pull
- whatever
- india
- messed
- kitchen
- ninety
- percent
- him
- use
- office
- brightness
- care
- gave
- description
- tom
- regarding
- meaning
- meet
- siri
- bob
- joe
- hmm
- leave
- sarah
- smart
- come
- chicken
- seventeen
- walmart
- bill
- enough
- choose
- louder
- our
- trending
- born
- london
- zone
- account
- cnn
- audio
- president
- isn't
- compose
- coming
- second
- manner
- pick
- album
- uhh
- plus
- provide
- erase
- notification
- played
- channel
- donald
- pound
- instagram
- made
- bbc
- recommend
- happened
- united
- replay
- shop
- free
- dammit
- nope
- b
- nearby
- pop
- shops
- california
- highest
- notifications
- shuffle
- fm
- chinese
- currency
- uh
- restaurants
- jack
- april
- robert
- only
- been
- why
- states
- friends
- skip
- important
- he
- samsung
- later
- notify
- bedroom
- john's
- mails
- eleven
- red
- exact
- cold
- cup
- rates
- incorrectly
- fifth
- money
- boston
- spoke
- tomorrow's
- forward
- respond
- funny
- wait
- business
- market
- star
- headlines
- third
- favorites
- bother
- retry
- stocks
- high
- g
- favourite
- george
- umbrella
- directions
- wedding
- content
- m
- close
- spoken
- concert
- run
- alert
- searching
- mary
- into
- artist
- located
- mike
- anyone
- snow
- tickets
- then
- reset
- garden
- route
- hello
- tall
- likes
- talk
- forty
- share
- feed
- were
- indian
- washington
- difference
- remember
- convert
- receive
- tune
- level
- asking
- capital
- life
- dad
- yen
- street
- raining
- mistake
- correctly?
- quite
- pandora
- jane
- town
- yet
- player
- park
- san
- american
- far
- sports
- raise
- popular
- display
- these
- couldn't
- mountain
- dentist
- importance
- unimportant
- complain
- clean
- continue
- euros
- los
- ready
- yahoo
- can't
- classical
- politics
- newest
- lighting
- miami
- trip
- horrible
- info
- added
- prepare
- iphone
- machine
- mother
- miles
- via
- chris
- tv
- since
- bathroom
- state
- cheese
- request
- items
- oops
- ah
- closest
- warm
- microsoft
- settings
- value
- keep
- brighter
- note
- everything
- wife
- decrease
- okay
- using
- rap
- election
- sunny
- eat
- usa
- eighty
- fifteen
- until
- wanted
- wrongly
- dog
- obama
- years
- coat
- week's
- japan
- quiet
- paris
- angeles
- comcast
- target
- emailed
- airport
- interesting
- mcdonalds
- mr
- married
- green
- product
- past
- little
- other
- t
- listening
- cooking
- activate
- earth
- dance
- title
- florida
- rupee
- travel
- kids
- takeout
- pending
- america
- making
- its
- than
- doctor
- population
- bar
- plans
- power
- fourth
- silent
- ride
- milk
- how's
- seventy
- sure
- fine
- jennifer
- july
- sister
- brighten
- picture
- deliver
- singer
- clock
- inform
- brad
- burger
- never
- pesos
- object
- hero
- arrive
- classic
- olive
- games
- group
- watch
- line
- justin
- cost
- project
- called
- lets
- track
- still
- starbucks
- form
- repeating
- christmas
- breaking
- due
- cheapest
- forget
- posted
- james
- posts
- central
- lot
- stories
- whole
- small
- ever
- steak
- review
- requested
- wish
- david
- workout
- alex
- seems
- given
- gym
- largest
- la
- average
- compare
- china
- fifteenth
- having
- rupees
- band
- background
- meal
- online
- reserve
- file
- lamp
- laugh
- sun
- anniversary
- eastern
- busy
- mobile
- bit
- jokes
- places
- geographic
- else
- chess
- meant
- working
- p
- planned
- program
- seconds
- rated
- large
- issues
- road
- pay
- big
- holiday
- daily
- 'true'
- celebrity
- better
- hut
- being
- sixty
- away
- helped
- peter
- god
- cab
- someone
- internet
- page
- anna
- feel
- video
- steve
- opening
- lately
- sandy
- bank
- weeks
- id
- sam
- pitt
- river
- february
- i'll
- saved
- soup
- phrase
- distance
- economy
- hits
- sony
- eggs
- low
- water
- text
- topic
- co
- begin
- attend
- groceries
- adele
- reach
- within
- pause
- half
- yourself
- kind
- dark
- replied
- enter
- must
- asked
- beatles
- fun
- ingredients
- against
- invite
- soon
- colour
- different
- jacket
- updated
- seattle
- denver
- canada
- vegas
- mode
- pasta
- january
- doe
- listed
- refresh
- listened
- team
- longest
- spotify
- remainder
- telling
- mumbai
- you're
- orlando
- card
- rice
- during
- reduce
- locate
- future
- starting
- boil
- genre
- class
- slow
- famous
- named
- allen
- youtube
- works
- olly's
- dc
- brew
- through
- pounds
- football
- pacific
- white
- sings
- egg
- oil
- festival
- clothes
- moment
- die
- orange
- school
- kim
- las
- divided
- whether
- photo
- everyday
- ryan
- bills
- headline
- fix
- square
- npr
- jake
- brother
- todays
- terrible
- weekly
- type
- topics
- months
- chat
- yoga
- reading
- products
- extra
- cut
- adjust
- king
- personal
- client
- jan
- data
- doctor's
- computer
- rohit
- johns
- o'clock
- canadian
- mistakes
- rid
- names
- control
- sunscreen
- per
- lady
- head
- taylor
- always
- budget
- pink
- bought
- x
- side
- ahead
- articles
- english
- ny
- able
- reschedule
- fast
- hashtag
- tweets
- countries
- numbers
- running
- alabama
- blank
- madonna
- bright
- yellow
- west
- went
- options
- story
- october
- russia
- together
- n
- basketball
- joe's
- dominos
- tomorrows
- less
- situation
- colors
- mom's
- end
- payment
- drop
- downtown
- provider
- joes
- means
- helping
- mexican
- friday's
- cricket
- return
- needed
- death
- tech
- charlotte
- heavy
- draft
- sea
- paul
- r
- condition
- seventh
- dallas
- hip
- related
- article
- heard
- war
- elvis
- everest
- problem
- stating
- bieber
- system
- sales
- shoes
- hard
- become
- based
- kevin
- age
- she
- quality
- mile
- hair
- gas
- biggest
- inr
- climate
- hate
- twentieth
- sucks
- dean
- angelina
- turkey
- harry
- cake
- national
- record
- longer
- dave
- subjects
- brown
- supposed
- ocean
- church
- drive
- gandhi
- needs
- above
- theatre
- cookies
- abraham
- gone
- map
- television
- such
- face
- sale
- jim
- francisco
- sean
- june
- romantic
- compared
- curry
- ball
- jeff
- subway
- lincoln
- bed
- lagos
- turned
- south
- won
- trains
- girlfriend
- mahatma
- nsa
- hop
- amy
- commute
- solve
- came
- created
- dont
- history
- math
- telephone
- says
- laptop
- pawel
- offer
- fox
- single
- sixth
- midnight
- missed
- potter
- loud
- richard
- chuck
- looks
- practice
- body
- dan
- husband
- waiting
- birth
- stuff
- adam
- sender
- gaga
- truck
- france
- texas
- restart
- intel
- colours
- statue
- liberty
- intensity
- previous
- problems
- outlook
- visit
- wine
- peso
- continent
- utterance
- helps
- asssistance
- each
- north
- grand
- patrick
- match
- opinion
- plan
- trump's
- papa
- instead
- martin
- root
- purchase
- perry
- richards
- closing
- cloudy
- eddie
- senders
- move
- susan
- tesco
- size
- shows
- folder
- spaghetti
- doctors
- stores
- presidential
- dates
- theater
- menu
- agenda
- ann
- code
- animal
- frequency
- kansas
- roomba
- technology
- tasks
- without
- flight
- who's
- beach
- empty
- tired
- driving
- entire
- carry
- british
- dr
- asia
- rccg
- uncle
- vacation
- pepperoni
- programme
- standard
- reminding
- maximum
- starts
- tallest
- gonna
- fourteenth
- playback
- medium
- nike
- cruise
- changed
- diego
- arrange
- bowie
- learn
- mount
- particular
- costumer
- sundays
- fire
- calls
- silence
- podcasts
- spain
- dominoes
- website
- italy
- strongly
- agree
- agreed
- suggest
- mood
- fourteen
- result
- metallica
- thinking
- session
- profile
- england
- active
- ohio
- grid
- fall
- pot
- marriage
- queue
- told
- narendra
- jerry
- mt
- frank
- tenth
- wishes
- recording
- finished
- international
- calculate
- hit
- towers
- ninth
- site
- feeling
- macy's
- tag
- actually
- black
- birthdays
- hottest
- mary's
- expect
- snapchat
- jay
- smith's
- mountains
- building
- setting
- cleaning
- height
- initiate
- hall
- breakfast
- martha
- conference
- aol
- win
- steps
- fancy
- smartphone
- led
- zeppelin
- houses
- holy
- currencies
- club
- children
- atlanta
- einstein
- happen
- cell
- landline
- coworker
- objects
- negative
- modi
- soft
- haven't
- mention
- radius
- books
- daughter
- results
- earlier
- bruce
- butter
- stars
- remaining
- delivers
- device
- domino's
- unmute
- joy
- twelfth
- voice
- taking
- snowing
- sick
- boots
- cleveland
- journey
- destination
- worker
- poker
- lee
- katy
- australia
- incoming
- least
- lisa
- experience
- million
- recurring
- scenario
- sacramento
- geography
- library
- brief
- jolie
- monthly
- elton
- sirius
- alaska
- lyrics
- oven
- log
- random
- moscow
- barack
- disney
- alive
- measurements
- maker
- poor
- error
- stone
- versus
- hotmail
- interpret
- sarah's
- memorial
- goes
- stay
- delhi
- health
- special
- speed
- thirteen
- test
- edinburgh
- credit
- facts
- cat
- neighborhood
- sometime
- empire
- entry
- financial
- comment
- link
- hockey
- circuit
- holidays
- singh
- jodhpur
- rockville
- ones
- features
- bread
- eye
- mall
- directv
- contain
- seacrest
- chance
- under
- table
- few
- hotel
- rude
- services
- yesterday's
- certain
- fb
- abc
- netflix
- linda
- notes
- length
- reminded
- shoe
- wild
- employees
- beef
- sushi
- fastest
- thirteenth
- recommendations
- fish
- tennis
- main
- jersey
- jones
- break
- concerts
- gomez
- angry
- uk
- replies
- emily
- kickball
- released
- upload
- effects
- quickest
- italian
- caroline
- emma
- real
- human
- minute
- took
- activity
- jeff's
- staff
- handler
- touch
- hold
- joanne
- range
- moon
- submit
- ends
- tomato
- lost
- prime
- twelveth
- phones
- amd
- hectic
- bobburgers
- screwed
- porch
- reviews
- vegan
- rihanna
- houston
- ham
- mondays
- general
- engaged
- walk
- melody
- electronic
- held
- selected
- equal
- getting
- tata
- wall
- clothing
- round
- leaving
- nasdaq
- total
- pressure
- expensive
- border
- exhibition
- trash
- november
- handle
- halloween
- attachment
- kardashian
- shoot
- rewind
- rating
- toronto
- department
- procedure
- member
- ray
- chelsea
- rohan
- arrow
- checked
- modify
- wasn't
- chances
- protest
- lottery
- prince
- include
- jo
- net
- pie
- sleep
- enjoy
- nineties
- taco
- banana
- source
- quieter
- bored
- desert
- guys
- gary
- activities
- already
- contract
- st
- minister
- disable
- woman
- europe
- arijit
- audible
- presentation
- cad
- records
- trips
- booking
- tacos
- sally
- non
- centre
- direct
- advance
- selena
- policy
- orders
- stefan
- arrival
- divide
- chocolate
- dish
- teeth
- hdfc
- silvia
- stove
- coast
- defined
- digest
- snafu
- manager
- pinterest
- tim
- conversation
- bulldog
- titanic
- brunch
- heat
- canyon
- dial
- earliest
- region
- stopped
- foreign
- folk
- watching
- brexit
- albert
- joejoe
- early
- cities
- manchester
- december
- biloxi
- often
- questions
- garage
- tunes
- possible
- ms
- ar
- kiss
- shares
- bangalore
- heading
- derek's
- desk
- cheers
- tomasz
- terms
- companyname
- sara
- asap
- super
- meryl
- streep
- rent
- dress
- cinema
- usually
- trend
- conversion
- friendly
- ties
- ordered
- electricity
- marked
- migration
- choice
- journal
- norris
- aniston
- mailbox
- minus
- fried
- miley
- cyrus
- newly
- theory
- rest
- swift
- windy
- dan's
- mass
- comes
- selfie
- wings
- julie
- masti
- celine
- plays
- pack
- including
- responded
- jason's
- ale
- apples
- dolly
- oranges
- lg
- washer
- substitute
- global
- feedback
- grandma
- ben
- drainage
- invoice
- sunset
- takeaways
- man
- art
- universe
- suitable
- antonio
- full
- delivered
- laundry
- wrote
- min
- register
- snap
- nixon
- bird
- spend
- rome
- jesse
- calories
- cappuccino
- quickly
- buying
- britney
- spears
- spacey
- jobs
- arriving
- jean
- potholes
- janet
- pictures
- ashwin
- morgan
- freeman
- baby
- microwave
- yellowstone
- francis
- dubai
- invitation
- hope
- melbourne
- rocky
- kroger
- rivers
- charles
- jim's
- rectify
- statement
- carpet
- baked
- jessica
- meatballs
- mushrooms
- amount
- switzerland
- relating
- zero
- front
- phonebook
- hows
- cheesecake
- carryout
- magic
- ola
- replace
- recorded
- access
- land
- where's
- elephant
- removed
- liz
- load
- metal
- package
- diner
- goog
- bob's
- k
- year's
- mars
- guy
- assistant
- rahman
- eagle
- part
- burn
- aran
- stevens
- daughter's
- eighteen
- chemistry
- action
- selling
- thats
- koc
- lines
- sugar
- major
- chair
- easter
- departing
- africa
- nigeria
- requests
- conditions
- you'll
- manhattan
- roll
- cracow
- candy
- crush
- bell
- massive
- gold
- happens
- usual
- andrew
- equals
- dead
- plane
- graduation
- warned
- shaun
- triangle
- wyatt's
- pass
- function
- max
- space
- programmes
- awful
- parton
- exciting
- battery
- hwu
- recipes
- dirham
- rushmore
- johndoe
- button
- express
- pontificate
- easiest
- magda
- selection
- reservations
- guess
- copy
- classes
- supplies
- schedules
- winning
- berkeley
- notice
- headed
- outgoing
- mi
- rainy
- wikipedia
- entertainment
- dow
- everyone
- aunt
- furniture
- oceans
- softer
- heart
- newmail
- while
- baseball
- easy
- stations
- philadelphia
- alice
- swat
- yearly
- poem
- soccer
- president's
- milan
- paper
- kardashian's
- loop
- shown
- sandals
- yo
- scan
- nevada
- apahelp
- coldplay
- french
- bay
- higher
- rumplestiltskin
- airlines
- fresh
- standing
- cream
- hamburger
- broadway
- oscars
- tokyo
- cable
- shipment
- formula
- teacher
- sweet
- golden
- newsfeed
- confirmation
- shirt
- austin
- own
- canon
- wanna
- gods
- spanish
- count
- seat
- ideas
- study
- tara
- mutual
- jennifer's
- because
- edit
- denmark
- direction
- timer
- growth
- luther
- marketing
- cd
- mine
- public
- peter's
- bolshoi
- flat
- crazy
- others
- dry
- pub
- theatres
- bro
- fashion
- teams
- cycle
- pickup
- dion
- teach
- series
- checkout
- male
- noise
- solitaire
- pf
- cassie
- travelling
- davis
- naty
- income
- disco
- dropping
- donna
- follow
- shelly
- accidents
- plot
- irene
- download
- circle
- law
- tea
- organize
- principal
- weekends
- camera
- solution
- bombay
- wuthering
- heights
- charged
- colorado
- kong
- keys
- race
- mona
- entries
- j
- nyc
- potatoes
- gospel
- raju
- trivia
- bike
- dating
- oregon
- event's
- prefers
- rush
- percentages
- peking
- cooker
- husbands
- won't
- tower
- heaven
- hugh
- june's
- fake
- figure
- purple
- takes
- l
- howard
- stern
- nineteen
- percentage
- motorola
- doe's
- outstanding
- tesla
- laura
- dale
- warning
- eighteenth
- golf
- island
- career
- bieber's
- vacuuming
- pizzas
- refund
- weekday
- s's
- derek
- thanksgiving
- delayed
- query
- buffet
- rachel
- pants
- wash
- survey
- photos
- except
- topography
- door
- jen
- queen
- depart
- cheap
- theaters
- web
- jesse's
- multiply
- workhouse
- press
- click
- loss
- recipient
- verizon
- volcano
- rolls
- royce
- pixel
- affirmative
- completing
- thai
- walking
- bananas
- hollywood
- equation
- dirty
- scores
- katrina
- exam
- creating
- letter
- sing
- construction
- broadcast
- tom's
- rupies
- management
- permanently
- converting
- ist
- iron
- religion
- kings
- tucson
- standup
- tic
- tac
- toe
- headset
- sex
- diapers
- purpose
- seventeenth
- eighth
- dylan
- temple
- refer
- gift
- fact
- drink
- inches
- air
- carpets
- newcastle
- clients
- private
- tasting
- sams
- nj
- chili
- cultural
- swimming
- they're
- iowa
- jordan
- period
- accept
- cincinnati
- college
- rainbow
- myself
- deep
- deepest
- warming
- sky
- vp
- seeing
- indianapolis
- kmart
- nikesupport
- image
- suck
- broiler
- timeline
- dell
- parisa
- brandon
- example
- y
- filter
- sad
- shine
- sixteen
- christian
- pic
- pdr
- fry
- another
- network
- omelette
- kilometers
- municipality
- giving
- leo
- cups
- earthquake
- susan's
- application
- cross
- across
- carl
- pawel's
- sauce
- relativity
- rail
- sisters
- letting
- shorts
- vs
- rajesh
- swift's
- starving
- discussing
- block
- written
- n9ne
- women
- celebrities
- bake
- cookie
- continents
- workers
- leonardo
- mel
- gibson
- shall
- beauty
- sum
- fair
- deli
- middle
- same
- nile
- sell
- role
- boat
- sandwich
- parts
- hearing
- knows
- sand
- manoj
- delivering
- rahul
- neil
- australian
- kindly
- properly
- assist
- esurance
- emilia
- breach
- loudly
- harvard
- marc
- nintendo
- scrabble
- farm
- lie
- patio
- greg
- screen
- degrees
- yesterdays
- carrots
- receipt
- lasagna
- clooney
- there's
- degree
- preferences
- hallway
- latin
- nicest
- lauren
- worst
- also
- checkers
- input
- boyfriend
- masala
- tournament
- monet's
- burmuda
- section
- eric
- japanese
- supervisor
- junk
- performance
- effective
- urgent
- oldest
- tone
- sweater
- goa
- bag
- lowest
- aus
- peace
- julia
- summer
- fan
- hurricane
- colder
- steven
- sachin
- tendulkar
- watson
- exorbitant
- bags
- macs
- yulia
- matthew
- pole
- toby
- pennsylvania
- carmen
- tiffany
- complete
- electric
- wallet
- albums
- maths
- distribution
- eminem
- familiar
- regard
- upwards
- ron
- couple
- acme
- angel
- zoo
- nineteenth
- shazam
- inflation
- offers
- devotional
- jackie
- tony
- artificial
- intelligence
- grill
- father
- predictions
- repeats
- manila
- cooked
- reason
- learning
- nowadays
- cheer
- jingle
- bells
- anxiety
- hoizer
- girl
- pondichery
- position
- teachers
- dictionary
- nap
- cafe
- m's
- meting
- crime
- eve
- horn
- bristol
- pubs
- companies
- johnson
- resolve
- waterfall
- female
- biriyani
- drama
- nothappy
- haircut
- remote
- colleagues
- bones
- saturdays
- cambridge
- jam
- maine
- category
- invented
- chang's
- boy
- planning
- chen
- assignment
- publish
- hunt
- alerts
- dad's
- deal
- leading
- trail
- follows
- young
- jay's
- summary
- ko
- beyonce
- vergara
- mexico
- whishes
- arrived
- placid
- specific
- depot
- tikka
- expire
- markets
- problematic
- highly
- blues
- thirtieth
- brooklyn
- tatum
- argentinian
- redso
- des
- moines
- women's
- richard's
- cellphone
- division
- hong
- political
- charley's
- steakhouse
- accident
- normal
- wakeup
- satellite
- freezing
- forex
- jimmy
- chores
- snooze
- design
- museum
- guide
- speech
- ran
- shift
- inferior
- mashed
- jcpenney
- environment
- raw
- disturbed
- sia
- chips
- anybody
- present
- reynolds
- limbaugh
- weekdays
- islands
- viral
- asian
- streets
- inception
- meatloaf
- alternative
- compliant
- sensex
- phil
- est
- hand
- switched
- recap
- ferrari
- nandy
- promotion
- kate
- brothers
- ma
- followers
- closer
- deleted
- gloves
- bands
- platter
- boland
- corner
- strong
- chipotle
- eu
- amtrak
- son
- charges
- version
- rajdhani
- chart
- manage
- musical
- hat
- den
- tonight's
- syria
- stronger
- homelessness
- nails
- support
- ally
- sentences
- penn
- ago
- turning
- center
- hungry
- actress
- keywords
- usain
- bolt
- ongoing
- cancelled
- idol
- julia's
- wells
- fargo
- ri
- sarahs
- computers
- devices
- toms
- regards
- quote
- production
- brother's
- inch
- shell
- marathon
- directory
- dictate
- huey
- lewis
- elections
- alone
- marry
- apart
- danielle
- jane's
- mankind
- singularity
- nye
- feynman
- whom
- inventory
- makes
- dept
- apple's
- education
- bugs
- settle
- when's
- geographical
- jason
- exchanges
- mcdonald's
- tgi
- ship
- hershey
- facing
- faulty
- zita
- jeremy
- irons
- wallmart
- sphere
- hp
- gottten
- pardon
- engagement
- showing
- format
- absolute
- interest
- messenger
- gate
- enable
- columbus
- hips
- tour
- sterling
- thumbs
- priced
- tablet
- amc
- bible
- safeway
- organism
- undertake
- freedom
- charger
- documents
- jars
- clay
- members
- o
- vegetables
- delicious
- beaumont
- tx
- finance
- exhibitions
- trumps
- month's
- v
- applebee
- dakota
- bus
- brighton
- pa
- darken
- promoted
- liverpool
- utah
- suggestions
- micheal
- complaints
- pencil
- keith
- fridays
- temperatures
- hardware
- exercise
- jpearsonjessica
- release
- hoover
- goshen
- chester
- wood
- woodchuck
- healthcare
- borges
- calculator
- dune
- reality
- jobe
- gossip
- piece
- convenient
- titled
- pork
- belongs
- hongbin
- wreck
- tool
- started
- gather
- bruno
- costa
- patel
- daniel
- corporate
- controversy
- wendy's
- texans
- biography
- flowers
- investing
- arrives
- finish
- spot
- crop
- culture
- enjoying
- fetch
- kill
- auto
- washing
- buffalo
- he's
- titles
- ross
- whose
- types
- pleasant
- erin
- madison
- tuesday's
- lif
- khan
- affordable
- season
- policies
- c
- expected
- hypothesis
- seth
- kicked
- unhappy
- gallery
- xorg
- used
- monali
- thakur
- noodles
- cher
- sally's
- tracks
- mid
- launch
- glasgow
- bridge
- releases
- pitt's
- server
- clarity
- yens
- motivational
- scratch
- blanket
- aib
- reads
- singing
- monas
- tuesdays
- winter
- rocket
- lands
- chan
- economic
- sister's
- aa
- film
- pb
- indiana
- departure
- pipeline
- stitch
- sleeved
- hail
- logan
- style
- quantum
- physics
- labeled
- delia
- began
- rrcg
- shape
- awards
- improve
- pertaining
- trance
- lives
- weight
- met
- brian
- sinatra
- sunglasses
- attending
- falls
- requesting
- sunday's
- overhead
- greg's
- rom
- historic
- georgia
- guest
- jaipur
- iroomba
- alfredo
- pride
- prejudice
- fill
- interview
- daddy
- wangs
- manchow
- university
- locally
- lowes
- tiring
- east
- medical
- metro
- bach
- schubert
- rooster
- czk
- channing
- pad's
- identify
- yelp
- scandal
- affect
- suffering
- enabled
- arby's
- saw
- mango
- itunes
- highlights
- brings
- sixteenth
- tourist
- wendys
- presley
- sold
- intern
- affairs
- fries
- buttermilk
- panda
- wants
- floor
- clint
- eastwood
- moe's
- planets
- equivalent
- morrocco
- gravity
- uploaded
- someplace
- availability
- issue
- fly
- jpy
- natural
- delta
- disappointed
- files
- q
- cindy
- shortest
- simple
- ring
- lotion
- maroon
- fort
- died
- bonus
- repetitive
- icecream
- statistics
- rebel
- lawn
- leith
- measure
- daytime
- september
- pilots
- pda's
- shade
- sil
- cap
- punjab
- gwalior
- ashley
- juice
- nagar
- ellen
- programs
- fairs
- invest
- suits
- ingredient
- launches
- leaves
- bjork
- crater
- elevation
- stewart
- hotels
- spices
- bubbles
- grass
- broccoli
- capricious
- philosophy
- anthony's
- apply
- pings
- gps
- thomas
- koontz
- acdc
- beijing
- ratings
- union
- prayer
- todo
- angles
- scissors
- stashable
- cinch
- bacon
- passive
- que
- occurred
- lakeland
- tulsa
- advise
- singapore
- risotto
- invested
- model
- helmsworth
- bench
- julian
- buddy
- rogers
- brains
- chap
- badminton
- dick
- lopez
- apartment
- points
- germany
- unknown
- thugs
- healthy
- rash
- casey
- oriam
- ps
- plants
- mailed
- ikoyi
- grassmarket
- marleen's
- locations
- bush
- mac
- reaching
- allan
- till
- cheering
- guitar
- oxford
- densely
- populated
- son's
- hubby
- comparison
- putin
- barcelona
- gss
- energy
- pan
- nyack
- worked
- unavailable
- bryan
- adams
- miss
- checkbook
- jared's
- enrique
- iglesias
- forms
- jeans
- voices
- alan
- tudek
- animals
- olx
- mts
- freed
- jenn's
- coordinates
- humid
- demographic
- otherwise
- tiffany's
- outdoor
- sheila
- lincon
- dust
- serve
- conduct
- estimated
- gaana
- funds
- downloaded
- indignation
- meijer
- necessary
- grubhub
- pancakes
- mario
- bars
- birmingham
- sites
- donuts
- chopra
- textual
- rapids
- cant
- prefix
- sounds
- provides
- amy's
- benton
- leeds
- dsw
- returning
- defective
- digital
- bhaji
- carlos
- linux
- upgrade
- shark
- attacks
- screening
- exposure
- souffle
- tracking
- od
- progress
- paused
- gilmore
- hour's
- imdb
- orleans
- european
- gdp
- surfers
- theme
- ash
- ikea
- klm
- marilia
- cars
- robin
- williams
- surfin
- ottawa
- trade
- contains
- field
- someone's
- prague
- brno
- rene
- interests
- radiolab
- harris
- strive
- accommodating
- fell
- relationship
- pharmacy
- memo
- nancy
- paid
- expressing
- disapproval
- yard
- royale
- hide
- amber
- cheeseburger
- coca
- cola
- al
- matrimony
- scott
- potato
- funniest
- polling
- mother's
- chase
- xmtune
- matt
- murphy
- detroit
- taiwan
- organic
- secrets
- domino
- ac
- assistants
- z
- fred
- owner
- required
- saga
- hanks
- trading
- erosser
- rosser
- vikki
- dhaka
- notepad
- oldies
- alison
- recur
- w
- mentioning
- languages
- lavender
- toned
- videos
- stein
- chennai
- resuming
- moms
- foke
- beep
- discussion
- woodland
- lowry
- meetups
- powerball
- toyota
- focus
- concentrate
- nbc
- roosendaal
- deactivate
- shrimp
- parmigiana
- bumper
- spouses
- lucknow
- paying
- hurry
- served
- rhythm
- enquiry
- hartford
- plaza
- hyundai
- wishing
- websites
- briefing
- complex
- calculations
- jarvis
- highway
- fired
- dissatisfied
- sandra
- bullock
- ratio
- haskell
- sharon
- horse
- mum's
- dillinger
- sunblock
- sub
- tab
- crude
- software
- stadium
- step
- short
- reddit
- appoints
- agra
- sheet
- keyboard
- kfi
- district
- connery
- carnival
- wok
- shutting
- phoenix
- cloth
- rehan
- lego
- alphabetical
- mexco
- charles's
- foodpoisoning
- ultra
- madonna's
- harley
- davidson
- daylight
- afi
- infy
- launched
- inboxes
- secretary
- increased
- resolving
- fuel
- injector
- multiple
- interval
- mike's
- espresso
- sasha
- susie
- salesperson
- country's
- cylinder
- specifications
- ivory
- pst
- zoella's
- jackman
- reacting
- potential
- frying
- boise
- wendy
- divisible
- automated
- katherine
- pre
- gaming
- containing
- decade
- industry
- foot
- chemical
- cause
- taste
- bra
- julianne
- hough
- addresses
- vonstaragrabber
- lion
- restroom
- kohl's
- mentioned
- hz
- royal
- bloodline
- relationships
- billings
- levin
- quarter
- lori's
- lori
- exclamation
- definitions
- birds
- raj
- priya
- allows
- worlds
- kelly
- clarkson
- garam
- scarlet
- found
- cub
- dmv
- excessively
- lake
- dried
- reporting
- smile
- changes
- charmin
- eternal
- smoked
- meat
- beanos
- processing
- chip
- logic
- insightbb
- highland
- terrace
- child
- peck
- midwest
- cardinal
- anthony
- barrack
- jancy
- thompson
- cassy
- gulls
- alternate
- sin
- dragons
- msnbc
- residential
- leader
- siblings
- pedro
- serendipitous
- bestbuy
- targets
- wawa
- mentions
- engagements
- hawaii
- jr
- applied
- halifax
- ahmedabad
- monty
- python
- stronomy
- blahblah
- blah
- arrivals
- subtract
- payoneer
- formal
- connors
- indranagar
- transform
- marcia
- perpetual
- arranging
- cvs
- callum
- steffi
- attention
- kanye
- mommy
- chucky
- forest
- polarized
- proposal
- conrad
- coldest
- hue
- dictator
- clancy
- geranium
- delays
- build
- lense
- rai
- transistor
- dildo
- warren
- exercises
- forman
- kinley
- bottle
- retail
- yan
- regal
- unprofessional
- annual
- payday
- tricep
- arts
- ripped
- vietnam
- trends
- chaise
- preparation
- nestle
- paula
- deen's
- bmw
- microsoft's
- bookstore
- below
- moving
- pretty
- lock
- administrator
- edition
- airways
- marvel
- garner's
- rubix
- cube
- kfc
- milwaukee
- pager
- alexander
- gilchrist
- goods
- performing
- unopened
- security
- chain
- probiotic
- colleague
- knowing
- novel
- fiesta
- comcasts
- acer
- farmers
- fraud
- weighing
- india's
- gotse
- grapefruit
- similar
- tmobile
- nifty
- sessions
- recital
- greatest
- openings
- zip
- demento
- fatigued
- disease
- prevention
- overcharged
- unquote
- cotton
- tweeter
- railways
- flipkart
- fist
- renee
- nutritional
- starred
- calculated
- mattress
- hillstead
- paul's
- jill's
- disregard
- pesto
- stinks
- nobody
- behind
- kid
- nature
- ounces
- ted
- boiled
- dancom
- wars
- fmod
- span
- along
- malls
- joining
- frequently
- realdonaldtrump
- bobby
- mcgee
- pwd
- obamacare
- clicked
- falling
- pampers
- virgin
- hayden
- pat
- amie
- infosys
- technologies
- roads
- aerosmith
- airtel
- dairy
- sends
- dues
- tobytoday
- ileana
- d'cruz
- rended
- taj
- ashok
- typhoon
- rama
- final
- missouri
- virginia
- announce
- haughty
- salmon
- joking
- goodnight
- rebecca
- believe
- vowels
- ban
- haze
- insight
- cable's
- fellow
- tweeters
- canoe
- warriors
- assassinated
- acceleration
- detailed
- wife's
- robert's
- angus
- interested
- jen's
- sjobs
- cdn
- ruth
- simran
- aapa
- kadai
- armor
- sms
- indefatigable
- indicate
- fra
- floors
- modcloth
- honor
- weigh
- priority
- hiking
- smoky
- judawa
- expense
- deals
- plethora
- sam's
- august
- elain
- bbq
- leap
- congressional
- representatives
- voting
- reproductive
- ge
- bbb
- contacted
- assigned
- jill
- drafts
- scoring
- touches
- relevance
- goggins
- medvesek
- philippiness
- booked
- board
- locality
- beth
- katey
- fans
- approximately
- charitable
- rae
- darker
- anymore
- printing
- significance
- fondle
- mate
- larry's
- larrylarry
- faripir
- gurpur
- seasons
- softball
- refreshments
- jamie
- carrie
- underwood
- abdul
- kalam
- subterranean
- colombo
- sri
- lanka
- quit
- dollar's
- award
- among
- spouse
- forgot
- ass
- millionaire
- indians
- americas
- julie's
- transcribe
- garbage
- geographics
- tree
- criticize
- tanzania
- heather's
- answering
- spam
- phishing
- reseda
- axel
- kailey
- prettiest
- century
- mattel
- toys
- grateful
- fixing
- maidan
- sophia
- betty
- reasons
- russian
- applicable
- loving
- claire
- crashed
- batteries
- philips
- person's
- compile
- ali
- matthews
- apologize
- comcastcom
- luke
- jean's
- carefully
- beg
- trying
- flooringco
- seams
- baking
- skiing
- calming
- continuously
- tale
- roraima
- innova
- bowling
- beginning
- identifier
- diverse
- santa
- continuous
- hangman
- vegetarian
- roast
- rewards
- allow
- immediately
- shelley
- hennessey
- waking
- dicaprio
- ways
- immigration
- raised
- lose
- digger
- cosmetic
- perth
- feet
- chick
- tornadoes
- upstairs
- badly
- timings
- lobster
- runner
- forum
- thunderstorms
- powered
- plugged
- rod
- mgccc
- bleed
- ga
- pune
- mixed
- dishes
- radisson
- cheetah
- what'sapp
- cm
- father's
- skill
- graham
- eggless
- collect
- favorited
- flag
- ssmith
- virtual
- bryant
- spots
- scapingyards
- washed
- springfield
- draw
- insurance
- quantity
- brightener
- cuba
- stream
- raincoat
- maiden
- soundtracks
- deliveroo
- humidity
- crowded
- built
- mesa
- rosenstock
- workpdf
- occurring
- environmental
- dbell
- converse
- radia
- logged
- scabble
- loads
- jacob
- hasbro
- aldi
- piramid
- completely
- method
- hems
- loose
- connect
- snapchats
- arizona
- festivals
- hospital
- peppers
- bowl
- korn
- lupe
- eurostar
- umf
- unchecked
- berlin
- lane
- synonyms
- hampshire
- shakira
- brads
- keanu
- reeves
- johns's
- increasing
- burgers
- stan
- falklands
- valley
- maria
- hangin
- glow
- we're
- newsource
- clark
- carrey
- jams
- crashing
- outback
- sugars
- defines
- joel
- venue
- huffington
- images
- elizabeth
- case
- agnes
- randomly
- mecky
- incredible
- even
- decreased
- vacations
- honey
- akon
- barbara
- handsome
- forensic
- spielberg
- korea
- coding
- achievements
- albert's
- clerk
- hopes
- zimbabwe
- buble
- research
- excel
- gun
- rogen
- resin
- tooth
- filling
- mody
- marinara
- vicki's
- mardi
- gras
- monika
- relatives
- chillin
- lol
- levis
- tricounty
- messy
- disgusted
- emoteck
- foroogh
- quick
- decline
- emailstudy
- atdfd
- giant
- trey
- kalka
- mcdo
- timestamp
- operate
- watched
- infinity
- tactics
- upbeat
- synonym
- racing
- towards
- fog
- muted
- coke
- eighties
- tvs
- theresa
- brent
- kamycka
- dejvicka
- tap
- peanut
- circumference
- saskatoon
- sync
- sofa
- mcdonald
- silenced
- catalogue
- algorithm
- sanctimonious
- talked
- realize
- reveca
- paok
- wipe
- bisque
- br
- rather
- silly
- stat
- tar
- vitamins
- gain
- xm
- fongs
- anywhere
- zanes
- se
- chronicles
- weber
- commence
- causes
- sangli
- german
- hedges
- truthdig
- coffees
- commuter
- plain
- mimo's
- oscar
- restrictions
- treasure
- louis
- stevenson
- fifa
- beast
- pav
- prambors
- hannah
- ringcast
- vegetable
- episodes
- overnight
- apps
- nathan
- dismiss
- karl
- hourly
- eyes
- breeds
- inside
- tribune
- join
- crabmeat
- shakira's
- yankee
- greenwich
- gala
- jump
- recall
- johnny
- cash
- pod
- cast
- rare
- suppose
- enjoyment
- emo
- nayagara
- passion
- pit
- marckel
- bohemian
- emma's
- arijit's
- pet
- prize
- receptionist's
- beat
- freds
- probles
- patagonia
- quart
- '?'
- zach
- duration
- jlo
- alphabetic
- phohouse
- badpho
- daybreak
- biryani
- battle
- divergent
- moby
- jungle
- jaiho
- casserole
- shooter
- columbine
- wednesdays
- soul
- accumulation
- squash
- calm
- debate
- schools
- amd's
- lee's
- managers
- myspace
- relaxing
- bahar
- antarctica
- atmosphere
- pinpoint
- payments
- illinois
- louisiana
- cfo
- pool
- vyas
- morel
- mysore
- rise
- sdfa
- newspaper
- calorie
- dangerous
- sunrise
- mostly
- dining
- shake
- flood
- prescription
- mix
- view
- jana
- spa
- comments
- pear
- factor
- clearance
- northern
- language
- arnold
- exxon
- mobil
- dragon
- fruit
- differences
- seashells
- seashore
- velocity
- motorolla
- haggis
- fiji
- irwin
- similarities
- hypertrophy
- sharukh
- implement
- kazakhstan
- mediterranean
- roman
- grigorean
- hardword
- quead
- amphibious
- roberts
- climatic
- tornado
- prone
- rising
- declining
- megatel
- denzel
- washington's
- citizens
- arm
- persos
- belarus
- gyllenhal
- geology
- helicopter
- iphone's
- drained
- manger
- navy
- daikin
- jerk
- nexus
- interaction
- platform
- tweeting
- at&t
- mahaboobsayyad
- kellogg
- ashmit
- ismail
- listing
- enalen
- projects
- clara
- clinic
- exams
- ammunition
- mark's
- divya
- jjnzt
- activation
- andy
- terry's
- brenden
- jeffrey
- burnette
- protests
- joshua
- pianist
- whiz
- schadenfraude
- rials
- storage
- bot
- provided
- massachusetts
- channin
- store's
- rump
- prior
- re
- intelligent
- recognise
- irobot
- areas
- lighter
- yell
- uses
- cn
- gadgets
- skynet
- marie
- lamb
- balcony
- nyt
- bennett
- ralph
- pda
- balloon
- maps
- degeneres
- character
- evans
- actor
- fitbit
- malika
- shivaji
- attitude
- lily's
- concerned
- upon
- startup
- stuffs
- tawa
- relative
- legacy
- cst
- leah
- remini
- mortgage
- amed
- cleaners
- seal
- abita
- grammar
- backdoor
- minimize
- leisure
- billie
- spicy
- training
- comfortably
- sunburn
- minneapolis
- habits
- braking
- notifier
- swan
- thoughts
- pleasure
- those
- kashmirstart
- sells
- i'dl
- kettle
- 'false'
- rta
- valia's
- visiting
- techno
- mornings
- mow
- cbs
- slightly
- francine
- vice
- postpone
- mins
- xyz
- hwood
- kept
- spider
- reopen
- billy
- connery's
- eiffel
- itinerary
- crash
- valentine's
- likexchange
- divorce
- danville
- il
- government
- menus
- capabara
- origin
- assistance
- vicinity
- chit
- drinks
- flabbergasted
- xy
- self
- double
- castle
- refrigerator
- bakery
- spray
- pyramids
- bio
- basic
- humans
- schwarzenegger
- inchoate
- rules
- caftan
- raleigh
- hobby
- ajay
- devgn
- corden
- aud
- prevailing
- kenny's
- crew
- aww
- spying
- employer
- thier
- juanpedro
- craig
- leon's
- looked
- players
- costs
- providers
- sydney
- documentary
- hyphen
- represent
- strings
- pianos
- acoustical
- celeb
- pong
- linear
- turn_down
- reaches
- strength
- routine
- billboard
- piano
- ed
- sheeran
- diet
- vietnamese
- yams
- grandmother's
- rihana
- require
- stressed
- option
- affected
- acquire
- retrieve
- clarion
- congress
- turiellos
- mates
- solar
- dice
- jalapenos
- wished
- painting
- therapy
- warehouse
- mop
- neighbor
- flappy
- returns
- someones
- spring
- wonton
- moves
- jagger
- fishing
- hiphop
- dunkin
- donut
- atlantic
- daughters
- hula
- hoop
- lessons
- scrote's
- indie
- grief
- lebron
- naughty
- preprogrammed
- alt
- needy
- sharpen
- butcher
- knife
- pulled
- starbuck's
- backward
- terrorist
- invaders
- parent
- crescent
- brewhouse
- prado
- science
- playlists
- debbie's
- sleeping
- searched
- lindsey
- lohan
- competitions
- subtracting
- challenge
- beer
- gainers
- chili's
- frubs
- police
- softly
- practical
- assessment
- bonefish
- rotating
- placed
- lakers
- barenaked
- ladies
- lord
- rings
- mar
- sneakers
- artists
- sanantha
- shuffles
- shuffled
- bardonia
- county
- analyze
- pattern
- girls
- league
- fjords
- nothing
- brewing
- smurfs
- tommy's
- lovin
- cottage
- ming
- photosynthesis
- danny's
- repeated
- peaceful
- migrations
- zydeco
- inkheart
- seller
- occurence
- telegraph
- invited
- wifi
- levels
- willie
- nelson
- dolores
- alter
- retirement
- professional
- development
- sainsburys
- byron's
- floyd
- raingear
- notorious
- bone
- explanation
- database
- likely
- lucky
- irish
- sshow
- ramsey
- aired
- sprint
- preparing
- academy
- yeshudas
- angels
- dancing
- aretha
- franklin's
- layers
- glass
- kuch
- hai
- wakey
- knitting
- mujhe
- feb
- king's
- malinda
- parents
- mirchi
- gallon
- seen
- parks
- safest
- evacuation
- beautiful
- sofia
- francs
- consequences
- various
- dicaprio's
- networth
- phelps
- disk
- constructed
- concern
- effectively
- lawrence
- zac
- galifrankas
- wheat
- prediction
- schemes
- mega
- capricorns
- dinky
- lanegan's
- princess
- pregnant
- smallest
- americans
- retweet
- insta
- sonys
- bk
- alzacz
- kohls
- cleanliness
- pizzahut
- delay
- lpg
- satisfied
- choke
- suqcom
- repairs
- killing
- miller
- budgets
- iamironman
- gbaby
- gma
- loves
- kate's
- margaret
- ben's
- brady
- palmer
- homework
- tax
- regional
- archive
- fitness
- vault
- footloose
- child's
- damage
- petco
- canceled
- passing
- pikes
- peak
- avatar
- diverge
- maron
- fault
- sword
- eventual
- contest
- dangal
- mauritania
- abs
- wondering
- southampton
- resources
- soy
- lexmark's
- hilly
- lyon
- beirut
- tribute
- madrid
- ate
- sweat
- charlize
- theron
- atif
- aslam
- capture
- actual
- shane
- dawson
- zedd
- snooker
- loquaciousness
- sholay
- tofu
- nightmare
- avenged
- sevenfold
- matters
- prompt
- panic
- brilliant
- boston's
- mckinleyville
- astrology
- strait
- countdown
- cats
- fruits
- embassy
- pita
- gyros
- negotiations
- hairdresser
- courteous
- enthusiastic
- funk
- sense
- heathens
- cabinet
- irctc
- stored
- shutoff
- glasses
- ella
- fitzgerald
- rover's
- vet
- polar
- bears
- oceanside
- medicine
- anita
- barrow
- burrito
- oliver
- covering
- ground
- zucchini
- textile
- antebellum
- chimes
- covington
- species
- bees
- cranston
- kilometer
- behaved
- rudely
- jimi
- hendrix
- calms
- outwards
- califonia
- composed
- hint
- shipping
- frosting
- sport
- napoleon
- hill
- athens
- middletown
- shirts
- sample
- politician
- investigated
- rapper
- con
- cuisine
- wizard
- brick
- conroe
- iterate
- architect
- salon
- babaji
- passed
- maryland
- surya
- monopoly
- avenue
- considering
- celebration
- brewed
- galoshes
- tutorials
- workouts
- millenium
- toward
- neighbourhood
- bannon
- storming
- reoccurring
- longtime
- sweetheart
- memos
- starfish
- centaur
- philippines
- oar
- departs
- preferably
- latte
- sides
- pentagon
- fashioned
- rescheduled
- transportation
- twins
- duker
- deadline
- samurai
- obaba
- bp
- ambiance
- automatically
- object's
- boost
- morale
- jogging
- spell
- firefly
- mura
- masa
- checklist
- biographies
- sucked
- congested
- avinash
- commando
- jolie's
- instrumentals
- clarksville
- tablespoons
- surveys
- flour
- acela
- calone
- bucket
- fulls
- valid
- references
- critical
- perpetuate
- luncheon
- ohm's
- values
- plying
- expectations
- musician
- mindsweper
- throughout
- noontime
- included
- tour's
- voted
- walgreens
- chickens
- monday's
- crankshaft
- surfer
- lunchtime
- skramz
- compounds
- diabetes
- might
- reservation
- homosapien
- engadget
- boeing
- brisbane
- ear
- headphones
- minimum
- worry
- snowplows
- burying
- driveway
- adapt
- destroy
- impanema
- equipment
- turnt
- attractive
- conducted
- cinnamon
- freshener
- watsapp
- bean
- awfully
- entitled
- murderer
- ford
- forties
- scenery
- morocco
- sf
- blokus
- preacher
- taken
- stormy
- centers
- ethics
- popup
- mysterious
- puts
- stage
- considerations
- lourie
- artic
- scoop
- carion
- merced
- bypass
- passwords
- quantico
- grade
- examples
- cuisines
- hibernate
- bear
- published
- authors
- tempo
- keidis
- tidal
- cookoff
- zones
- probable
- summerfest
- dogs
- aren't
- necessarily
- carolina
- eleventh
- chilling
- sleeve
- invoking
- term
- herald
- maria's
- poltergeist
- imagine
- uv
- index
- johncena
- instruct
- oscillate
- liter
- nelly
- shawarma
- baster
- pali
- vilnius
- tabs
- debates
- singers
- activated
- ozzy
- osbourne
- danish
- happypeoplecom
- accounting
- backpack
- im
- puttanesca
- keeps
- worse
- wrigley
- braise
- loin
- carnatic
- bases
- nick
- swisher
- stolen
- clouds
- cleared
- bola's
- norman
- reedus
- screwdriver
- window
- volcanoes
- rowan
- atkinson
- minneapoliscity
- delicacies
- monitor
- overall
- gymnastics
- channels
- kxly
- botswana
- enjoyable
- spectre
- chane
- decentralized
- men's
- freeze
- postal
- becomes
- ccn
- berth
- michigan
- composition
- shahi
- panner
- dakar
- jakarta
- equalizer
- weird
- barely
- rodriguez
- oklahoma
- giraffes
- margarita
- difficult
- crabs
- firework
- probability
- tools
- emigration
- legislation
- pdf
- cheeseburgers
- applications
- adopters
- priest
- walks
- mechanic
- h
- showers
- signs
- contrast
- recollect
- gm's
- duck
- beavers
- tail
- lucking
- horkersd
- wo
- myrtle
- hr
- steam
- entirety
- anirudh
- colored
- tropical
- bedrooms
- yellowish
- elephants
- expenses
- contents
- warmer
- royksopp
- etc
- progressives
- peoples
- cultures
- unset
- iceland
- mp
- mangalore
- tanya
- quad
- particulars
- insert
- tvf
- formidable
- origins
- eden
- depressed
- mc
- donalds
- rub
- regrets
- judgments
- scope
- intellectual
- capacity
- ahmadabad
- stethoscope
- superstitions
- rl
- stine
- quinoa
- martial
- smooth
- damn
- speeding
- stephen
- halley
- barry
- jealous
- siri's
- java
- scenarios
- pc
- transfer
- tw
- agent
- nightime
- creamy
- mirch
- dil
- cannon
- cameras
- process
- merriam
- webster
- dubstep
- rangoon
- wines
- older
- navigate
- chandelier
- egs
- recognize
- subscriptions
- mileage
- studies
- microphone
- immigrant
- electronics
- careful
- paint
- fund
- success
- resolved
- bola
- eva's
- roller
- augusta
- midtown
- surprise
- children's
- dongle
- seashell
- bots
- fallen
- centimeters
- poisoning
- sci
- fi
- outcome
- reform
- sleepy
- moderate
- chrome
- ultraviolet
- george's
- geek
- courses
- rundown
- legend
- equipments
- usher
- manor
- advertisers
- clue
- depending
- strongest
- outstation
- fallout
- shoal
- lastfm
- relocate
- pollution
- awareness
- bryce
- jessie
- carol
- nsnbc
- vacuumed
- chives
- splits
- arbor
- receiving
- toast
- futures
- brokers
- routes
- fixed
- additional
- switches
- church's
- governor
- enacted
- grams
- guitarists
- android
- babe
- sonny
- sear
- eliminate
- remain
- uc
- polk
- pakistani
- bedside
- reshuffle
- frida
- devil's
- rusk
- actors
- pakistan
- happenings
- sit
- montauk
- beethoven
- legends
- sunshine
- mothers
- smoke
- feels
- rockies
- miamy
- operations
- addition
- subtraction
- incite
- annoying
- cristiano
- ronaldo
- spin
- cows
- jenny
- spread
- wallstreet
- selections
- nashik
- ipl
- oswald
- chambers
- horoscope
- mgk
- dog's
- residing
- cricketer
- dhoni
- byron
- fluctuations
- talks
- palermo
- shallowest
- bbcnews
- nsdl
- flights
- lineup
- stick
- ribs
- jeopardy
- timetables
- emi
- maya
- mackensie
- osteen
- jimmie's
- adjustments
- precocious
- fork
- husband's
- audi
- hibachi
- disputed
- crack
- visible
- boiling
- rogan
- karachi
- babysitter
- kidnapping
- hamburgers
- madonnas
- lessen
- ipo
- greenville
- carries
- creamed
- pickled
- herring
- tackle
- brush
- geyser
- savings
- torey
- hurt
- subscribe
- picks
- birthdate
- goals
- cairo
- projected
- patrick's
- capita
- honda
- intended
- hurriedly
- activates
- it'll
- wsj
- spy
- broods
- grommet
- steven's
- underground
- seahawks
- participants
- workday
- ammi
- nightlife
- donner
- summit
- ukraine's
- ended
- arrangements
- altucher's
- writer
- fortune
- brisket
- grant
- audiobooks
- twilight
- bass
- hunger
- roses
- barbecue
- tuna
- deadly
- killers
- finally
- trilogy
- grisham
- goblet
- roadblocks
- birthday's
- biscuits
- lawyers
- steve's
- kari
- labyrinth
- commonwealth
- sharma
- gulf
- petrol
- earthly
- ultimate
- ending
- allison
- canberra
- honolulu
- flash
- salman
- gresham
- hindustani
- stroganoff
- sock
- creates
- geo
- traits
- moral
- rein
- blood
- slayer
- pro
- bono
- succinct
- dalls
- somethings
- sharp
- izzo
- whiny
- bitch
- macaroni
- nights
- jumper
- blind
- cure
- cancer
- vibrant
- sloth
- transition
- recycling
- bbc's
- columbia
- kentucky
- hire
- opera
- prefer
- avoid
- sort
- comedy
- compassionate
- nc
- va
- riddles
- segment
- youth
- charity
- surrounding
- punjabi
- sharply
- lovett
- barber
- label
- hypocrisy
- subscriber
- captain
- disillusion
- hyderabad
- dashboard
- storm
- barrel
- panasonic
- clinton
- canasta
- mittens
- badra
- amit
- trivedi
- crystal
- lewis's
- everywhere
- rue
- evaporated
- mma
- offered
- tutoring
- peas
- dream
- cafes
- lauderdale
- deletion
- precise
- parliamentary
- remotely
- connection
- calendars
- stupidest
- shovel
- western
- cutting
- ll
- rapping
- spelling
- mama
- tatum's
- fulton
- universal
- garner
- chill
- icebo
- college's
- rehman
- soundcloud
- scorecards
- ketchup
- jimmy's
- crate
- lexmark
- preference
- females
- federal
- andreas
- sportsnet
- favourites
- janice
- bins
- pamela
- covered
- rhapsody
- italian's
- ke
- panera
- remainders
- tandoori
- sukhwinder
- sunidhi
- etymology
- googleplex
- slide
- wearing
- trivial
- pursuit
- cancels
- martina
- mcbride
- finances
- vocab
- zipcode
- compaq
- composer
- margarine
- jonathan
- entrepreneur
- extended
- combo
- memories
- tupac
- affects
- drunks
- ford's
- liked
- dealership
- olky
- realtor
- thighs
- ourselves
- economics
- medication
- gross
- domestic
- donaldson
- prostate
- wicker
- rooms
- instrumental
- savannah
- outing
- affleck
- quotes
- tire
- montana
- exhausted
- acoustic
- commercials
- convenience
- consciousness
- serge
- gainsbourg
- windows
- turks
- generate
- pedicures
- btaxes
- departures
- frasier
- amazon's
- bluetooth
- verus
- neat
- forecasted
- bing's
- dropped
- recurrent
- candidate
- aware
- blackeyed
- pees
- prince's
- perimeter
- rectangle
- aaron
- carter
- involve
- drugs
- lighten
- slicker
- rains
- cloud
- carrot
- popcorn
- carmike
- cinemas
- greater
- minestart
- frog
- lenon
- unique
- hanging
- hung
- sporty
- seldom
- jocko's
- kid's
- viewers
- cantonese
- usage
- specs
- bugatti
- veyron
- chief
- blockbuster
- krishnarajpuram
- interstate
- hammers
- obligatory
- wonder
- southeast
- marlon
- brando
- ferrel
- tal
- obidallah
- manoeuvres
- merita
- rotate
- changs
- pepsi
- shanghai
- branden
- wind
- landmarks
- dvr
- congestion
- valentines
- eastwind
- lomaine
- geneva
- officially
- hopkins
- takjistan
- dimmer
- karo
- apne
- aur
- karna
- chahta
- hu
- purchased
- otherplace
- giraffe
- ute
- requirement
- watts
- powerful
- bulb
- oclock
- nba
- hulu
- composing
- melissas
- millilitres
- spoons
- goulash
- thor
- harischand
- mg
- i95
- sb
- kilo
- diana
- llyod
- webber
- wool
- penultimate
- bang
- philosophers
- nietzche
- focault
- profession
- kilograms
- turkeys
- bibulous
- angeline
- atm
- narwhal
- kilamanjaro
- captia
- volkswagen
- onkyo
- av
- receiver
- ipad
- aniston's
- summarize
- ice
- jindel
- pump
- nikki
- minaj
- nationality
- snoodle
- yemen
- sudan
- unprompted
- organization
- megan
- fares
- engage
- functioning
- dinar
- conservative
- korean
- sahara
- kingdom
- antartica
- telugu
- tamil
- tsunami
- rajani
- khanth
- venture
- goalkeeper
- dushambe
- abrupt
- hbo
- sopranos
- parana
- cave
- anime
- posters
- johny
- depp
- invisible
- graphical
- joli
- pricing
- beech
- nuclear
- triad
- hilton
- borders
- lucille
- redhead
- geraldine
- ferraro
- bde
- lowered
- phrases
- nicole
- mcgoat's
- manipulate
- roip
- nasa
- google's
- davy
- crockett
- springsteen's
- richest
- costliest
- easily
- gm
- psso
- kroner
- maple
- trees
- christie
- brinkley
- libraries
- gmb
- key
- mongolia
- anastasia
- telekenesis
- promise
- stray
- cruise's
- starring
- odyssey
- polish
- zloty
- hook
- ups
- integral
- exponential
- berkshire
- hathaway
- tables
- pink's
- alligator
- porto
- tommy
- hilfiger
- print
- networks
- snaps
- celebrate
- bina
- yay
- smiley
- emoticon
- commented
- folgers
- hathway
- huge
- lfi
- tagged
- treated
- hersheys
- aircel
- nastyburger
- linkedin
- tracy
- waiter
- drain
- charge
- neptunal
- poorly
- waited
- inappropriate
- potus
- accounts
- vodafone
- complaining
- spoiled
- positive
- tumblr
- unpleasant
- overpricing
- cheating
- connected
- else's
- greetings
- thought
- waste
- excess
- micro
- lodge
- snapdeal
- sonic
- hole
- sole
- patel's
- insect
- packet
- elsewhere
- moan
- easyjet
- snotty
- expired
- xl
- sizes
- filing
- applebee's
- angela
- merkel
- swagging
- moto
- sluggish
- flavia
- mum
- jacob's
- existing
- cannot
- pleas
- mahmoud
- ebay
- smsayyad1985
- kishore17051985
- fedex
- truette
- petey's
- tessa
- gaurav
- karen
- mongomery
- llc
- joseph
- turnpike
- accumulated
- deadlines
- fees
- ppt
- emergency
- missing
- carl's
- attach
- physical
- drill
- marilyn
- jugal
- here's
- bug
- sarasigmon123
- lindafancy55
- markpolomm
- gary's
- mailing
- bill's
- erins
- beth's
- wont
- stacy
- cadwell
- tori
- aloud
- brenda
- thisome
- smurfette
- smithjoe
- hwacuk
- chong
- giselle
- bosses
- havent
- frieda's
- jjjindia
- exists
- batch
- samuelwaters
- joose
- hellen
- builders
- accepted
- victor
- taxi's
- terry
- macdonald
- yahoocom
- metion
- rodger
- christy's
- otp
- jayesh
- tried
- morgan's
- office's
- rob
- qerwerq
- secured
- gerry
- raj's
- junable
- shopyourway
- reference
- jhonny's
- marissa
- rosa
- bert
- ana
- goddammit
- pronounce
- serious
- recheck
- slowly
- failed
- fuck
- executed
- clearly
- errors
- showed
- races
- thursdays
- funky
- handmaid's
- beam
- scotty
- debit
- wiki
- editor's
- automobiles
- promo
- discount
- director
- act
- bejeweled
- aside
- snakes
- ladders
- marsala
- influx
- bayou
- reasonably
- tapas
- az
- ddlj
- meatball
- newscast
- bibber
- tmz
- devon
- applebees
- hihop
- doggie
- feelings
- radios
- litle
- tsos
- congratulate
- links
- treble
- flame
- eta
- encourage
- students
- choices
- lobby
- vf
- chore
- butterfly
- clips
- urban
- regular
- bi-weekly
- baltimore
- sport's
- breakups
- dale's
- brea
- douglasville
- fundraiser
- dolphines
- maradona
- pe
- becky
- appointed
- deputy
- utar
- pradesh
- anniston
- handy
- sainsbury's
- attenuate
- parcel
- jakes
- bristo
- stressful
- deposit
- mathematical
- superstar
- survivor
- destiny's
- westcombe
- facility
- oboe
- mcnamara
- abolish
- swim
- repair
- grub
- hub
- ill
- dec
- dreams
- wyatts
- obstacle
- poach
- dental
- rose
- davinci
- trevor
- noah
- ncaa
- entrapreneur
- sanam
- differs
- ave
- hopsin
- enya
- wbc
- accordingly
- remarks
- sufi
- beibers
- arrested
- sensor
- music's
- author
- antwerp
- cnn's
- foodnetworkcom
- customize
- preferred
- unable
- duct
- tape
- gooseto
- apig
- ringer
- secure
- passage
- tomatoes
- wan
- senelena
- americano
- makeup
- robotics
- teleconference
- robotic
- poughkeepsie
- steel
- day's
- soundtrack
- tobymac
- transit
- gloria
- furious
- nazi
- hunting
- effect
- marvin
- gaye
- pasadena
- ca
- constrain
- singles
- outer
- nowhereville
- comfortable
- erica
- grebe
- wooly
- trigonametry
- obsessed
- graphics
- undone
- tough
- treasury
- toledo
- munich
- obtain
- nutritionally
- balanced
- internal
- locks
- exit
- mocking
- lyft
- transaction
- tasty
- mixture
- according
- hands
- supports
- canceling
- congressman's
- lenin
- spagetti
- controversial
- statements
- walker
- humor
- nkotb
- jon
- snow's
- possibility
- wellington
- nz
- advantages
- disadvantages
- driver
- towels
- stretch
- gear
- joey
- crimson
- chose
- pineapple
- asparagus
- teaspoons
- bling
- medieval
- engines
- foods
- hurts
- cannibal
- tonic
- bitcoin
- collection
- hidden
- figures
- brasil
- politic
- superb
- dalida
- capuccino
- analysts
- thankama
- kodaikanal
- vote
- burritto
- chipolte
- abut
- sedaka
- chamber
- rfi
- knock
- cnncom
- remchi
- fl
- ortcars
- flip
- wire
- thriller
- fiasco
- breaks
- dam
- paradise
- presidency
- sigur
- ros
- socks
- van
- halen
- wayne
- spare
- lightness
- appropriately
- both
- musics
- coastal
- cry
- friend's
- wore
- veganism
- picnic
- regent
- visited
- therapist
- inauguration
- swatishs
- dorothy
- known
- supervision
- superbowl
- eric's
- bday
- kar
- abhi
- achche
- ache
- rahe
- honge
- mhz
- sponge
- bistros
- brownies
- tenderloin
- enchiladas
- gluten
- hotdog
- row
- bing
- notebook
- pulldown
- clearer
- medford
- drivers
- waverley
- canal
- connecting
- summers
- gibraltar
- monoprice
- mxblue
- mechanical
- turbulence
- carey
- blunder
- factorial
- depends
- commands
- stand
- draymond
- susumu
- hirasawa
- yosemite
- '200'
- baguette
- stonehenge
- douriff
- ivf
- ivr
- litt
- runs
- hesitant
- crock
- guetta
- malaysia
- whelers
- sadness
- william
- coral
- daft
- punk
- sandle
- santha
- ingerman
- calc
- shibaru
- alcohols
- nano
- gina
- desta
- mgmt
- bana
- talking
- garvin
- trilly
- nytimes
- chhana
- mereya
- favor
- strained
- cooler
- films
- einstein's
- aroma
- ska
- raphsody
- trebuchet
- forth
- relate
- qualifications
- kirk
- franklin
- arithmetic
- skyfall
- bathrooms
- raghu
- dixit
- reports
- availables
- haddock
- odd
- cape
- cod
- noisy
- dull
- hackernews
- porn
- pad
- fight
- fighter
- nzd
- melodious
- burton
- helena
- campaign
- mcclanahan
- mummy's
- motown
- rasgulla
- janta
- pvt
- ltd
- heartthrob
- justin's
- velociraptor
- hippo
- senatra
- giggle
- peru
- nirvana
- anirudh's
- retro
- mf
- doom
- summarise
- ariana
- grande
- predicted
- creed
- user
- desire
- kenny
- roger
- sia's
- thrills
- wapo
- stockholm
- okinawa
- occasionally
- shuffling
- veggie
- mukkala
- mukkabilla
- guardian
- anytime
- themes
- horror
- ennema
- eatha
- homestead
- forever
- mayor's
- stance
- council
- master
- louies
- keane's
- fears
- noe
- reggae
- largo
- swiftm
- afi's
- xinhua
- dedicated
- bottom
- franks
- yelawolf
- ucl
- flop
- grammys
- espn
- joni
- mitchell
- shot
- tequila
- sleepyhead
- aces
- redder
- edms
- lamp's
- loudest
- brolly
- thao
- nguyen
- interior
- dine
- dogwalking
- nytimescom
- overcast
- deactive
- foo
- disasters
- opacity
- dea
- guam
- drug
- abuse
- itzhak
- perlman
- drawing
- sweden
- bombing
- ireland
- poll
- hotha
- defrosting
- salt
- toggle
- spb
- weatherit
- either
- forecasts
- intellicast
- weathercom
- orevena
- recorder
- pizzahouse
- reorganize
- sticky
- umbrellas
- opened
- cleaned
- shakin
- bakey
- tips
- hypoallergenic
- sarcastic
- cheat
- ii
- developers
- edg
- yaad
- dilana
- kahin
- samantha's
- rita's
- adding
- bro's
- attendees
- maggie
- valet
- groomer
- timeframe
- pete
- faculty
- parade
- greens
- jack's
- walter
- gemma
- nail
- arora's
- namkeen
- tonights
- ggg
- tie
- iheartradio
- rov
- javan
- wfrn
- kicks
- osteen's
- wgrr
- lite
- prairie
- companion
- palhunik
- pudding
- tutorial
- welsh
- rarebit
- oatmeal
- pathia
- achieve
- veg
- pulav
- crockpot
- prepared
- keno
- pinball
- fishdom
- nfs
- harvest
- crops
- farmvile
- millionaires
- vodka
- depend
- pon
- stationary
- mad
- errands
- paav
- queried
- pepper
- rowling
- shadi
- viewed
- mlb
- heavyweight
- citadel
- scene
- circus
- trolls
- grab
- kung
- fu
- bowery
- railway
- coach
- fare
- metrolink
- navigation
- westwood
- layfayette
- inconvenience
- emotions
- arrahman
- cosmos
- multiplied
- abouts
- hitting
- eliot's
- el
- ribbons
- sperm
- whale
- eaten
- lbs
- pinhead
- timeliness
- defining
- thesaurus
- penalty
- approval
- poetry
- ambulance
- jello
- shots
- ferrell
- stassi
- schroedder's
- tacobell
- hierophant
- zealand
- stockton
- emissions
- blowing
- kennedy
- ziggurat
- gagas
- gretszky
- hemingway
- pages
- earn
- nobel
- actions
- sloths
- parton's
- madagascar
- acting
- tiangle
- trebuchets
- googs
- gandhiji
- amal
- brazil
- adviser
- rich
- acted
- rihanas
- stamp
- mugy
- msn
- busdriver
- fergie
- flick
- ribons
- nakumuka
- postmates
- complaintum
- glinder
- gta
- rcg
- outlet
- hadock
- mclanahan
- coal
- mumy's
- piza
- wheelers
- guarante
- debugging
- debuging
- proper
- sung
- bilando
- terrorism
- cover
- dimmed
- vanilli
- marauthr
- wooo
- michael's
- shutdown
- pittsburgh
- precipitation
- riff
- portland
- muggy
- giants
- banks
- steelz
- ensure
- ricky
- matin
- tyres
- plant
- chased
- advice
- gossiping
- society
- mitushree
- hairdresser's
- biology
- fsu
- reflect
- yashas
- vinay
- vally
- closed
- shoutcast
- pilkington
- soda
- powder
- sambar
- cookingforu
- thermonuclear
- battleship
- cereal
- wishlist
- wrist
- hipsterhood
- duncan
- trussel's
- simmons
- wide
- cisco
- crafts
- sporting
- presently
- sheffield
- septa
- lead
- fransisco
- washingdon
- evolution
- mariah
- kya
- tum
- mere
- karne
- karoge
- acts
- assembly
- idle
- brand
- meridian
- terranova
- guarantee
- marian
- fields
- farthest
- philippine
- cambodia
- situated
- foruget
- monopricechanical
- peenth
- moroco
- piz
- tre
- supplwn
- viki
- shivle
- loged
- applebe
- acess
- madagar
- anp
- socer
- subcribe
- pluged
- imigration
- audiowan
- debie's
- imediately
- f
- locar
- duark
- rebeca
- talle
- banas
- ragh
- acordingly
- wakely
- en
- bress
- acording
- stefanan
- puding
- vegie
- vius
- edie
- domizza
- eg
- cheeseiza
- ocurred
- brightnes
- alaba
- memory
- fransico
- sunderland
- boogie
- butt
- leviathan
- shinning
- premier
- cleanup
- wacky
- aman
- cherry
- bomb
- solstice
- silently
- closet
- nakumukka
- shed
- responses
- yankees
- investigation
- dooa
- pieces
- imogen
- heap
- stole
- dynamite
- cease
- operating
- rained
- uptown
- suggestion
- finlee's
- bedtime
- sockets
- sanfranscio
- abbas
- cn's
- vibrate
- cooling
- sheriffs
- hike
- ilayaraja
- speaking
- un
- storms
- roof
- tube
- jackpot
- classmates
- extremely
- somewhere
- drenched
- sentient
- budy
- heating
- apt
- parenting
- concerning
- seo
- searches
- sticking
- patterns
- numbered
- impression
- reunion
- presents
- mehta
- willing
- discuss
- evan
- parker
- violin
- lesson
- musicworkz
- registration
- opens
- evening's
- thursday's
- nineteenth's
- hayathis
- shower
- corresponding
- showcase
- famosa
- kamp
- neal
- brenan
- gx
- nonstop
- rm
- giver
- traveller
- knowledge
- crispy
- supper
- broil
- noodle
- stuffed
- maccoroni
- almond
- clash
- clans
- ping
- keeper
- enemy
- coc
- detergent
- corn
- dill
- pickles
- ranch
- dressing
- lentils
- translate
- toothpaste
- rearrange
- groups
- santana
- pritzker
- winners
- libertarian
- mc's
- vitaly
- nfl
- mythical
- oriented
- provisional
- experiences
- safely
- themselves
- mia
- reducing
- learly
- court
- vin
- diesel
- netbooks
- chinatown
- aberdeen
- queens
- luni
- purchasing
- timing
- bagmati
- narrow
- egypt
- represented
- revelation
- britain
- aamir
- priyanka
- middleton
- base
- original
- nhl
- goal
- scorers
- osteoperosis
- laws
- correlation
- motivation
- ncaaa
- tense
- touring
- framework
- adel
- diamond
- schwarzenegger's
- stomachs
- cow
- chairs
- steph
- subjegant
- pategonia
- michelle
- todlers
- stakes
- tinder
- matches
- fjord
- equator
- triumph
- hell
- moldova
- presley's
- wa
- rajinikanth
- basalt
- bali
- airplane
- hash
- lit
- <sos/eos>
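# The list above is the model's output vocabulary, terminated by the special
# <sos/eos> symbol (assumption: standard ESPnet2 token_list layout). The keys
# below configure the CTC branch and the overall model loss.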
two_pass: false
pre_postencoder_norm: false
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
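# model_conf below combines the CTC and attention objectives. Assuming standard
# ESPnet2 semantics (not spelled out in this dump), the training loss is
#   loss = ctc_weight * loss_ctc + (1 - ctc_weight) * loss_att
# so ctc_weight: 0.3 weights the CTC branch at 0.3 and the attention decoder at
# 0.7, while lsm_weight: 0.1 applies label smoothing of 0.1 to the decoder
# cross-entropy.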
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
extract_feats_in_collect_stats: false
pre_postencoder_norm: false
transcript_token_list:
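# Second vocabulary (assumption: tokens used for the transcript targets in this
# SLU-style setup), dumped in full below starting with the special symbols.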
- <blank>
- <unk>
- the
- to
- i
- me
- you
- is
- what
- please
- my
- a
- for
- 'on'
- in
- of
- email
- this
- it
- have
- from
- and
- play
- olly
- that
- new
- can
- do
- how
- tell
- about
- at
- any
- today
- not
- time
- are
- check
- list
- send
- with
- an
- one
- emails
- last
- will
- am
- again
- set
- next
- would
- was
- up
- like
- turn
- said
- calendar
- meeting
- get
- what's
- right
- all
- did
- be
- need
- want
- song
- tweet
- add
- event
- your
- news
- 'off'
- weather
- there
- lights
- more
- now
- alarm
- pm
- music
- show
- confirm
- train
- could
- think
- does
- make
- command
- just
- find
- when
- tomorrow
- much
- where
- week
- by
- give
- events
- know
- day
- start
- two
- latest
- response
- that's
- remind
- done
- but
- thank
- stock
- some
- you've
- answer
- five
- open
- current
- many
- remove
- radio
- good
- book
- 'no'
- facebook
- going
- it's
- volume
- reply
- work
- delete
- go
- complaint
- contact
- if
- service
- let
- thanks
- so
- hear
- once
- correct
- john
- playlist
- birthday
- got
- post
- ten
- order
- sorry
- has
- date
- hey
- coffee
- who
- rate
- three
- exchange
- further
- light
- twenty
- price
- mail
- reminder
- explain
- podcast
- ticket
- down
- really
- clear
- seven
- schedule
- alarms
- say
- morning
- change
- twitter
- cancel
- number
- dollar
- stop
- out
- appreciated
- hundred
- wrong
- don't
- information
- address
- contacts
- read
- york
- us
- which
- should
- 'yes'
- details
- songs
- between
- nine
- anything
- s1
- received
- playing
- shut
- dot
- mind
- com
- google
- most
- put
- job
- traffic
- four
- best
- six
- create
- recent
- yeah
- happening
- friday
- name
- very
- area
- mom
- or
- take
- appointment
- yeap
- room
- world
- home
- hour
- message
- eight
- clarify
- s2
- party
- episode
- here
- elaborate
- alexa
- appreciate
- customer
- i'd
- sent
- thing
- march
- look
- tonight
- place
- try
- after
- definition
- call
- well
- times
- rock
- phone
- speak
- today's
- whats
- food
- thirty
- see
- joke
- every
- pizza
- write
- lists
- game
- shopping
- weekend
- rephrase
- month
- matter
- s
- update
- station
- vacuum
- great
- detail
- long
- gmail
- old
- repeat
- city
- audiobook
- perfectly
- status
- inbox
- mute
- local
- near
- restaurant
- thousand
- tuesday
- year
- we
- media
- before
- around
- resume
- musch
- her
- house
- taxi
- hours
- didn't
- describe
- answers
- understand
- incorrect
- word
- listen
- first
- item
- d
- trump
- save
- days
- socket
- recipe
- nice
- u
- reminders
- social
- search
- as
- monday
- subject
- location
- movie
- saturday
- euro
- dinner
- them
- ask
- let's
- scheduled
- plug
- i'm
- gotten
- question
- minutes
- friend
- favorite
- meetings
- define
- instructions
- exactly
- cook
- understood
- sentence
- thursday
- grocery
- correcly
- their
- words
- temperature
- person
- amazon
- catch
- company
- mean
- something
- correctly
- living
- fantastic
- help
- following
- dollars
- rain
- speakers
- instruction
- helpful
- increase
- consumer
- evening
- family
- upcoming
- jazz
- saying
- way
- switch
- forecast
- task
- cleaner
- love
- late
- boss
- wednesday
- yesterday
- updates
- lower
- people
- cool
- wonderful
- twelve
- afternoon
- color
- wake
- oh
- lunch
- perfect
- back
- understanding
- useful
- amazing
- his
- dim
- movies
- chicago
- things
- takeaway
- fifty
- unread
- happy
- available
- noon
- wouldn't
- night
- had
- appointments
- idea
- michael
- doing
- over
- doesn't
- select
- hi
- shit
- may
- they
- delivery
- nearest
- buy
- apple
- car
- left
- confirmed
- report
- worth
- robot
- uber
- wemo
- sunday
- excellent
- outside
- blue
- looking
- messages
- top
- wear
- point
- too
- i've
- country
- prices
- bring
- store
- awesome
- unclear
- ok
- mark
- speaker
- app
- sound
- hot
- live
- jackson
- bad
- recently
- currently
- smith
- pull
- whatever
- india
- messed
- kitchen
- ninety
- percent
- him
- use
- office
- brightness
- care
- gave
- description
- tom
- regarding
- meaning
- meet
- siri
- bob
- joe
- hmm
- leave
- sarah
- smart
- come
- chicken
- seventeen
- walmart
- bill
- enough
- choose
- louder
- our
- trending
- born
- london
- zone
- account
- cnn
- audio
- president
- isn't
- compose
- coming
- second
- manner
- pick
- album
- uhh
- plus
- provide
- erase
- notification
- played
- channel
- donald
- pound
- instagram
- made
- bbc
- recommend
- happened
- united
- replay
- shop
- free
- dammit
- nope
- b
- nearby
- pop
- shops
- california
- highest
- notifications
- shuffle
- fm
- chinese
- currency
- uh
- restaurants
- jack
- april
- robert
- only
- been
- why
- states
- friends
- skip
- important
- he
- samsung
- later
- notify
- bedroom
- john's
- mails
- eleven
- red
- exact
- cold
- cup
- rates
- incorrectly
- fifth
- money
- boston
- spoke
- tomorrow's
- forward
- respond
- funny
- wait
- business
- market
- star
- headlines
- third
- favorites
- bother
- retry
- stocks
- high
- g
- favourite
- george
- umbrella
- directions
- wedding
- content
- m
- close
- spoken
- concert
- run
- alert
- searching
- mary
- into
- artist
- located
- mike
- anyone
- snow
- tickets
- then
- reset
- garden
- route
- hello
- tall
- likes
- talk
- forty
- share
- feed
- were
- indian
- washington
- difference
- remember
- convert
- receive
- tune
- level
- asking
- capital
- life
- dad
- yen
- street
- raining
- mistake
- correctly?
- quite
- pandora
- jane
- town
- yet
- player
- park
- san
- american
- far
- sports
- raise
- popular
- display
- these
- couldn't
- mountain
- dentist
- importance
- unimportant
- complain
- clean
- continue
- euros
- los
- ready
- yahoo
- can't
- classical
- politics
- newest
- lighting
- miami
- trip
- horrible
- info
- added
- prepare
- iphone
- machine
- mother
- miles
- via
- chris
- tv
- since
- bathroom
- state
- cheese
- request
- items
- oops
- ah
- closest
- warm
- microsoft
- settings
- value
- keep
- brighter
- note
- everything
- wife
- decrease
- okay
- using
- rap
- election
- sunny
- eat
- usa
- eighty
- fifteen
- until
- wanted
- wrongly
- dog
- obama
- years
- coat
- week's
- japan
- quiet
- paris
- angeles
- comcast
- target
- emailed
- airport
- interesting
- mcdonalds
- mr
- married
- green
- product
- past
- little
- other
- t
- listening
- cooking
- activate
- earth
- dance
- title
- florida
- rupee
- travel
- kids
- takeout
- pending
- america
- making
- its
- than
- doctor
- population
- bar
- plans
- power
- fourth
- silent
- ride
- milk
- how's
- seventy
- sure
- fine
- jennifer
- july
- sister
- brighten
- picture
- deliver
- singer
- clock
- inform
- brad
- burger
- never
- pesos
- object
- hero
- arrive
- classic
- olive
- games
- group
- watch
- line
- justin
- cost
- project
- called
- lets
- track
- still
- starbucks
- form
- repeating
- christmas
- breaking
- due
- cheapest
- forget
- posted
- james
- posts
- central
- lot
- stories
- whole
- small
- ever
- steak
- review
- requested
- wish
- david
- workout
- alex
- seems
- given
- gym
- largest
- la
- average
- compare
- china
- fifteenth
- having
- rupees
- band
- background
- meal
- online
- reserve
- file
- lamp
- laugh
- sun
- anniversary
- eastern
- busy
- mobile
- bit
- jokes
- places
- geographic
- else
- chess
- meant
- working
- p
- planned
- program
- seconds
- rated
- large
- issues
- road
- pay
- big
- holiday
- daily
- 'true'
- celebrity
- better
- hut
- being
- sixty
- away
- helped
- peter
- god
- cab
- someone
- internet
- page
- anna
- feel
- video
- steve
- opening
- lately
- sandy
- bank
- weeks
- id
- sam
- pitt
- river
- february
- i'll
- saved
- soup
- phrase
- distance
- economy
- hits
- sony
- eggs
- low
- water
- text
- topic
- co
- begin
- attend
- groceries
- adele
- reach
- within
- pause
- half
- yourself
- kind
- dark
- replied
- enter
- must
- asked
- beatles
- fun
- ingredients
- against
- invite
- soon
- colour
- different
- jacket
- updated
- seattle
- denver
- canada
- vegas
- mode
- pasta
- january
- doe
- listed
- refresh
- listened
- team
- longest
- spotify
- remainder
- telling
- mumbai
- you're
- orlando
- card
- rice
- during
- reduce
- locate
- future
- starting
- boil
- genre
- class
- slow
- famous
- named
- allen
- youtube
- works
- olly's
- dc
- brew
- through
- pounds
- football
- pacific
- white
- sings
- egg
- oil
- festival
- clothes
- moment
- die
- orange
- school
- kim
- las
- divided
- whether
- photo
- everyday
- ryan
- bills
- headline
- fix
- square
- npr
- jake
- brother
- todays
- terrible
- weekly
- type
- topics
- months
- chat
- yoga
- reading
- products
- extra
- cut
- adjust
- king
- personal
- client
- jan
- data
- doctor's
- computer
- rohit
- johns
- o'clock
- canadian
- mistakes
- rid
- names
- control
- sunscreen
- per
- lady
- head
- taylor
- always
- budget
- pink
- bought
- x
- side
- ahead
- articles
- english
- ny
- able
- reschedule
- fast
- hashtag
- tweets
- countries
- numbers
- running
- alabama
- blank
- madonna
- bright
- yellow
- west
- went
- options
- story
- october
- russia
- together
- n
- basketball
- joe's
- dominos
- tomorrows
- less
- situation
- colors
- mom's
- end
- payment
- drop
- downtown
- provider
- joes
- means
- helping
- mexican
- friday's
- cricket
- return
- needed
- death
- tech
- charlotte
- heavy
- draft
- sea
- paul
- r
- condition
- seventh
- dallas
- hip
- related
- article
- heard
- war
- elvis
- everest
- problem
- stating
- bieber
- system
- sales
- shoes
- hard
- become
- based
- kevin
- age
- she
- quality
- mile
- hair
- gas
- biggest
- inr
- climate
- hate
- twentieth
- sucks
- dean
- angelina
- turkey
- harry
- cake
- national
- record
- longer
- dave
- subjects
- brown
- supposed
- ocean
- church
- drive
- gandhi
- needs
- above
- theatre
- cookies
- abraham
- gone
- map
- television
- such
- face
- sale
- jim
- francisco
- sean
- june
- romantic
- compared
- curry
- ball
- jeff
- subway
- lincoln
- bed
- lagos
- turned
- south
- won
- trains
- girlfriend
- mahatma
- nsa
- hop
- amy
- commute
- solve
- came
- created
- dont
- history
- math
- telephone
- says
- laptop
- pawel
- offer
- fox
- single
- sixth
- midnight
- missed
- potter
- loud
- richard
- chuck
- looks
- practice
- body
- dan
- husband
- waiting
- birth
- stuff
- adam
- sender
- gaga
- truck
- france
- texas
- restart
- intel
- colours
- statue
- liberty
- intensity
- previous
- problems
- outlook
- visit
- wine
- peso
- continent
- utterance
- helps
- asssistance
- each
- north
- grand
- patrick
- match
- opinion
- plan
- trump's
- papa
- instead
- martin
- root
- purchase
- perry
- richards
- closing
- cloudy
- eddie
- senders
- move
- susan
- tesco
- size
- shows
- folder
- spaghetti
- doctors
- stores
- presidential
- dates
- theater
- menu
- agenda
- ann
- code
- animal
- frequency
- kansas
- roomba
- technology
- tasks
- without
- flight
- who's
- beach
- empty
- tired
- driving
- entire
- carry
- british
- dr
- asia
- rccg
- uncle
- vacation
- pepperoni
- programme
- standard
- reminding
- maximum
- starts
- tallest
- gonna
- fourteenth
- playback
- medium
- nike
- cruise
- changed
- diego
- arrange
- bowie
- learn
- mount
- particular
- costumer
- sundays
- fire
- calls
- silence
- podcasts
- spain
- dominoes
- website
- italy
- strongly
- agree
- agreed
- suggest
- mood
- fourteen
- result
- metallica
- thinking
- session
- profile
- england
- active
- ohio
- grid
- fall
- pot
- marriage
- queue
- told
- narendra
- jerry
- mt
- frank
- tenth
- wishes
- recording
- finished
- international
- calculate
- hit
- towers
- ninth
- site
- feeling
- macy's
- tag
- actually
- black
- birthdays
- hottest
- mary's
- expect
- snapchat
- jay
- smith's
- mountains
- building
- setting
- cleaning
- height
- initiate
- hall
- breakfast
- martha
- conference
- aol
- win
- steps
- fancy
- smartphone
- led
- zeppelin
- houses
- holy
- currencies
- club
- children
- atlanta
- einstein
- happen
- cell
- landline
- coworker
- objects
- negative
- modi
- soft
- haven't
- mention
- radius
- books
- daughter
- results
- earlier
- bruce
- butter
- stars
- remaining
- delivers
- device
- domino's
- unmute
- joy
- twelfth
- voice
- taking
- snowing
- sick
- boots
- cleveland
- journey
- destination
- worker
- poker
- lee
- katy
- australia
- incoming
- least
- lisa
- experience
- million
- recurring
- scenario
- sacramento
- geography
- library
- brief
- jolie
- monthly
- elton
- sirius
- alaska
- lyrics
- oven
- log
- random
- moscow
- barack
- disney
- alive
- measurements
- maker
- poor
- error
- stone
- versus
- hotmail
- interpret
- sarah's
- memorial
- goes
- stay
- delhi
- health
- special
- speed
- thirteen
- test
- edinburgh
- credit
- facts
- cat
- neighborhood
- sometime
- empire
- entry
- financial
- comment
- link
- hockey
- circuit
- holidays
- singh
- jodhpur
- rockville
- ones
- features
- bread
- eye
- mall
- directv
- contain
- seacrest
- chance
- under
- table
- few
- hotel
- rude
- services
- yesterday's
- certain
- fb
- abc
- netflix
- linda
- notes
- length
- reminded
- shoe
- wild
- employees
- beef
- sushi
- fastest
- thirteenth
- recommendations
- fish
- tennis
- main
- jersey
- jones
- break
- concerts
- gomez
- angry
- uk
- replies
- emily
- kickball
- released
- upload
- effects
- quickest
- italian
- caroline
- emma
- real
- human
- minute
- took
- activity
- jeff's
- staff
- handler
- touch
- hold
- joanne
- range
- moon
- submit
- ends
- tomato
- lost
- prime
- twelveth
- phones
- amd
- hectic
- bobburgers
- screwed
- porch
- reviews
- vegan
- rihanna
- houston
- ham
- mondays
- general
- engaged
- walk
- melody
- electronic
- held
- selected
- equal
- getting
- tata
- wall
- clothing
- round
- leaving
- nasdaq
- total
- pressure
- expensive
- border
- exhibition
- trash
- november
- handle
- halloween
- attachment
- kardashian
- shoot
- rewind
- rating
- toronto
- department
- procedure
- member
- ray
- chelsea
- rohan
- arrow
- checked
- modify
- wasn't
- chances
- protest
- lottery
- prince
- include
- jo
- net
- pie
- sleep
- enjoy
- nineties
- taco
- banana
- source
- quieter
- bored
- desert
- guys
- gary
- activities
- already
- contract
- st
- minister
- disable
- woman
- europe
- arijit
- audible
- presentation
- cad
- records
- trips
- booking
- tacos
- sally
- non
- centre
- direct
- advance
- selena
- policy
- orders
- stefan
- arrival
- divide
- chocolate
- dish
- teeth
- hdfc
- silvia
- stove
- coast
- defined
- digest
- snafu
- manager
- pinterest
- tim
- conversation
- bulldog
- titanic
- brunch
- heat
- canyon
- dial
- earliest
- region
- stopped
- foreign
- folk
- watching
- brexit
- albert
- joejoe
- early
- cities
- manchester
- december
- biloxi
- often
- questions
- garage
- tunes
- possible
- ms
- ar
- kiss
- shares
- bangalore
- heading
- derek's
- desk
- cheers
- tomasz
- terms
- companyname
- sara
- asap
- super
- meryl
- streep
- rent
- dress
- cinema
- usually
- trend
- conversion
- friendly
- ties
- ordered
- electricity
- marked
- migration
- choice
- journal
- norris
- aniston
- mailbox
- minus
- fried
- miley
- cyrus
- newly
- theory
- rest
- swift
- windy
- dan's
- mass
- comes
- selfie
- wings
- julie
- masti
- celine
- plays
- pack
- including
- responded
- jason's
- ale
- apples
- dolly
- oranges
- lg
- washer
- substitute
- global
- feedback
- grandma
- ben
- drainage
- invoice
- sunset
- takeaways
- man
- art
- universe
- suitable
- antonio
- full
- delivered
- laundry
- wrote
- min
- register
- snap
- nixon
- bird
- spend
- rome
- jesse
- calories
- cappuccino
- quickly
- buying
- britney
- spears
- spacey
- jobs
- arriving
- jean
- potholes
- janet
- pictures
- ashwin
- morgan
- freeman
- baby
- microwave
- yellowstone
- francis
- dubai
- invitation
- hope
- melbourne
- rocky
- kroger
- rivers
- charles
- jim's
- rectify
- statement
- carpet
- baked
- jessica
- meatballs
- mushrooms
- amount
- switzerland
- relating
- zero
- front
- phonebook
- hows
- cheesecake
- carryout
- magic
- ola
- replace
- recorded
- access
- land
- where's
- elephant
- removed
- liz
- load
- metal
- package
- diner
- goog
- bob's
- k
- year's
- mars
- guy
- assistant
- rahman
- eagle
- part
- burn
- aran
- stevens
- daughter's
- eighteen
- chemistry
- action
- selling
- thats
- koc
- lines
- sugar
- major
- chair
- easter
- departing
- africa
- nigeria
- requests
- conditions
- you'll
- manhattan
- roll
- cracow
- candy
- crush
- bell
- massive
- gold
- happens
- usual
- andrew
- equals
- dead
- plane
- graduation
- warned
- shaun
- triangle
- wyatt's
- pass
- function
- max
- space
- programmes
- awful
- parton
- exciting
- battery
- hwu
- recipes
- dirham
- rushmore
- johndoe
- button
- express
- pontificate
- easiest
- magda
- selection
- reservations
- guess
- copy
- classes
- supplies
- schedules
- winning
- berkeley
- notice
- headed
- outgoing
- mi
- rainy
- wikipedia
- entertainment
- dow
- everyone
- aunt
- furniture
- oceans
- softer
- heart
- newmail
- while
- baseball
- easy
- stations
- philadelphia
- alice
- swat
- yearly
- poem
- soccer
- president's
- milan
- paper
- kardashian's
- loop
- shown
- sandals
- yo
- scan
- nevada
- apahelp
- coldplay
- french
- bay
- higher
- rumplestiltskin
- airlines
- fresh
- standing
- cream
- hamburger
- broadway
- oscars
- tokyo
- cable
- shipment
- formula
- teacher
- sweet
- golden
- newsfeed
- confirmation
- shirt
- austin
- own
- canon
- wanna
- gods
- spanish
- count
- seat
- ideas
- study
- tara
- mutual
- jennifer's
- because
- edit
- denmark
- direction
- timer
- growth
- luther
- marketing
- cd
- mine
- public
- peter's
- bolshoi
- flat
- crazy
- others
- dry
- pub
- theatres
- bro
- fashion
- teams
- cycle
- pickup
- dion
- teach
- series
- checkout
- male
- noise
- solitaire
- pf
- cassie
- travelling
- davis
- naty
- income
- disco
- dropping
- donna
- follow
- shelly
- accidents
- plot
- irene
- download
- circle
- law
- tea
- organize
- principal
- weekends
- camera
- solution
- bombay
- wuthering
- heights
- charged
- colorado
- kong
- keys
- race
- mona
- entries
- j
- nyc
- potatoes
- gospel
- raju
- trivia
- bike
- dating
- oregon
- event's
- prefers
- rush
- percentages
- peking
- cooker
- husbands
- won't
- tower
- heaven
- hugh
- june's
- fake
- figure
- purple
- takes
- l
- howard
- stern
- nineteen
- percentage
- motorola
- doe's
- outstanding
- tesla
- laura
- dale
- warning
- eighteenth
- golf
- island
- career
- bieber's
- vacuuming
- pizzas
- refund
- weekday
- s's
- derek
- thanksgiving
- delayed
- query
- buffet
- rachel
- pants
- wash
- survey
- photos
- except
- topography
- door
- jen
- queen
- depart
- cheap
- theaters
- web
- jesse's
- multiply
- workhouse
- press
- click
- loss
- recipient
- verizon
- volcano
- rolls
- royce
- pixel
- affirmative
- completing
- thai
- walking
- bananas
- hollywood
- equation
- dirty
- scores
- katrina
- exam
- creating
- letter
- sing
- construction
- broadcast
- tom's
- rupies
- management
- permanently
- converting
- ist
- iron
- religion
- kings
- tucson
- standup
- tic
- tac
- toe
- headset
- sex
- diapers
- purpose
- seventeenth
- eighth
- dylan
- temple
- refer
- gift
- fact
- drink
- inches
- air
- carpets
- newcastle
- clients
- private
- tasting
- sams
- nj
- chili
- cultural
- swimming
- they're
- iowa
- jordan
- period
- accept
- cincinnati
- college
- rainbow
- myself
- deep
- deepest
- warming
- sky
- vp
- seeing
- indianapolis
- kmart
- nikesupport
- image
- suck
- broiler
- timeline
- dell
- parisa
- brandon
- example
- y
- filter
- sad
- shine
- sixteen
- christian
- pic
- pdr
- fry
- another
- network
- omelette
- kilometers
- municipality
- giving
- leo
- cups
- earthquake
- susan's
- application
- cross
- across
- carl
- pawel's
- sauce
- relativity
- rail
- sisters
- letting
- shorts
- vs
- rajesh
- swift's
- starving
- discussing
- block
- written
- n9ne
- women
- celebrities
- bake
- cookie
- continents
- workers
- leonardo
- mel
- gibson
- shall
- beauty
- sum
- fair
- deli
- middle
- same
- nile
- sell
- role
- boat
- sandwich
- parts
- hearing
- knows
- sand
- manoj
- delivering
- rahul
- neil
- australian
- kindly
- properly
- assist
- esurance
- emilia
- breach
- loudly
- harvard
- marc
- nintendo
- scrabble
- farm
- lie
- patio
- greg
- screen
- degrees
- yesterdays
- carrots
- receipt
- lasagna
- clooney
- there's
- degree
- preferences
- hallway
- latin
- nicest
- lauren
- worst
- also
- checkers
- input
- boyfriend
- masala
- tournament
- monet's
- burmuda
- section
- eric
- japanese
- supervisor
- junk
- performance
- effective
- urgent
- oldest
- tone
- sweater
- goa
- bag
- lowest
- aus
- peace
- julia
- summer
- fan
- hurricane
- colder
- steven
- sachin
- tendulkar
- watson
- exorbitant
- bags
- macs
- yulia
- matthew
- pole
- toby
- pennsylvania
- carmen
- tiffany
- complete
- electric
- wallet
- albums
- maths
- distribution
- eminem
- familiar
- regard
- upwards
- ron
- couple
- acme
- angel
- zoo
- nineteenth
- shazam
- inflation
- offers
- devotional
- jackie
- tony
- artificial
- intelligence
- grill
- father
- predictions
- repeats
- manila
- cooked
- reason
- learning
- nowadays
- cheer
- jingle
- bells
- anxiety
- hoizer
- girl
- pondichery
- position
- teachers
- dictionary
- nap
- cafe
- m's
- meting
- crime
- eve
- horn
- bristol
- pubs
- companies
- johnson
- resolve
- waterfall
- female
- biriyani
- drama
- nothappy
- haircut
- remote
- colleagues
- bones
- saturdays
- cambridge
- jam
- maine
- category
- invented
- chang's
- boy
- planning
- chen
- assignment
- publish
- hunt
- alerts
- dad's
- deal
- leading
- trail
- follows
- young
- jay's
- summary
- ko
- beyonce
- vergara
- mexico
- whishes
- arrived
- placid
- specific
- depot
- tikka
- expire
- markets
- problematic
- highly
- blues
- thirtieth
- brooklyn
- tatum
- argentinian
- redso
- des
- moines
- women's
- richard's
- cellphone
- division
- hong
- political
- charley's
- steakhouse
- accident
- normal
- wakeup
- satellite
- freezing
- forex
- jimmy
- chores
- snooze
- design
- museum
- guide
- speech
- ran
- shift
- inferior
- mashed
- jcpenney
- environment
- raw
- disturbed
- sia
- chips
- anybody
- present
- reynolds
- limbaugh
- weekdays
- islands
- viral
- asian
- streets
- inception
- meatloaf
- alternative
- compliant
- sensex
- phil
- est
- hand
- switched
- recap
- ferrari
- nandy
- promotion
- kate
- brothers
- ma
- followers
- closer
- deleted
- gloves
- bands
- platter
- boland
- corner
- strong
- chipotle
- eu
- amtrak
- son
- charges
- version
- rajdhani
- chart
- manage
- musical
- hat
- den
- tonight's
- syria
- stronger
- homelessness
- nails
- support
- ally
- sentences
- penn
- ago
- turning
- center
- hungry
- actress
- keywords
- usain
- bolt
- ongoing
- cancelled
- idol
- julia's
- wells
- fargo
- ri
- sarahs
- computers
- devices
- toms
- regards
- quote
- production
- brother's
- inch
- shell
- marathon
- directory
- dictate
- huey
- lewis
- elections
- alone
- marry
- apart
- danielle
- jane's
- mankind
- singularity
- nye
- feynman
- whom
- inventory
- makes
- dept
- apple's
- education
- bugs
- settle
- when's
- geographical
- jason
- exchanges
- mcdonald's
- tgi
- ship
- hershey
- facing
- faulty
- zita
- jeremy
- irons
- wallmart
- sphere
- hp
- gottten
- pardon
- engagement
- showing
- format
- absolute
- interest
- messenger
- gate
- enable
- columbus
- hips
- tour
- sterling
- thumbs
- priced
- tablet
- amc
- bible
- safeway
- organism
- undertake
- freedom
- charger
- documents
- jars
- clay
- members
- o
- vegetables
- delicious
- beaumont
- tx
- finance
- exhibitions
- trumps
- month's
- v
- applebee
- dakota
- bus
- brighton
- pa
- darken
- promoted
- liverpool
- utah
- suggestions
- micheal
- complaints
- pencil
- keith
- fridays
- temperatures
- hardware
- exercise
- jpearsonjessica
- release
- hoover
- goshen
- chester
- wood
- woodchuck
- healthcare
- borges
- calculator
- dune
- reality
- jobe
- gossip
- piece
- convenient
- titled
- pork
- belongs
- hongbin
- wreck
- tool
- started
- gather
- bruno
- costa
- patel
- daniel
- corporate
- controversy
- wendy's
- texans
- biography
- flowers
- investing
- arrives
- finish
- spot
- crop
- culture
- enjoying
- fetch
- kill
- auto
- washing
- buffalo
- he's
- titles
- ross
- whose
- types
- pleasant
- erin
- madison
- tuesday's
- lif
- khan
- affordable
- season
- policies
- c
- expected
- hypothesis
- seth
- kicked
- unhappy
- gallery
- xorg
- used
- monali
- thakur
- noodles
- cher
- sally's
- tracks
- mid
- launch
- glasgow
- bridge
- releases
- pitt's
- server
- clarity
- yens
- motivational
- scratch
- blanket
- aib
- reads
- singing
- monas
- tuesdays
- winter
- rocket
- lands
- chan
- economic
- sister's
- aa
- film
- pb
- indiana
- departure
- pipeline
- stitch
- sleeved
- hail
- logan
- style
- quantum
- physics
- labeled
- delia
- began
- rrcg
- shape
- awards
- improve
- pertaining
- trance
- lives
- weight
- met
- brian
- sinatra
- sunglasses
- attending
- falls
- requesting
- sunday's
- overhead
- greg's
- rom
- historic
- georgia
- guest
- jaipur
- iroomba
- alfredo
- pride
- prejudice
- fill
- interview
- daddy
- wangs
- manchow
- university
- locally
- lowes
- tiring
- east
- medical
- metro
- bach
- schubert
- rooster
- czk
- channing
- pad's
- identify
- yelp
- scandal
- affect
- suffering
- enabled
- arby's
- saw
- mango
- itunes
- highlights
- brings
- sixteenth
- tourist
- wendys
- presley
- sold
- intern
- affairs
- fries
- buttermilk
- panda
- wants
- floor
- clint
- eastwood
- moe's
- planets
- equivalent
- morrocco
- gravity
- uploaded
- someplace
- availability
- issue
- fly
- jpy
- natural
- delta
- disappointed
- files
- q
- cindy
- shortest
- simple
- ring
- lotion
- maroon
- fort
- died
- bonus
- repetitive
- icecream
- statistics
- rebel
- lawn
- leith
- measure
- daytime
- september
- pilots
- pda's
- shade
- sil
- cap
- punjab
- gwalior
- ashley
- juice
- nagar
- ellen
- programs
- fairs
- invest
- suits
- ingredient
- launches
- leaves
- bjork
- crater
- elevation
- stewart
- hotels
- spices
- bubbles
- grass
- broccoli
- capricious
- philosophy
- anthony's
- apply
- pings
- gps
- thomas
- koontz
- acdc
- beijing
- ratings
- union
- prayer
- todo
- angles
- scissors
- stashable
- cinch
- bacon
- passive
- que
- occurred
- lakeland
- tulsa
- advise
- singapore
- risotto
- invested
- model
- helmsworth
- bench
- julian
- buddy
- rogers
- brains
- chap
- badminton
- dick
- lopez
- apartment
- points
- germany
- unknown
- thugs
- healthy
- rash
- casey
- oriam
- ps
- plants
- mailed
- ikoyi
- grassmarket
- marleen's
- locations
- bush
- mac
- reaching
- allan
- till
- cheering
- guitar
- oxford
- densely
- populated
- son's
- hubby
- comparison
- putin
- barcelona
- gss
- energy
- pan
- nyack
- worked
- unavailable
- bryan
- adams
- miss
- checkbook
- jared's
- enrique
- iglesias
- forms
- jeans
- voices
- alan
- tudek
- animals
- olx
- mts
- freed
- jenn's
- coordinates
- humid
- demographic
- otherwise
- tiffany's
- outdoor
- sheila
- lincon
- dust
- serve
- conduct
- estimated
- gaana
- funds
- downloaded
- indignation
- meijer
- necessary
- grubhub
- pancakes
- mario
- bars
- birmingham
- sites
- donuts
- chopra
- textual
- rapids
- cant
- prefix
- sounds
- provides
- amy's
- benton
- leeds
- dsw
- returning
- defective
- digital
- bhaji
- carlos
- linux
- upgrade
- shark
- attacks
- screening
- exposure
- souffle
- tracking
- od
- progress
- paused
- gilmore
- hour's
- imdb
- orleans
- european
- gdp
- surfers
- theme
- ash
- ikea
- klm
- marilia
- cars
- robin
- williams
- surfin
- ottawa
- trade
- contains
- field
- someone's
- prague
- brno
- rene
- interests
- radiolab
- harris
- strive
- accommodating
- fell
- relationship
- pharmacy
- memo
- nancy
- paid
- expressing
- disapproval
- yard
- royale
- hide
- amber
- cheeseburger
- coca
- cola
- al
- matrimony
- scott
- potato
- funniest
- polling
- mother's
- chase
- xmtune
- matt
- murphy
- detroit
- taiwan
- organic
- secrets
- domino
- ac
- assistants
- z
- fred
- owner
- required
- saga
- hanks
- trading
- erosser
- rosser
- vikki
- dhaka
- notepad
- oldies
- alison
- recur
- w
- mentioning
- languages
- lavender
- toned
- videos
- stein
- chennai
- resuming
- moms
- foke
- beep
- discussion
- woodland
- lowry
- meetups
- powerball
- toyota
- focus
- concentrate
- nbc
- roosendaal
- deactivate
- shrimp
- parmigiana
- bumper
- spouses
- lucknow
- paying
- hurry
- served
- rhythm
- enquiry
- hartford
- plaza
- hyundai
- wishing
- websites
- briefing
- complex
- calculations
- jarvis
- highway
- fired
- dissatisfied
- sandra
- bullock
- ratio
- haskell
- sharon
- horse
- mum's
- dillinger
- sunblock
- sub
- tab
- crude
- software
- stadium
- step
- short
- reddit
- appoints
- agra
- sheet
- keyboard
- kfi
- district
- connery
- carnival
- wok
- shutting
- phoenix
- cloth
- rehan
- lego
- alphabetical
- mexco
- charles's
- foodpoisoning
- ultra
- madonna's
- harley
- davidson
- daylight
- afi
- infy
- launched
- inboxes
- secretary
- increased
- resolving
- fuel
- injector
- multiple
- interval
- mike's
- espresso
- sasha
- susie
- salesperson
- country's
- cylinder
- specifications
- ivory
- pst
- zoella's
- jackman
- reacting
- potential
- frying
- boise
- wendy
- divisible
- automated
- katherine
- pre
- gaming
- containing
- decade
- industry
- foot
- chemical
- cause
- taste
- bra
- julianne
- hough
- addresses
- vonstaragrabber
- lion
- restroom
- kohl's
- mentioned
- hz
- royal
- bloodline
- relationships
- billings
- levin
- quarter
- lori's
- lori
- exclamation
- definitions
- birds
- raj
- priya
- allows
- worlds
- kelly
- clarkson
- garam
- scarlet
- found
- cub
- dmv
- excessively
- lake
- dried
- reporting
- smile
- changes
- charmin
- eternal
- smoked
- meat
- beanos
- processing
- chip
- logic
- insightbb
- highland
- terrace
- child
- peck
- midwest
- cardinal
- anthony
- barrack
- jancy
- thompson
- cassy
- gulls
- alternate
- sin
- dragons
- msnbc
- residential
- leader
- siblings
- pedro
- serendipitous
- bestbuy
- targets
- wawa
- mentions
- engagements
- hawaii
- jr
- applied
- halifax
- ahmedabad
- monty
- python
- stronomy
- blahblah
- blah
- arrivals
- subtract
- payoneer
- formal
- connors
- indranagar
- transform
- marcia
- perpetual
- arranging
- cvs
- callum
- steffi
- attention
- kanye
- mommy
- chucky
- forest
- polarized
- proposal
- conrad
- coldest
- hue
- dictator
- clancy
- geranium
- delays
- build
- lense
- rai
- transistor
- dildo
- warren
- exercises
- forman
- kinley
- bottle
- retail
- yan
- regal
- unprofessional
- annual
- payday
- tricep
- arts
- ripped
- vietnam
- trends
- chaise
- preparation
- nestle
- paula
- deen's
- bmw
- microsoft's
- bookstore
- below
- moving
- pretty
- lock
- administrator
- edition
- airways
- marvel
- garner's
- rubix
- cube
- kfc
- milwaukee
- pager
- alexander
- gilchrist
- goods
- performing
- unopened
- security
- chain
- probiotic
- colleague
- knowing
- novel
- fiesta
- comcasts
- acer
- farmers
- fraud
- weighing
- india's
- gotse
- grapefruit
- similar
- tmobile
- nifty
- sessions
- recital
- greatest
- openings
- zip
- demento
- fatigued
- disease
- prevention
- overcharged
- unquote
- cotton
- tweeter
- railways
- flipkart
- fist
- renee
- nutritional
- starred
- calculated
- mattress
- hillstead
- paul's
- jill's
- disregard
- pesto
- stinks
- nobody
- behind
- kid
- nature
- ounces
- ted
- boiled
- dancom
- wars
- fmod
- span
- along
- malls
- joining
- frequently
- realdonaldtrump
- bobby
- mcgee
- pwd
- obamacare
- clicked
- falling
- pampers
- virgin
- hayden
- pat
- amie
- infosys
- technologies
- roads
- aerosmith
- airtel
- dairy
- sends
- dues
- tobytoday
- ileana
- d'cruz
- rended
- taj
- ashok
- typhoon
- rama
- final
- missouri
- virginia
- announce
- haughty
- salmon
- joking
- goodnight
- rebecca
- believe
- vowels
- ban
- haze
- insight
- cable's
- fellow
- tweeters
- canoe
- warriors
- assassinated
- acceleration
- detailed
- wife's
- robert's
- angus
- interested
- jen's
- sjobs
- cdn
- ruth
- simran
- aapa
- kadai
- armor
- sms
- indefatigable
- indicate
- fra
- floors
- modcloth
- honor
- weigh
- priority
- hiking
- smoky
- judawa
- expense
- deals
- plethora
- sam's
- august
- elain
- bbq
- leap
- congressional
- representatives
- voting
- reproductive
- ge
- bbb
- contacted
- assigned
- jill
- drafts
- scoring
- touches
- relevance
- goggins
- medvesek
- philippiness
- booked
- board
- locality
- beth
- katey
- fans
- approximately
- charitable
- rae
- darker
- anymore
- printing
- significance
- fondle
- mate
- larry's
- larrylarry
- faripir
- gurpur
- seasons
- softball
- refreshments
- jamie
- carrie
- underwood
- abdul
- kalam
- subterranean
- colombo
- sri
- lanka
- quit
- dollar's
- award
- among
- spouse
- forgot
- ass
- millionaire
- indians
- americas
- julie's
- transcribe
- garbage
- geographics
- tree
- criticize
- tanzania
- heather's
- answering
- spam
- phishing
- reseda
- axel
- kailey
- prettiest
- century
- mattel
- toys
- grateful
- fixing
- maidan
- sophia
- betty
- reasons
- russian
- applicable
- loving
- claire
- crashed
- batteries
- philips
- person's
- compile
- ali
- matthews
- apologize
- comcastcom
- luke
- jean's
- carefully
- beg
- trying
- flooringco
- seams
- baking
- skiing
- calming
- continuously
- tale
- roraima
- innova
- bowling
- beginning
- identifier
- diverse
- santa
- continuous
- hangman
- vegetarian
- roast
- rewards
- allow
- immediately
- shelley
- hennessey
- waking
- dicaprio
- ways
- immigration
- raised
- lose
- digger
- cosmetic
- perth
- feet
- chick
- tornadoes
- upstairs
- badly
- timings
- lobster
- runner
- forum
- thunderstorms
- powered
- plugged
- rod
- mgccc
- bleed
- ga
- pune
- mixed
- dishes
- radisson
- cheetah
- what'sapp
- cm
- father's
- skill
- graham
- eggless
- collect
- favorited
- flag
- ssmith
- virtual
- bryant
- spots
- scapingyards
- washed
- springfield
- draw
- insurance
- quantity
- brightener
- cuba
- stream
- raincoat
- maiden
- soundtracks
- deliveroo
- humidity
- crowded
- built
- mesa
- rosenstock
- workpdf
- occurring
- environmental
- dbell
- converse
- radia
- logged
- scabble
- loads
- jacob
- hasbro
- aldi
- piramid
- completely
- method
- hems
- loose
- connect
- snapchats
- arizona
- festivals
- hospital
- peppers
- bowl
- korn
- lupe
- eurostar
- umf
- unchecked
- berlin
- lane
- synonyms
- hampshire
- shakira
- brads
- keanu
- reeves
- johns's
- increasing
- burgers
- stan
- falklands
- valley
- maria
- hangin
- glow
- we're
- newsource
- clark
- carrey
- jams
- crashing
- outback
- sugars
- defines
- joel
- venue
- huffington
- images
- elizabeth
- case
- agnes
- randomly
- mecky
- incredible
- even
- decreased
- vacations
- honey
- akon
- barbara
- handsome
- forensic
- spielberg
- korea
- coding
- achievements
- albert's
- clerk
- hopes
- zimbabwe
- buble
- research
- excel
- gun
- rogen
- resin
- tooth
- filling
- mody
- marinara
- vicki's
- mardi
- gras
- monika
- relatives
- chillin
- lol
- levis
- tricounty
- messy
- disgusted
- emoteck
- foroogh
- quick
- decline
- emailstudy
- atdfd
- giant
- trey
- kalka
- mcdo
- timestamp
- operate
- watched
- infinity
- tactics
- upbeat
- synonym
- racing
- towards
- fog
- muted
- coke
- eighties
- tvs
- theresa
- brent
- kamycka
- dejvicka
- tap
- peanut
- circumference
- saskatoon
- sync
- sofa
- mcdonald
- silenced
- catalogue
- algorithm
- sanctimonious
- talked
- realize
- reveca
- paok
- wipe
- bisque
- br
- rather
- silly
- stat
- tar
- vitamins
- gain
- xm
- fongs
- anywhere
- zanes
- se
- chronicles
- weber
- commence
- causes
- sangli
- german
- hedges
- truthdig
- coffees
- commuter
- plain
- mimo's
- oscar
- restrictions
- treasure
- louis
- stevenson
- fifa
- beast
- pav
- prambors
- hannah
- ringcast
- vegetable
- episodes
- overnight
- apps
- nathan
- dismiss
- karl
- hourly
- eyes
- breeds
- inside
- tribune
- join
- crabmeat
- shakira's
- yankee
- greenwich
- gala
- jump
- recall
- johnny
- cash
- pod
- cast
- rare
- suppose
- enjoyment
- emo
- nayagara
- passion
- pit
- marckel
- bohemian
- emma's
- arijit's
- pet
- prize
- receptionist's
- beat
- freds
- probles
- patagonia
- quart
- '?'
- zach
- duration
- jlo
- alphabetic
- phohouse
- badpho
- daybreak
- biryani
- battle
- divergent
- moby
- jungle
- jaiho
- casserole
- shooter
- columbine
- wednesdays
- soul
- accumulation
- squash
- calm
- debate
- schools
- amd's
- lee's
- managers
- myspace
- relaxing
- bahar
- antarctica
- atmosphere
- pinpoint
- payments
- illinois
- louisiana
- cfo
- pool
- vyas
- morel
- mysore
- rise
- sdfa
- newspaper
- calorie
- dangerous
- sunrise
- mostly
- dining
- shake
- flood
- prescription
- mix
- view
- jana
- spa
- comments
- pear
- factor
- clearance
- northern
- language
- arnold
- exxon
- mobil
- dragon
- fruit
- differences
- seashells
- seashore
- velocity
- motorolla
- haggis
- fiji
- irwin
- similarities
- hypertrophy
- sharukh
- implement
- kazakhstan
- mediterranean
- roman
- grigorean
- hardword
- quead
- amphibious
- roberts
- climatic
- tornado
- prone
- rising
- declining
- megatel
- denzel
- washington's
- citizens
- arm
- persos
- belarus
- gyllenhal
- geology
- helicopter
- iphone's
- drained
- manger
- navy
- daikin
- jerk
- nexus
- interaction
- platform
- tweeting
- at&t
- mahaboobsayyad
- kellogg
- ashmit
- ismail
- listing
- enalen
- projects
- clara
- clinic
- exams
- ammunition
- mark's
- divya
- jjnzt
- activation
- andy
- terry's
- brenden
- jeffrey
- burnette
- protests
- joshua
- pianist
- whiz
- schadenfraude
- rials
- storage
- bot
- provided
- massachusetts
- channin
- store's
- rump
- prior
- re
- intelligent
- recognise
- irobot
- areas
- lighter
- yell
- uses
- cn
- gadgets
- skynet
- marie
- lamb
- balcony
- nyt
- bennett
- ralph
- pda
- balloon
- maps
- degeneres
- character
- evans
- actor
- fitbit
- malika
- shivaji
- attitude
- lily's
- concerned
- upon
- startup
- stuffs
- tawa
- relative
- legacy
- cst
- leah
- remini
- mortgage
- amed
- cleaners
- seal
- abita
- grammar
- backdoor
- minimize
- leisure
- billie
- spicy
- training
- comfortably
- sunburn
- minneapolis
- habits
- braking
- notifier
- swan
- thoughts
- pleasure
- those
- kashmirstart
- sells
- i'dl
- kettle
- 'false'
- rta
- valia's
- visiting
- techno
- mornings
- mow
- cbs
- slightly
- francine
- vice
- postpone
- mins
- xyz
- hwood
- kept
- spider
- reopen
- billy
- connery's
- eiffel
- itinerary
- crash
- valentine's
- likexchange
- divorce
- danville
- il
- government
- menus
- capabara
- origin
- assistance
- vicinity
- chit
- drinks
- flabbergasted
- xy
- self
- double
- castle
- refrigerator
- bakery
- spray
- pyramids
- bio
- basic
- humans
- schwarzenegger
- inchoate
- rules
- caftan
- raleigh
- hobby
- ajay
- devgn
- corden
- aud
- prevailing
- kenny's
- crew
- aww
- spying
- employer
- thier
- juanpedro
- craig
- leon's
- looked
- players
- costs
- providers
- sydney
- documentary
- hyphen
- represent
- strings
- pianos
- acoustical
- celeb
- pong
- linear
- turn_down
- reaches
- strength
- routine
- billboard
- piano
- ed
- sheeran
- diet
- vietnamese
- yams
- grandmother's
- rihana
- require
- stressed
- option
- affected
- acquire
- retrieve
- clarion
- congress
- turiellos
- mates
- solar
- dice
- jalapenos
- wished
- painting
- therapy
- warehouse
- mop
- neighbor
- flappy
- returns
- someones
- spring
- wonton
- moves
- jagger
- fishing
- hiphop
- dunkin
- donut
- atlantic
- daughters
- hula
- hoop
- lessons
- scrote's
- indie
- grief
- lebron
- naughty
- preprogrammed
- alt
- needy
- sharpen
- butcher
- knife
- pulled
- starbuck's
- backward
- terrorist
- invaders
- parent
- crescent
- brewhouse
- prado
- science
- playlists
- debbie's
- sleeping
- searched
- lindsey
- lohan
- competitions
- subtracting
- challenge
- beer
- gainers
- chili's
- frubs
- police
- softly
- practical
- assessment
- bonefish
- rotating
- placed
- lakers
- barenaked
- ladies
- lord
- rings
- mar
- sneakers
- artists
- sanantha
- shuffles
- shuffled
- bardonia
- county
- analyze
- pattern
- girls
- league
- fjords
- nothing
- brewing
- smurfs
- tommy's
- lovin
- cottage
- ming
- photosynthesis
- danny's
- repeated
- peaceful
- migrations
- zydeco
- inkheart
- seller
- occurence
- telegraph
- invited
- wifi
- levels
- willie
- nelson
- dolores
- alter
- retirement
- professional
- development
- sainsburys
- byron's
- floyd
- raingear
- notorious
- bone
- explanation
- database
- likely
- lucky
- irish
- sshow
- ramsey
- aired
- sprint
- preparing
- academy
- yeshudas
- angels
- dancing
- aretha
- franklin's
- layers
- glass
- kuch
- hai
- wakey
- knitting
- mujhe
- feb
- king's
- malinda
- parents
- mirchi
- gallon
- seen
- parks
- safest
- evacuation
- beautiful
- sofia
- francs
- consequences
- various
- dicaprio's
- networth
- phelps
- disk
- constructed
- concern
- effectively
- lawrence
- zac
- galifrankas
- wheat
- prediction
- schemes
- mega
- capricorns
- dinky
- lanegan's
- princess
- pregnant
- smallest
- americans
- retweet
- insta
- sonys
- bk
- alzacz
- kohls
- cleanliness
- pizzahut
- delay
- lpg
- satisfied
- choke
- suqcom
- repairs
- killing
- miller
- budgets
- iamironman
- gbaby
- gma
- loves
- kate's
- margaret
- ben's
- brady
- palmer
- homework
- tax
- regional
- archive
- fitness
- vault
- footloose
- child's
- damage
- petco
- canceled
- passing
- pikes
- peak
- avatar
- diverge
- maron
- fault
- sword
- eventual
- contest
- dangal
- mauritania
- abs
- wondering
- southampton
- resources
- soy
- lexmark's
- hilly
- lyon
- beirut
- tribute
- madrid
- ate
- sweat
- charlize
- theron
- atif
- aslam
- capture
- actual
- shane
- dawson
- zedd
- snooker
- loquaciousness
- sholay
- tofu
- nightmare
- avenged
- sevenfold
- matters
- prompt
- panic
- brilliant
- boston's
- mckinleyville
- astrology
- strait
- countdown
- cats
- fruits
- embassy
- pita
- gyros
- negotiations
- hairdresser
- courteous
- enthusiastic
- funk
- sense
- heathens
- cabinet
- irctc
- stored
- shutoff
- glasses
- ella
- fitzgerald
- rover's
- vet
- polar
- bears
- oceanside
- medicine
- anita
- barrow
- burrito
- oliver
- covering
- ground
- zucchini
- textile
- antebellum
- chimes
- covington
- species
- bees
- cranston
- kilometer
- behaved
- rudely
- jimi
- hendrix
- calms
- outwards
- califonia
- composed
- hint
- shipping
- frosting
- sport
- napoleon
- hill
- athens
- middletown
- shirts
- sample
- politician
- investigated
- rapper
- con
- cuisine
- wizard
- brick
- conroe
- iterate
- architect
- salon
- babaji
- passed
- maryland
- surya
- monopoly
- avenue
- considering
- celebration
- brewed
- galoshes
- tutorials
- workouts
- millenium
- toward
- neighbourhood
- bannon
- storming
- reoccurring
- longtime
- sweetheart
- memos
- starfish
- centaur
- philippines
- oar
- departs
- preferably
- latte
- sides
- pentagon
- fashioned
- rescheduled
- transportation
- twins
- duker
- deadline
- samurai
- obaba
- bp
- ambiance
- automatically
- object's
- boost
- morale
- jogging
- spell
- firefly
- mura
- masa
- checklist
- biographies
- sucked
- congested
- avinash
- commando
- jolie's
- instrumentals
- clarksville
- tablespoons
- surveys
- flour
- acela
- calone
- bucket
- fulls
- valid
- references
- critical
- perpetuate
- luncheon
- ohm's
- values
- plying
- expectations
- musician
- mindsweper
- throughout
- noontime
- included
- tour's
- voted
- walgreens
- chickens
- monday's
- crankshaft
- surfer
- lunchtime
- skramz
- compounds
- diabetes
- might
- reservation
- homosapien
- engadget
- boeing
- brisbane
- ear
- headphones
- minimum
- worry
- snowplows
- burying
- driveway
- adapt
- destroy
- impanema
- equipment
- turnt
- attractive
- conducted
- cinnamon
- freshener
- watsapp
- bean
- awfully
- entitled
- murderer
- ford
- forties
- scenery
- morocco
- sf
- blokus
- preacher
- taken
- stormy
- centers
- ethics
- popup
- mysterious
- puts
- stage
- considerations
- lourie
- artic
- scoop
- carion
- merced
- bypass
- passwords
- quantico
- grade
- examples
- cuisines
- hibernate
- bear
- published
- authors
- tempo
- keidis
- tidal
- cookoff
- zones
- probable
- summerfest
- dogs
- aren't
- necessarily
- carolina
- eleventh
- chilling
- sleeve
- invoking
- term
- herald
- maria's
- poltergeist
- imagine
- uv
- index
- johncena
- instruct
- oscillate
- liter
- nelly
- shawarma
- baster
- pali
- vilnius
- tabs
- debates
- singers
- activated
- ozzy
- osbourne
- danish
- happypeoplecom
- accounting
- backpack
- im
- puttanesca
- keeps
- worse
- wrigley
- braise
- loin
- carnatic
- bases
- nick
- swisher
- stolen
- clouds
- cleared
- bola's
- norman
- reedus
- screwdriver
- window
- volcanoes
- rowan
- atkinson
- minneapoliscity
- delicacies
- monitor
- overall
- gymnastics
- channels
- kxly
- botswana
- enjoyable
- spectre
- chane
- decentralized
- men's
- freeze
- postal
- becomes
- ccn
- berth
- michigan
- composition
- shahi
- panner
- dakar
- jakarta
- equalizer
- weird
- barely
- rodriguez
- oklahoma
- giraffes
- margarita
- difficult
- crabs
- firework
- probability
- tools
- emigration
- legislation
- pdf
- cheeseburgers
- applications
- adopters
- priest
- walks
- mechanic
- h
- showers
- signs
- contrast
- recollect
- gm's
- duck
- beavers
- tail
- lucking
- horkersd
- wo
- myrtle
- hr
- steam
- entirety
- anirudh
- colored
- tropical
- bedrooms
- yellowish
- elephants
- expenses
- contents
- warmer
- royksopp
- etc
- progressives
- peoples
- cultures
- unset
- iceland
- mp
- mangalore
- tanya
- quad
- particulars
- insert
- tvf
- formidable
- origins
- eden
- depressed
- mc
- donalds
- rub
- regrets
- judgments
- scope
- intellectual
- capacity
- ahmadabad
- stethoscope
- superstitions
- rl
- stine
- quinoa
- martial
- smooth
- damn
- speeding
- stephen
- halley
- barry
- jealous
- siri's
- java
- scenarios
- pc
- transfer
- tw
- agent
- nightime
- creamy
- mirch
- dil
- cannon
- cameras
- process
- merriam
- webster
- dubstep
- rangoon
- wines
- older
- navigate
- chandelier
- egs
- recognize
- subscriptions
- mileage
- studies
- microphone
- immigrant
- electronics
- careful
- paint
- fund
- success
- resolved
- bola
- eva's
- roller
- augusta
- midtown
- surprise
- children's
- dongle
- seashell
- bots
- fallen
- centimeters
- poisoning
- sci
- fi
- outcome
- reform
- sleepy
- moderate
- chrome
- ultraviolet
- george's
- geek
- courses
- rundown
- legend
- equipments
- usher
- manor
- advertisers
- clue
- depending
- strongest
- outstation
- fallout
- shoal
- lastfm
- relocate
- pollution
- awareness
- bryce
- jessie
- carol
- nsnbc
- vacuumed
- chives
- splits
- arbor
- receiving
- toast
- futures
- brokers
- routes
- fixed
- additional
- switches
- church's
- governor
- enacted
- grams
- guitarists
- android
- babe
- sonny
- sear
- eliminate
- remain
- uc
- polk
- pakistani
- bedside
- reshuffle
- frida
- devil's
- rusk
- actors
- pakistan
- happenings
- sit
- montauk
- beethoven
- legends
- sunshine
- mothers
- smoke
- feels
- rockies
- miamy
- operations
- addition
- subtraction
- incite
- annoying
- cristiano
- ronaldo
- spin
- cows
- jenny
- spread
- wallstreet
- selections
- nashik
- ipl
- oswald
- chambers
- horoscope
- mgk
- dog's
- residing
- cricketer
- dhoni
- byron
- fluctuations
- talks
- palermo
- shallowest
- bbcnews
- nsdl
- flights
- lineup
- stick
- ribs
- jeopardy
- timetables
- emi
- maya
- mackensie
- osteen
- jimmie's
- adjustments
- precocious
- fork
- husband's
- audi
- hibachi
- disputed
- crack
- visible
- boiling
- rogan
- karachi
- babysitter
- kidnapping
- hamburgers
- madonnas
- lessen
- ipo
- greenville
- carries
- creamed
- pickled
- herring
- tackle
- brush
- geyser
- savings
- torey
- hurt
- subscribe
- picks
- birthdate
- goals
- cairo
- projected
- patrick's
- capita
- honda
- intended
- hurriedly
- activates
- it'll
- wsj
- spy
- broods
- grommet
- steven's
- underground
- seahawks
- participants
- workday
- ammi
- nightlife
- donner
- summit
- ukraine's
- ended
- arrangements
- altucher's
- writer
- fortune
- brisket
- grant
- audiobooks
- twilight
- bass
- hunger
- roses
- barbecue
- tuna
- deadly
- killers
- finally
- trilogy
- grisham
- goblet
- roadblocks
- birthday's
- biscuits
- lawyers
- steve's
- kari
- labyrinth
- commonwealth
- sharma
- gulf
- petrol
- earthly
- ultimate
- ending
- allison
- canberra
- honolulu
- flash
- salman
- gresham
- hindustani
- stroganoff
- sock
- creates
- geo
- traits
- moral
- rein
- blood
- slayer
- pro
- bono
- succinct
- dalls
- somethings
- sharp
- izzo
- whiny
- bitch
- macaroni
- nights
- jumper
- blind
- cure
- cancer
- vibrant
- sloth
- transition
- recycling
- bbc's
- columbia
- kentucky
- hire
- opera
- prefer
- avoid
- sort
- comedy
- compassionate
- nc
- va
- riddles
- segment
- youth
- charity
- surrounding
- punjabi
- sharply
- lovett
- barber
- label
- hypocrisy
- subscriber
- captain
- disillusion
- hyderabad
- dashboard
- storm
- barrel
- panasonic
- clinton
- canasta
- mittens
- badra
- amit
- trivedi
- crystal
- lewis's
- everywhere
- rue
- evaporated
- mma
- offered
- tutoring
- peas
- dream
- cafes
- lauderdale
- deletion
- precise
- parliamentary
- remotely
- connection
- calendars
- stupidest
- shovel
- western
- cutting
- ll
- rapping
- spelling
- mama
- tatum's
- fulton
- universal
- garner
- chill
- icebo
- college's
- rehman
- soundcloud
- scorecards
- ketchup
- jimmy's
- crate
- lexmark
- preference
- females
- federal
- andreas
- sportsnet
- favourites
- janice
- bins
- pamela
- covered
- rhapsody
- italian's
- ke
- panera
- remainders
- tandoori
- sukhwinder
- sunidhi
- etymology
- googleplex
- slide
- wearing
- trivial
- pursuit
- cancels
- martina
- mcbride
- finances
- vocab
- zipcode
- compaq
- composer
- margarine
- jonathan
- entrepreneur
- extended
- combo
- memories
- tupac
- affects
- drunks
- ford's
- liked
- dealership
- olky
- realtor
- thighs
- ourselves
- economics
- medication
- gross
- domestic
- donaldson
- prostate
- wicker
- rooms
- instrumental
- savannah
- outing
- affleck
- quotes
- tire
- montana
- exhausted
- acoustic
- commercials
- convenience
- consciousness
- serge
- gainsbourg
- windows
- turks
- generate
- pedicures
- btaxes
- departures
- frasier
- amazon's
- bluetooth
- verus
- neat
- forecasted
- bing's
- dropped
- recurrent
- candidate
- aware
- blackeyed
- pees
- prince's
- perimeter
- rectangle
- aaron
- carter
- involve
- drugs
- lighten
- slicker
- rains
- cloud
- carrot
- popcorn
- carmike
- cinemas
- greater
- minestart
- frog
- lenon
- unique
- hanging
- hung
- sporty
- seldom
- jocko's
- kid's
- viewers
- cantonese
- usage
- specs
- bugatti
- veyron
- chief
- blockbuster
- krishnarajpuram
- interstate
- hammers
- obligatory
- wonder
- southeast
- marlon
- brando
- ferrel
- tal
- obidallah
- manoeuvres
- merita
- rotate
- changs
- pepsi
- shanghai
- branden
- wind
- landmarks
- dvr
- congestion
- valentines
- eastwind
- lomaine
- geneva
- officially
- hopkins
- takjistan
- dimmer
- karo
- apne
- aur
- karna
- chahta
- hu
- purchased
- otherplace
- giraffe
- ute
- requirement
- watts
- powerful
- bulb
- oclock
- nba
- hulu
- composing
- melissas
- millilitres
- spoons
- goulash
- thor
- harischand
- mg
- i95
- sb
- kilo
- diana
- llyod
- webber
- wool
- penultimate
- bang
- philosophers
- nietzche
- focault
- profession
- kilograms
- turkeys
- bibulous
- angeline
- atm
- narwhal
- kilamanjaro
- captia
- volkswagen
- onkyo
- av
- receiver
- ipad
- aniston's
- summarize
- ice
- jindel
- pump
- nikki
- minaj
- nationality
- snoodle
- yemen
- sudan
- unprompted
- organization
- megan
- fares
- engage
- functioning
- dinar
- conservative
- korean
- sahara
- kingdom
- antartica
- telugu
- tamil
- tsunami
- rajani
- khanth
- venture
- goalkeeper
- dushambe
- abrupt
- hbo
- sopranos
- parana
- cave
- anime
- posters
- johny
- depp
- invisible
- graphical
- joli
- pricing
- beech
- nuclear
- triad
- hilton
- borders
- lucille
- redhead
- geraldine
- ferraro
- bde
- lowered
- phrases
- nicole
- mcgoat's
- manipulate
- roip
- nasa
- google's
- davy
- crockett
- springsteen's
- richest
- costliest
- easily
- gm
- psso
- kroner
- maple
- trees
- christie
- brinkley
- libraries
- gmb
- key
- mongolia
- anastasia
- telekenesis
- promise
- stray
- cruise's
- starring
- odyssey
- polish
- zloty
- hook
- ups
- integral
- exponential
- berkshire
- hathaway
- tables
- pink's
- alligator
- porto
- tommy
- hilfiger
- print
- networks
- snaps
- celebrate
- bina
- yay
- smiley
- emoticon
- commented
- folgers
- hathway
- huge
- lfi
- tagged
- treated
- hersheys
- aircel
- nastyburger
- linkedin
- tracy
- waiter
- drain
- charge
- neptunal
- poorly
- waited
- inappropriate
- potus
- accounts
- vodafone
- complaining
- spoiled
- positive
- tumblr
- unpleasant
- overpricing
- cheating
- connected
- else's
- greetings
- thought
- waste
- excess
- micro
- lodge
- snapdeal
- sonic
- hole
- sole
- patel's
- insect
- packet
- elsewhere
- moan
- easyjet
- snotty
- expired
- xl
- sizes
- filing
- applebee's
- angela
- merkel
- swagging
- moto
- sluggish
- flavia
- mum
- jacob's
- existing
- cannot
- pleas
- mahmoud
- ebay
- smsayyad1985
- kishore17051985
- fedex
- truette
- petey's
- tessa
- gaurav
- karen
- mongomery
- llc
- joseph
- turnpike
- accumulated
- deadlines
- fees
- ppt
- emergency
- missing
- carl's
- attach
- physical
- drill
- marilyn
- jugal
- here's
- bug
- sarasigmon123
- lindafancy55
- markpolomm
- gary's
- mailing
- bill's
- erins
- beth's
- wont
- stacy
- cadwell
- tori
- aloud
- brenda
- thisome
- smurfette
- smithjoe
- hwacuk
- chong
- giselle
- bosses
- havent
- frieda's
- jjjindia
- exists
- batch
- samuelwaters
- joose
- hellen
- builders
- accepted
- victor
- taxi's
- terry
- macdonald
- yahoocom
- metion
- rodger
- christy's
- otp
- jayesh
- tried
- morgan's
- office's
- rob
- qerwerq
- secured
- gerry
- raj's
- junable
- shopyourway
- reference
- jhonny's
- marissa
- rosa
- bert
- ana
- goddammit
- pronounce
- serious
- recheck
- slowly
- failed
- fuck
- executed
- clearly
- errors
- showed
- races
- thursdays
- funky
- handmaid's
- beam
- scotty
- debit
- wiki
- editor's
- automobiles
- promo
- discount
- director
- act
- bejeweled
- aside
- snakes
- ladders
- marsala
- influx
- bayou
- reasonably
- tapas
- az
- ddlj
- meatball
- newscast
- bibber
- tmz
- devon
- applebees
- hihop
- doggie
- feelings
- radios
- litle
- tsos
- congratulate
- links
- treble
- flame
- eta
- encourage
- students
- choices
- lobby
- vf
- chore
- butterfly
- clips
- urban
- regular
- bi-weekly
- baltimore
- sport's
- breakups
- dale's
- brea
- douglasville
- fundraiser
- dolphines
- maradona
- pe
- becky
- appointed
- deputy
- utar
- pradesh
- anniston
- handy
- sainsbury's
- attenuate
- parcel
- jakes
- bristo
- stressful
- deposit
- mathematical
- superstar
- survivor
- destiny's
- westcombe
- facility
- oboe
- mcnamara
- abolish
- swim
- repair
- grub
- hub
- ill
- dec
- dreams
- wyatts
- obstacle
- poach
- dental
- rose
- davinci
- trevor
- noah
- ncaa
- entrapreneur
- sanam
- differs
- ave
- hopsin
- enya
- wbc
- accordingly
- remarks
- sufi
- beibers
- arrested
- sensor
- music's
- author
- antwerp
- cnn's
- foodnetworkcom
- customize
- preferred
- unable
- duct
- tape
- gooseto
- apig
- ringer
- secure
- passage
- tomatoes
- wan
- senelena
- americano
- makeup
- robotics
- teleconference
- robotic
- poughkeepsie
- steel
- day's
- soundtrack
- tobymac
- transit
- gloria
- furious
- nazi
- hunting
- effect
- marvin
- gaye
- pasadena
- ca
- constrain
- singles
- outer
- nowhereville
- comfortable
- erica
- grebe
- wooly
- trigonametry
- obsessed
- graphics
- undone
- tough
- treasury
- toledo
- munich
- obtain
- nutritionally
- balanced
- internal
- locks
- exit
- mocking
- lyft
- transaction
- tasty
- mixture
- according
- hands
- supports
- canceling
- congressman's
- lenin
- spagetti
- controversial
- statements
- walker
- humor
- nkotb
- jon
- snow's
- possibility
- wellington
- nz
- advantages
- disadvantages
- driver
- towels
- stretch
- gear
- joey
- crimson
- chose
- pineapple
- asparagus
- teaspoons
- bling
- medieval
- engines
- foods
- hurts
- cannibal
- tonic
- bitcoin
- collection
- hidden
- figures
- brasil
- politic
- superb
- dalida
- capuccino
- analysts
- thankama
- kodaikanal
- vote
- burritto
- chipolte
- abut
- sedaka
- chamber
- rfi
- knock
- cnncom
- remchi
- fl
- ortcars
- flip
- wire
- thriller
- fiasco
- breaks
- dam
- paradise
- presidency
- sigur
- ros
- socks
- van
- halen
- wayne
- spare
- lightness
- appropriately
- both
- musics
- coastal
- cry
- friend's
- wore
- veganism
- picnic
- regent
- visited
- therapist
- inauguration
- swatishs
- dorothy
- known
- supervision
- superbowl
- eric's
- bday
- kar
- abhi
- achche
- ache
- rahe
- honge
- mhz
- sponge
- bistros
- brownies
- tenderloin
- enchiladas
- gluten
- hotdog
- row
- bing
- notebook
- pulldown
- clearer
- medford
- drivers
- waverley
- canal
- connecting
- summers
- gibraltar
- monoprice
- mxblue
- mechanical
- turbulence
- carey
- blunder
- factorial
- depends
- commands
- stand
- draymond
- susumu
- hirasawa
- yosemite
- '200'
- baguette
- stonehenge
- douriff
- ivf
- ivr
- litt
- runs
- hesitant
- crock
- guetta
- malaysia
- whelers
- sadness
- william
- coral
- daft
- punk
- sandle
- santha
- ingerman
- calc
- shibaru
- alcohols
- nano
- gina
- desta
- mgmt
- bana
- talking
- garvin
- trilly
- nytimes
- chhana
- mereya
- favor
- strained
- cooler
- films
- einstein's
- aroma
- ska
- raphsody
- trebuchet
- forth
- relate
- qualifications
- kirk
- franklin
- arithmetic
- skyfall
- bathrooms
- raghu
- dixit
- reports
- availables
- haddock
- odd
- cape
- cod
- noisy
- dull
- hackernews
- porn
- pad
- fight
- fighter
- nzd
- melodious
- burton
- helena
- campaign
- mcclanahan
- mummy's
- motown
- rasgulla
- janta
- pvt
- ltd
- heartthrob
- justin's
- velociraptor
- hippo
- senatra
- giggle
- peru
- nirvana
- anirudh's
- retro
- mf
- doom
- summarise
- ariana
- grande
- predicted
- creed
- user
- desire
- kenny
- roger
- sia's
- thrills
- wapo
- stockholm
- okinawa
- occasionally
- shuffling
- veggie
- mukkala
- mukkabilla
- guardian
- anytime
- themes
- horror
- ennema
- eatha
- homestead
- forever
- mayor's
- stance
- council
- master
- louies
- keane's
- fears
- noe
- reggae
- largo
- swiftm
- afi's
- xinhua
- dedicated
- bottom
- franks
- yelawolf
- ucl
- flop
- grammys
- espn
- joni
- mitchell
- shot
- tequila
- sleepyhead
- aces
- redder
- edms
- lamp's
- loudest
- brolly
- thao
- nguyen
- interior
- dine
- dogwalking
- nytimescom
- overcast
- deactive
- foo
- disasters
- opacity
- dea
- guam
- drug
- abuse
- itzhak
- perlman
- drawing
- sweden
- bombing
- ireland
- poll
- hotha
- defrosting
- salt
- toggle
- spb
- weatherit
- either
- forecasts
- intellicast
- weathercom
- orevena
- recorder
- pizzahouse
- reorganize
- sticky
- umbrellas
- opened
- cleaned
- shakin
- bakey
- tips
- hypoallergenic
- sarcastic
- cheat
- ii
- developers
- edg
- yaad
- dilana
- kahin
- samantha's
- rita's
- adding
- bro's
- attendees
- maggie
- valet
- groomer
- timeframe
- pete
- faculty
- parade
- greens
- jack's
- walter
- gemma
- nail
- arora's
- namkeen
- tonights
- ggg
- tie
- iheartradio
- rov
- javan
- wfrn
- kicks
- osteen's
- wgrr
- lite
- prairie
- companion
- palhunik
- pudding
- tutorial
- welsh
- rarebit
- oatmeal
- pathia
- achieve
- veg
- pulav
- crockpot
- prepared
- keno
- pinball
- fishdom
- nfs
- harvest
- crops
- farmvile
- millionaires
- vodka
- depend
- pon
- stationary
- mad
- errands
- paav
- queried
- pepper
- rowling
- shadi
- viewed
- mlb
- heavyweight
- citadel
- scene
- circus
- trolls
- grab
- kung
- fu
- bowery
- railway
- coach
- fare
- metrolink
- navigation
- westwood
- layfayette
- inconvenience
- emotions
- arrahman
- cosmos
- multiplied
- abouts
- hitting
- eliot's
- el
- ribbons
- sperm
- whale
- eaten
- lbs
- pinhead
- timeliness
- defining
- thesaurus
- penalty
- approval
- poetry
- ambulance
- jello
- shots
- ferrell
- stassi
- schroedder's
- tacobell
- hierophant
- zealand
- stockton
- emissions
- blowing
- kennedy
- ziggurat
- gagas
- gretszky
- hemingway
- pages
- earn
- nobel
- actions
- sloths
- parton's
- madagascar
- acting
- tiangle
- trebuchets
- googs
- gandhiji
- amal
- brazil
- adviser
- rich
- acted
- rihanas
- stamp
- mugy
- msn
- busdriver
- fergie
- flick
- ribons
- nakumuka
- postmates
- complaintum
- glinder
- gta
- rcg
- outlet
- hadock
- mclanahan
- coal
- mumy's
- piza
- wheelers
- guarante
- debugging
- debuging
- proper
- sung
- bilando
- terrorism
- cover
- dimmed
- vanilli
- marauthr
- wooo
- michael's
- shutdown
- pittsburgh
- precipitation
- riff
- portland
- muggy
- giants
- banks
- steelz
- ensure
- ricky
- matin
- tyres
- plant
- chased
- advice
- gossiping
- society
- mitushree
- hairdresser's
- biology
- fsu
- reflect
- yashas
- vinay
- vally
- closed
- shoutcast
- pilkington
- soda
- powder
- sambar
- cookingforu
- thermonuclear
- battleship
- cereal
- wishlist
- wrist
- hipsterhood
- duncan
- trussel's
- simmons
- wide
- cisco
- crafts
- sporting
- presently
- sheffield
- septa
- lead
- fransisco
- washingdon
- evolution
- mariah
- kya
- tum
- mere
- karne
- karoge
- acts
- assembly
- idle
- brand
- meridian
- terranova
- guarantee
- marian
- fields
- farthest
- philippine
- cambodia
- situated
- foruget
- monopricechanical
- peenth
- moroco
- piz
- tre
- supplwn
- viki
- shivle
- loged
- applebe
- acess
- madagar
- anp
- socer
- subcribe
- pluged
- imigration
- audiowan
- debie's
- imediately
- f
- locar
- duark
- rebeca
- talle
- banas
- ragh
- acordingly
- wakely
- en
- bress
- acording
- stefanan
- puding
- vegie
- vius
- edie
- domizza
- eg
- cheeseiza
- ocurred
- brightnes
- alaba
- memory
- fransico
- sunderland
- boogie
- butt
- leviathan
- shinning
- premier
- cleanup
- wacky
- aman
- cherry
- bomb
- solstice
- silently
- closet
- nakumukka
- shed
- responses
- yankees
- investigation
- dooa
- pieces
- imogen
- heap
- stole
- dynamite
- cease
- operating
- rained
- uptown
- suggestion
- finlee's
- bedtime
- sockets
- sanfranscio
- abbas
- cn's
- vibrate
- cooling
- sheriffs
- hike
- ilayaraja
- speaking
- un
- storms
- roof
- tube
- jackpot
- classmates
- extremely
- somewhere
- drenched
- sentient
- budy
- heating
- apt
- parenting
- concerning
- seo
- searches
- sticking
- patterns
- numbered
- impression
- reunion
- presents
- mehta
- willing
- discuss
- evan
- parker
- violin
- lesson
- musicworkz
- registration
- opens
- evening's
- thursday's
- nineteenth's
- hayathis
- shower
- corresponding
- showcase
- famosa
- kamp
- neal
- brenan
- gx
- nonstop
- rm
- giver
- traveller
- knowledge
- crispy
- supper
- broil
- noodle
- stuffed
- maccoroni
- almond
- clash
- clans
- ping
- keeper
- enemy
- coc
- detergent
- corn
- dill
- pickles
- ranch
- dressing
- lentils
- translate
- toothpaste
- rearrange
- groups
- santana
- pritzker
- winners
- libertarian
- mc's
- vitaly
- nfl
- mythical
- oriented
- provisional
- experiences
- safely
- themselves
- mia
- reducing
- learly
- court
- vin
- diesel
- netbooks
- chinatown
- aberdeen
- queens
- luni
- purchasing
- timing
- bagmati
- narrow
- egypt
- represented
- revelation
- britain
- aamir
- priyanka
- middleton
- base
- original
- nhl
- goal
- scorers
- osteoperosis
- laws
- correlation
- motivation
- ncaaa
- tense
- touring
- framework
- adel
- diamond
- schwarzenegger's
- stomachs
- cow
- chairs
- steph
- subjegant
- pategonia
- michelle
- todlers
- stakes
- tinder
- matches
- fjord
- equator
- triumph
- hell
- moldova
- presley's
- wa
- rajinikanth
- basalt
- bali
- airplane
- hash
- lit
- <sos/eos>
two_pass: false
use_preprocessor: true
token_type: word
bpemodel: null
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 30
num_freq_mask: 2
apply_time_mask: true
time_mask_width_range:
- 0
- 40
num_time_mask: 2
normalize: utterance_mvn
normalize_conf: {}
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
deliberationencoder: conformer
deliberationencoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: linear
normalize_before: true
macaron_style: true
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
decoder2: rnn
decoder2_conf: {}
postdecoder: hugging_face_transformers
postdecoder_conf:
model_name_or_path: bert-base-cased
output_size: 512
required:
- output_dir
- token_list
version: 0.10.3a2
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
cf430869c492adc069eacafb9e018cf1
|
adache/xlm-roberta-base-finetuned-panx-en
|
adache
|
xlm-roberta
| 9 | 5 |
transformers
| 0 |
token-classification
| true | false | false |
mit
| null |
['xtreme']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-en
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3921
- F1: 0.6922
## Model description
More information needed
## Intended uses & limitations
More information needed
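As a minimal illustration of intended use (the example sentence is an assumption, not taken from the training data), the checkpoint can be loaded for token classification with the `transformers` pipeline:
```python
from transformers import pipeline
# Load the fine-tuned checkpoint for NER-style token classification
ner = pipeline(
    "token-classification",
    model="adache/xlm-roberta-base-finetuned-panx-en",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)
# Illustrative input sentence
print(ner("Jeff Dean works for Google in Mountain View."))
```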
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.1465 | 1.0 | 50 | 0.5838 | 0.4777 |
| 0.5055 | 2.0 | 100 | 0.4477 | 0.6374 |
| 0.3713 | 3.0 | 150 | 0.3921 | 0.6922 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
|
02b5e65de5a60904de4be6373de8b00d
|
tomekkorbak/vigorous_saha
|
tomekkorbak
|
gpt2
| 23 | 0 |
transformers
| 0 | null | true | false | false |
mit
|
['en']
|
['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 7,596 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vigorous_saha
This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets.
## Model description
More information needed
## Intended uses & limitations
More information needed
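As a rough sketch only (this is a from-scratch research checkpoint, so generation quality is not guaranteed and the prompt/decoding settings below are assumptions), the model can presumably be sampled with the standard `transformers` text-generation pipeline:
```python
from transformers import pipeline
# Causal language-model sampling; prompt and decoding settings are illustrative assumptions
generator = pipeline("text-generation", model="tomekkorbak/vigorous_saha")
print(generator("The quick brown fox", max_new_tokens=32, do_sample=True, top_p=0.9)[0]["generated_text"])
```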
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- training_steps: 50354
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.5.1
- Tokenizers 0.11.6
# Full config
{'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000',
'tomekkorbak/pii-pile-chunk3-50000-100000',
'tomekkorbak/pii-pile-chunk3-100000-150000',
'tomekkorbak/pii-pile-chunk3-150000-200000',
'tomekkorbak/pii-pile-chunk3-200000-250000',
'tomekkorbak/pii-pile-chunk3-250000-300000',
'tomekkorbak/pii-pile-chunk3-300000-350000',
'tomekkorbak/pii-pile-chunk3-350000-400000',
'tomekkorbak/pii-pile-chunk3-400000-450000',
'tomekkorbak/pii-pile-chunk3-450000-500000',
'tomekkorbak/pii-pile-chunk3-500000-550000',
'tomekkorbak/pii-pile-chunk3-550000-600000',
'tomekkorbak/pii-pile-chunk3-600000-650000',
'tomekkorbak/pii-pile-chunk3-650000-700000',
'tomekkorbak/pii-pile-chunk3-700000-750000',
'tomekkorbak/pii-pile-chunk3-750000-800000',
'tomekkorbak/pii-pile-chunk3-800000-850000',
'tomekkorbak/pii-pile-chunk3-850000-900000',
'tomekkorbak/pii-pile-chunk3-900000-950000',
'tomekkorbak/pii-pile-chunk3-950000-1000000',
'tomekkorbak/pii-pile-chunk3-1000000-1050000',
'tomekkorbak/pii-pile-chunk3-1050000-1100000',
'tomekkorbak/pii-pile-chunk3-1100000-1150000',
'tomekkorbak/pii-pile-chunk3-1150000-1200000',
'tomekkorbak/pii-pile-chunk3-1200000-1250000',
'tomekkorbak/pii-pile-chunk3-1250000-1300000',
'tomekkorbak/pii-pile-chunk3-1300000-1350000',
'tomekkorbak/pii-pile-chunk3-1350000-1400000',
'tomekkorbak/pii-pile-chunk3-1400000-1450000',
'tomekkorbak/pii-pile-chunk3-1450000-1500000',
'tomekkorbak/pii-pile-chunk3-1500000-1550000',
'tomekkorbak/pii-pile-chunk3-1550000-1600000',
'tomekkorbak/pii-pile-chunk3-1600000-1650000',
'tomekkorbak/pii-pile-chunk3-1650000-1700000',
'tomekkorbak/pii-pile-chunk3-1700000-1750000',
'tomekkorbak/pii-pile-chunk3-1750000-1800000',
'tomekkorbak/pii-pile-chunk3-1800000-1850000',
'tomekkorbak/pii-pile-chunk3-1850000-1900000',
'tomekkorbak/pii-pile-chunk3-1900000-1950000'],
'is_split_by_sentences': True},
'generation': {'force_call_on': [25354],
'metrics_configs': [{}, {'n': 1}, {'n': 2}],
'scenario_configs': [{'generate_kwargs': {'do_sample': True,
'max_length': 128,
'min_length': 10,
'temperature': 0.7,
'top_k': 0,
'top_p': 0.9},
'name': 'unconditional',
'num_samples': 2048}],
'scorer_config': {}},
'kl_gpt3_callback': {'max_tokens': 64, 'num_samples': 4096},
'model': {'from_scratch': True,
'gpt2_config_kwargs': {'reorder_and_upcast_attn': True,
'scale_attn_by': True},
'path_or_name': 'gpt2'},
'objective': {'alpha': 1, 'name': 'Unlikelihood', 'score_threshold': 0.0},
'tokenizer': {'path_or_name': 'gpt2'},
'training': {'dataloader_num_workers': 0,
'effective_batch_size': 64,
'evaluation_strategy': 'no',
'fp16': True,
'hub_model_id': 'vigorous_saha',
'hub_strategy': 'all_checkpoints',
'learning_rate': 0.0005,
'logging_first_step': True,
'logging_steps': 1,
'num_tokens': 3300000000,
'output_dir': 'training_output2',
'per_device_train_batch_size': 16,
'push_to_hub': True,
'remove_unused_columns': False,
'save_steps': 25354,
'save_strategy': 'steps',
'seed': 42,
'warmup_ratio': 0.01,
'weight_decay': 0.1}}
# Wandb URL:
https://wandb.ai/tomekkorbak/apo/runs/1c8cpo9k
|
832f6101f96141b070c4c2da6e9dfc72
|
EdBianchi/vit-fire-detection
|
EdBianchi
|
vit
| 30 | 6 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,964 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# vit-fire-detection
This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0126
- Precision: 0.9960
- Recall: 0.9960
## Model description
More information needed
## Intended uses & limitations
More information needed
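A minimal usage sketch, assuming standard `transformers` image-classification inference (the image path is a hypothetical placeholder):
```python
from transformers import pipeline
# Classify a single frame; "example_frame.jpg" is a hypothetical local file
classifier = pipeline("image-classification", model="EdBianchi/vit-fire-detection")
print(classifier("example_frame.jpg"))
```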
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|
| 0.1018 | 1.0 | 190 | 0.0375 | 0.9934 | 0.9934 |
| 0.0484 | 2.0 | 380 | 0.0167 | 0.9961 | 0.9960 |
| 0.0357 | 3.0 | 570 | 0.0253 | 0.9948 | 0.9947 |
| 0.0133 | 4.0 | 760 | 0.0198 | 0.9961 | 0.9960 |
| 0.012 | 5.0 | 950 | 0.0203 | 0.9947 | 0.9947 |
| 0.0139 | 6.0 | 1140 | 0.0204 | 0.9947 | 0.9947 |
| 0.0076 | 7.0 | 1330 | 0.0175 | 0.9961 | 0.9960 |
| 0.0098 | 8.0 | 1520 | 0.0115 | 0.9974 | 0.9974 |
| 0.0062 | 9.0 | 1710 | 0.0133 | 0.9960 | 0.9960 |
| 0.0012 | 10.0 | 1900 | 0.0126 | 0.9960 | 0.9960 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.14.0.dev20221111
- Datasets 2.8.0
- Tokenizers 0.12.1
|
39e6a000edb5a5aa29f4759b7b11580a
|
skandavivek2/spam-classifier
|
skandavivek2
|
distilbert
| 18 | 20 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,454 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spam-classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0614
- Accuracy: 0.9885
## Model description
More information needed
## Intended uses & limitations
More information needed
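A minimal usage sketch, assuming standard `transformers` text-classification inference (the example message is illustrative only):
```python
from transformers import pipeline
# Score a message; the text below is an illustrative assumption
classifier = pipeline("text-classification", model="skandavivek2/spam-classifier")
print(classifier("Congratulations! You have won a free prize. Reply WIN to claim."))
```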
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 262 | 0.0968 | 0.9799 |
| 0.0573 | 2.0 | 524 | 0.0693 | 0.9856 |
| 0.0573 | 3.0 | 786 | 0.0599 | 0.9871 |
| 0.0111 | 4.0 | 1048 | 0.0551 | 0.9885 |
| 0.0111 | 5.0 | 1310 | 0.0614 | 0.9885 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
|
7c5dfbc8b225777b6a096c525d64fe0c
|
SnailPoo/distilbert-base-uncased-finetuned-ner
|
SnailPoo
|
distilbert
| 13 | 9 |
transformers
| 0 |
token-classification
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,552 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-ner
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1079
- Precision: 0.8408
- Recall: 0.8686
- F1: 0.8545
- Accuracy: 0.9638
## Model description
More information needed
## Intended uses & limitations
More information needed
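A minimal usage sketch, assuming standard `transformers` token-classification inference (the sentence below is illustrative only):
```python
from transformers import AutoModelForTokenClassification, AutoTokenizer, pipeline
model_name = "SnailPoo/distilbert-base-uncased-finetuned-ner"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForTokenClassification.from_pretrained(model_name)
# aggregation_strategy="simple" merges sub-word tokens into whole entity spans
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Send the invoice to Acme Corp in New York by Friday."))
```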
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 453 | 0.1322 | 0.7759 | 0.8370 | 0.8053 | 0.9498 |
| 0.246 | 2.0 | 906 | 0.1115 | 0.8284 | 0.8616 | 0.8446 | 0.9611 |
| 0.1012 | 3.0 | 1359 | 0.1079 | 0.8408 | 0.8686 | 0.8545 | 0.9638 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
d53c590770c4870a60a980339a0571c7
|
Nithiwat/wav2vec2-colab
|
Nithiwat
|
wav2vec2
| 25 | 1 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,473 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-colab
This model is a fine-tuned version of [facebook/wav2vec2-xlsr-53-espeak-cv-ft](https://huggingface.co/facebook/wav2vec2-xlsr-53-espeak-cv-ft) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: inf
- Wer: 0.9155
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 48
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7628 | 7.83 | 400 | inf | 0.9155 |
| 1.0544 | 15.68 | 800 | inf | 0.9155 |
| 7.5478 | 23.52 | 1200 | inf | 0.9155 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.10.0+cu113
- Datasets 2.8.0
- Tokenizers 0.13.2
|
575ac1a1527962679affc1a370fdca9d
|
Medivvv/distilbert-imdb
|
Medivvv
|
distilbert
| 10 | 9 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['imdb']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,081 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
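A minimal sketch for sentiment classification on IMDB-style reviews (the review text is an illustrative assumption):
```python
from transformers import pipeline
# Binary sentiment classification fine-tuned on IMDB
sentiment = pipeline("text-classification", model="Medivvv/distilbert-imdb")
print(sentiment("This movie was an absolute delight from start to finish."))
```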
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 391 | 0.1829 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
|
28bb8428aeb1ec79b026b883a9cdd585
|
speechbrain/tts-fastspeech2-ljspeech
|
speechbrain
| null | 5 | 8 |
speechbrain
| 0 |
text-to-speech
| false | false | false |
apache-2.0
|
['en']
|
['LJSpeech']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['text-to-speech', 'TTS', 'speech-synthesis', 'fastspeech2', 'speechbrain']
| false | true | true | 3,885 | false |
<iframe src="https://ghbtns.com/github-btn.html?user=speechbrain&repo=speechbrain&type=star&count=true&size=large&v=2" frameborder="0" scrolling="0" width="170" height="30" title="GitHub"></iframe>
<br/><br/>
**IMPORTANT: This is a work in progress. This model does not provide meaningful output at the moment.**
# Text-to-Speech (TTS) with FastSpeech2 trained on LJSpeech
This repository provides all the necessary tools for Text-to-Speech (TTS) with SpeechBrain using a [FastSpeech2](https://arxiv.org/abs/2006.04558) model pretrained on [LJSpeech](https://keithito.com/LJ-Speech-Dataset/).
The pre-trained model takes a short text as input and produces a spectrogram as output. One can get the final waveform by applying a vocoder (e.g., HiFIGAN) on top of the generated spectrogram.
## Install SpeechBrain
```
pip install speechbrain
```
Please note that we encourage you to read our tutorials and learn more about
[SpeechBrain](https://speechbrain.github.io).
### Perform Text-to-Speech (TTS) with FastSpeech2
```
import torchaudio
from speechbrain.pretrained import FastSpeech2
from speechbrain.pretrained import HIFIGAN
# Initialize TTS (FastSpeech2) and Vocoder (HiFIGAN)
fastspeech2 = FastSpeech2.from_hparams(source="speechbrain/tts-fastspeech2-ljspeech", savedir="tmpdir_tts")
hifi_gan = HIFIGAN.from_hparams(source="speechbrain/tts-hifigan-libritts-16kHz", savedir="tmpdir_vocoder")
# Text to synthesize (illustrative example sentence)
input_text = "Mary had a little lamb."
# Running the TTS
mel_output, durations, pitch, energy = fastspeech2.encode_text(input_text)
# Running Vocoder (spectrogram-to-waveform)
waveforms = hifi_gan.decode_batch(mel_output)
# Save the waveform
torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 16000)
```
If you want to generate multiple sentences in one shot, you can do it in this way:
```
from speechbrain.pretrained import FastSpeech2
fastspeech2 = FastSpeech2.from_hparams(source="speechbrain/tts-fastspeech2-ljspeech", savedir="tmpdir_tts")
items = [
"A quick brown fox jumped over the lazy dog",
"How much wood would a woodchuck chuck?",
"Never odd or even"
]
mel_outputs, durations, pitch, energy = fastspeech2.encode_batch(items)
```
### Inference on GPU
To perform inference on the GPU, add `run_opts={"device":"cuda"}` when calling the `from_hparams` method.
### Training
The model was trained with SpeechBrain.
To train it from scratch follow these steps:
1. Clone SpeechBrain:
```bash
git clone https://github.com/speechbrain/speechbrain/
```
2. Install it:
```bash
cd speechbrain
pip install -r requirements.txt
pip install -e .
```
3. Run Training:
```bash
cd recipes/LJSpeech/TTS/fastspeech2/
python train.py --device=cuda:0 --max_grad_norm=1.0 --data_folder=/your_folder/LJSpeech-1.1 hparams/train.yaml
```
You can find our training results (models, logs, etc) [here](https://drive.google.com/drive/folders/1Yb8CDCrW7JF1_jg8Xc4U15z3W37VjrY5?usp=share_link).
### Limitations
The SpeechBrain team does not provide any warranty on the performance achieved by this model when used on other datasets.
# **About SpeechBrain**
- Website: https://speechbrain.github.io/
- Code: https://github.com/speechbrain/speechbrain/
- HuggingFace: https://huggingface.co/speechbrain/
# **Citing SpeechBrain**
Please, cite SpeechBrain if you use it for your research or business.
```bibtex
@misc{speechbrain,
title={{SpeechBrain}: A General-Purpose Speech Toolkit},
author={Mirco Ravanelli and Titouan Parcollet and Peter Plantinga and Aku Rouhe and Samuele Cornell and Loren Lugosch and Cem Subakan and Nauman Dawalatabad and Abdelwahab Heba and Jianyuan Zhong and Ju-Chieh Chou and Sung-Lin Yeh and Szu-Wei Fu and Chien-Feng Liao and Elena Rastorgueva and François Grondin and William Aris and Hwidong Na and Yan Gao and Renato De Mori and Yoshua Bengio},
year={2021},
eprint={2106.04624},
archivePrefix={arXiv},
primaryClass={eess.AS},
note={arXiv:2106.04624}
}
```
|
23672991290e5fea2eeae5b0cad3efbc
|
willcai/wav2vec2_common_voice_accents_3
|
willcai
|
wav2vec2
| 11 | 8 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null |
['common_voice']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 2,487 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents_3
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0042
## Model description
More information needed
## Intended uses & limitations
More information needed
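A minimal usage sketch, assuming standard `transformers` speech-recognition inference on a 16 kHz recording (the audio path is a hypothetical placeholder):
```python
from transformers import pipeline
# Transcribe a local 16 kHz audio file; "sample.wav" is a hypothetical path
asr = pipeline("automatic-speech-recognition", model="willcai/wav2vec2_common_voice_accents_3")
print(asr("sample.wav")["text"])
```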
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 48
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 384
- total_eval_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.584 | 1.27 | 400 | 1.1439 |
| 0.481 | 2.55 | 800 | 0.1986 |
| 0.2384 | 3.82 | 1200 | 0.1060 |
| 0.1872 | 5.1 | 1600 | 0.1016 |
| 0.158 | 6.37 | 2000 | 0.0942 |
| 0.1427 | 7.64 | 2400 | 0.0646 |
| 0.1306 | 8.92 | 2800 | 0.0612 |
| 0.1197 | 10.19 | 3200 | 0.0423 |
| 0.1129 | 11.46 | 3600 | 0.0381 |
| 0.1054 | 12.74 | 4000 | 0.0326 |
| 0.0964 | 14.01 | 4400 | 0.0293 |
| 0.0871 | 15.29 | 4800 | 0.0239 |
| 0.0816 | 16.56 | 5200 | 0.0168 |
| 0.0763 | 17.83 | 5600 | 0.0202 |
| 0.0704 | 19.11 | 6000 | 0.0224 |
| 0.0669 | 20.38 | 6400 | 0.0208 |
| 0.063 | 21.66 | 6800 | 0.0074 |
| 0.0585 | 22.93 | 7200 | 0.0126 |
| 0.0548 | 24.2 | 7600 | 0.0086 |
| 0.0512 | 25.48 | 8000 | 0.0080 |
| 0.0487 | 26.75 | 8400 | 0.0052 |
| 0.0455 | 28.03 | 8800 | 0.0062 |
| 0.0433 | 29.3 | 9200 | 0.0042 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
256805d79de6cf634076175f42192ef5
|
google/maxim-s3-deblurring-gopro
|
google
| null | 7 | 257 |
keras
| 8 |
image-to-image
| false | false | false |
apache-2.0
|
['en']
|
['gopro']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['vision', 'maxim', 'image-to-image']
| false | true | true | 2,522 | false |
# MAXIM pre-trained on GoPro for image deblurring
MAXIM model pre-trained for image deblurring. It was introduced in the paper [MAXIM: Multi-Axis MLP for Image Processing](https://arxiv.org/abs/2201.02973) by Zhengzhong Tu, Hossein Talebi, Han Zhang, Feng Yang, Peyman Milanfar, Alan Bovik, Yinxiao Li and first released in [this repository](https://github.com/google-research/maxim).
Disclaimer: The team releasing MAXIM did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
MAXIM introduces a shared MLP-based backbone for different image processing tasks such as image deblurring, deraining, denoising, dehazing, low-light image enhancement, and retouching. The following figure depicts the main components of MAXIM:

## Training procedure and results
The authors didn't release the training code. For more details on how the model was trained, refer to the [original paper](https://arxiv.org/abs/2201.02973).
As per the [table](https://github.com/google-research/maxim#results-and-pre-trained-models), the model achieves a PSNR of 32.86 and an SSIM of 0.961.
## Intended uses & limitations
You can use the raw model for image deblurring tasks.
The model is [officially released in JAX](https://github.com/google-research/maxim). It was ported to TensorFlow in [this repository](https://github.com/sayakpaul/maxim-tf).
### How to use
Here is how to use this model:
```python
from huggingface_hub import from_pretrained_keras
from PIL import Image
import tensorflow as tf
import numpy as np
import requests
url = "https://github.com/sayakpaul/maxim-tf/raw/main/images/Deblurring/input/1fromGOPR0950.png"
image = Image.open(requests.get(url, stream=True).raw)
image = np.array(image)
image = tf.convert_to_tensor(image)
image = tf.image.resize(image, (256, 256))
model = from_pretrained_keras("google/maxim-s3-deblurring-gopro")
predictions = model.predict(tf.expand_dims(image, 0))
```
For a more elaborate prediction pipeline, refer to [this Colab Notebook](https://colab.research.google.com/github/sayakpaul/maxim-tf/blob/main/notebooks/inference-dynamic-resize.ipynb).
### Citation
```bibtex
@article{tu2022maxim,
title={MAXIM: Multi-Axis MLP for Image Processing},
author={Tu, Zhengzhong and Talebi, Hossein and Zhang, Han and Yang, Feng and Milanfar, Peyman and Bovik, Alan and Li, Yinxiao},
journal={CVPR},
year={2022},
}
```
|
cb3981e1ddc9215e7f8a19721e37077e
|
sd-concepts-library/final-fantasy-logo
|
sd-concepts-library
| null | 10 | 0 | null | 2 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 1 | 1 | 0 |
[]
| false | true | true | 1,232 | false |
### Final Fantasy logo on Stable Diffusion
This is the `<final-fantasy-logo>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as a `style`:





|
2f66d8d891d12c4b666cf9e0d1210d36
|
nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-xlnetBaseCased-bertTokenizer-12April2022
|
nntadotzip
|
xlnet
| 12 | 9 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,304 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlnet-base-cased-IUChatbot-ontologyDts-xlnetBaseCased-bertTokenizer-12April2022
This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4240
## Model description
More information needed
## Intended uses & limitations
More information needed
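A minimal usage sketch, assuming standard `transformers` question-answering inference (the question/context pair is an illustrative assumption):
```python
from transformers import pipeline
qa = pipeline(
    "question-answering",
    model="nntadotzip/xlnet-base-cased-IUChatbot-ontologyDts-xlnetBaseCased-bertTokenizer-12April2022",
)
# Illustrative question/context pair
print(qa(question="Where is the library?", context="The library is on the second floor of the student center."))
```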
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 357 | 0.6451 |
| 0.8416 | 2.0 | 714 | 0.4428 |
| 0.5227 | 3.0 | 1071 | 0.4240 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
|
9cc836eefb2fbed86772ce020390afb9
|
nlp04/kobart_32_6e-5_datav2_min30_lp5.0_temperature1.0
|
nlp04
|
bart
| 15 | 2 |
transformers
| 0 |
text2text-generation
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,596 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kobart_32_6e-5_datav2_min30_lp5.0_temperature1.0
This model is a fine-tuned version of [gogamza/kobart-base-v2](https://huggingface.co/gogamza/kobart-base-v2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6110
- Rouge1: 35.8879
- Rouge2: 12.9302
- Rougel: 23.7819
- Bleu1: 30.0048
- Bleu2: 17.5297
- Bleu3: 10.3153
- Bleu4: 5.9092
- Gen Len: 50.8508
## Model description
More information needed
## Intended uses & limitations
More information needed
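A minimal usage sketch, assuming standard `transformers` summarization inference; KoBART expects Korean input, so the placeholder document below is illustrative only:
```python
from transformers import pipeline
summarizer = pipeline("summarization", model="nlp04/kobart_32_6e-5_datav2_min30_lp5.0_temperature1.0")
# Placeholder Korean text ("Put the Korean dialogue or document to summarize here.")
document = "요약할 한국어 대화 또는 문서를 여기에 넣습니다."
print(summarizer(document, max_length=64)[0]["summary_text"])
```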
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 6e-05
- train_batch_size: 32
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Bleu1 | Bleu2 | Bleu3 | Bleu4 | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:-------:|:-------:|:------:|:-------:|
| 1.5664 | 3.78 | 5000 | 2.6110 | 35.8879 | 12.9302 | 23.7819 | 30.0048 | 17.5297 | 10.3153 | 5.9092 | 50.8508 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu117
- Datasets 2.7.1
- Tokenizers 0.13.2
|
227c716c4b04f6f803d9d6520bdc55fe
|
sagard21/python-code-explainer
|
sagard21
|
t5
| 9 | 24 |
transformers
| 2 |
summarization
| true | false | false |
mit
|
['en']
|
['sagard21/autotrain-data-code-explainer']
|
{'emissions': 5.393079045128973}
| 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['autotrain', 'summarization']
| false | true | true | 1,269 | false |
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 2745581349
- CO2 Emissions (in grams): 5.3931
# Model Description
This model is an attempt to simplify code understanding by generating a line-by-line explanation of a given source code. It was fine-tuned from the Salesforce/codet5-large model and is currently trained on a small subset of Python snippets.
# Model Usage
```py
from transformers import (
AutoModelForSeq2SeqLM,
AutoTokenizer,
AutoConfig,
pipeline,
)
model_name = "sagard21/python-code-explainer"
tokenizer = AutoTokenizer.from_pretrained(model_name, padding=True)
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
config = AutoConfig.from_pretrained(model_name)
model.eval()
pipe = pipeline("summarization", model=model_name, config=config, tokenizer=tokenizer)
raw_code = """
def preprocess(text: str) -> str:
text = str(text)
text = text.replace("\n", " ")
tokenized_text = text.split(" ")
preprocessed_text = " ".join([token for token in tokenized_text if token])
return preprocessed_text
"""
print(pipe(raw_code)[0]["summary_text"])
```
## Validation Metrics
- Loss: 2.156
- Rouge1: 29.375
- Rouge2: 18.128
- RougeL: 25.445
- RougeLsum: 28.084
- Gen Len: 19.000
|
0b88d1b9d404d885422e4ebfa8f6e0fd
|
zhuqi/t5-large-coqr-canard
|
zhuqi
|
t5
| 8 | 5 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
|
['en']
|
['CANARD']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['t5-large', 'text2text-generation', 'conversational question rewriting']
| true | true | true | 1,766 | false |
# t5-large-coqr-canard
This model is a fine-tuned version of [t5-large](https://huggingface.co/t5-large) on the [CANARD](https://sites.google.com/view/qanta/projects/canard) dataset.
It achieves the following results on the test set:
- Loss: 0.3064
- Bleu: 77.1979
- Generation Length: 9.576
## Model description
The CANARD dataset rewrites the original questions in conversations to make them context-independent (understandable without context).
In contrast, this model is trained in the reverse direction: it rewrites context-independent questions into conversational questions, aiming to create fluent dialog with anaphora and ellipsis.
Input:
```
Rewrite the question according to the given context to make the dialog fluent using anaphora and ellipsis.
question: How did people respond to Superstar Billy Graham's return?
context: Superstar Billy Graham
Return to WWWF (1977-1981)
Why did he return to the WWWF?
an agreement with promoter Vincent J. McMahon (Senior
What was his agreement with McMahon?
I don't know.
```
Target:
```
How did people respond to his return?
```
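A minimal inference sketch using the input format above (assuming the question/context fields are joined with newlines) with the standard `transformers` text2text-generation pipeline:
```python
from transformers import pipeline
rewriter = pipeline("text2text-generation", model="zhuqi/t5-large-coqr-canard")
prompt = (
    "Rewrite the question according to the given context to make the dialog fluent using anaphora and ellipsis.\n"
    "question: How did people respond to Superstar Billy Graham's return?\n"
    "context: Superstar Billy Graham\n"
    "Return to WWWF (1977-1981)\n"
    "Why did he return to the WWWF?\n"
    "an agreement with promoter Vincent J. McMahon (Senior\n"
    "What was his agreement with McMahon?\n"
    "I don't know."
)
# Expected target (from the example above): "How did people respond to his return?"
print(rewriter(prompt, max_length=32)[0]["generated_text"])
```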
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.001
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 512
- total_eval_batch_size: 512
- optimizer: Adafactor
- lr_scheduler_type: linear
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|
| No log | 1.0 | 62 | 0.2987 | 77.2361 | 9.4534 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0+cu113
- Datasets 2.6.1
- Tokenizers 0.12.1
|
c63c9455960b4a32228282b44f643906
|
stevenwh/indobert-base-p2-finetuned-mer
|
stevenwh
|
bert
| 13 | 5 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,680 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# indobert-base-p2-finetuned-mer
This model is a fine-tuned version of [indobenchmark/indobert-base-p2](https://huggingface.co/indobenchmark/indobert-base-p2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1964
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 7.7183 | 1.0 | 28 | 6.6949 |
| 6.3179 | 2.0 | 56 | 5.7267 |
| 5.5857 | 3.0 | 84 | 5.2449 |
| 5.17 | 4.0 | 112 | 4.8586 |
| 4.893 | 5.0 | 140 | 4.6777 |
| 4.7121 | 6.0 | 168 | 4.4832 |
| 4.5402 | 7.0 | 196 | 4.3532 |
| 4.4698 | 8.0 | 224 | 4.2814 |
| 4.4012 | 9.0 | 252 | 4.2612 |
| 4.3725 | 10.0 | 280 | 4.2325 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.6.1
- Tokenizers 0.13.2
|
9d3dc94e56934dc178f491f4c47d1a9e
|
huxxx657/roberta-base-finetuned-deletion-squad-15
|
huxxx657
|
roberta
| 13 | 8 |
transformers
| 0 |
question-answering
| true | false | false |
mit
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,156 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-finetuned-deletion-squad-15
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1057
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.1127 | 1.0 | 5531 | 1.1057 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
|
f4d68f0b6f6706848128c4175f7d9777
|
Helsinki-NLP/opus-mt-guw-de
|
Helsinki-NLP
|
marian
| 10 | 7 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 776 | false |
### opus-mt-guw-de
* source languages: guw
* target languages: de
* OPUS readme: [guw-de](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/guw-de/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/guw-de/opus-2020-01-20.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.guw.de | 22.7 | 0.434 |
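## Example usage

A minimal sketch (not part of the original card), assuming the standard MarianMT interface in `transformers`; the placeholder stands in for a real Gun (guw) source sentence.

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-guw-de"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

src_text = ["<a Gun (guw) sentence to translate>"]  # replace with real source text
batch = tokenizer(src_text, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```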
|
520173af9fc0e2018b73ccd8aefc8e36
|
yanaiela/roberta-base-epoch_26
|
yanaiela
|
roberta
| 9 | 3 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
|
['en']
|
['wikipedia', 'bookcorpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['roberta-base', 'roberta-base-epoch_26']
| false | true | true | 2,102 | false |
# RoBERTa, Intermediate Checkpoint - Epoch 26
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained in part of a work that studies how simple statistics from data,
such as co-occurrences affects model predictions, which are described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_26.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
894662836067cab4c0686a7a9019f98b
|
Neha2608/distilbert-base-uncased-finetuned-emotion
|
Neha2608
|
distilbert
| 16 | 0 |
transformers
| 0 |
text-classification
| true | false | false |
apache-2.0
| null |
['emotion']
| null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,360 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.9185
- F1: 0.9185
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:-----------:|:------:|
| 0.8026 | 1.0 | 250 | 0.3114 | 0.905 | 0.9035 |
| 0.2409 | 2.0 | 500 | 0.2207 | 0.9185 | 0.9185 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.0+cu113
- Datasets 1.16.1
- Tokenizers 0.10.3
|
ff732a0273060b33987f72073620098c
|
CCMat/fluffalpaca-llama-v2
|
CCMat
| null | 20 | 46 |
diffusers
| 0 |
text-to-image
| true | false | false |
creativeml-openrail-m
| null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
| false | true | true | 1,581 | false |
# DreamBooth model for the fluffalpaca concept trained on the CCMat/db-aplaca dataset.
This is a Stable Diffusion model fine-tuned on the fluffalpaca concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of fluffalpaca llama**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `llama` images for the animal theme.
### Training Hyperparameters
Pretrained Model: [stabilityai/stable-diffusion-2](https://huggingface.co/stabilityai/stable-diffusion-2)<br>
Learning rate: 1e-6<br>
Steps: 1100<br>
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('CCMat/fluffalpaca-llama-1100')
image = pipeline().images[0]
image
```
## Samples
Prompt: "fluffalpaca llama in space by Enki Bilal"

Prompt: "fluffalpaca llama in front of the Eiffel Tower"

Prompt: "a photo of fluffalpaca llama swimming in the river"

Prompt: "a photo of fluffalpaca llama in front of the Colosseum in Rome, professional photograph"

Prompt: "USSR propoganda poster. Long live the fluffalpaca llama"

|
4b5c25be5b3abe881d3ea69ac2feab86
|
tranmc/Bronya_7.5e-7_4800
|
tranmc
| null | 33 | 14 |
diffusers
| 0 |
text-to-image
| false | false | false |
creativeml-openrail-m
| null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
['text-to-image', 'stable-diffusion']
| false | true | true | 523 | false |
### DreamBooth concept model Bronya, trained by tranmc with the [Shinja Zero SoTA DreamBooth_Stable_Diffusion](https://colab.research.google.com/drive/1G7qx6M_S1PDDlsWIMdbZXwdZik6sUlEh) notebook <br>
Test the concept with the [Shinja Zero no Notebook](https://colab.research.google.com/drive/1Hp1ZIjPbsZKlCtomJVmt2oX7733W44b0) <br>
Or test it with `diffusers` via the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb)
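A minimal `diffusers` sketch (added for illustration, not in the original card); the prompt below is only an assumption about the concept's instance prompt.

```python
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained("tranmc/Bronya_7.5e-7_4800")
image = pipe("a photo of Bronya").images[0]  # prompt is illustrative only
image.save("bronya-sample.png")
```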
Sample images of the concept: WIP
|
b2e7caf3423ea93112a0ada57ea8641d
|
Reggie/DeBERTa-v3-base-joke_detector
|
Reggie
|
deberta-v2
| 7 | 10 |
transformers
| 0 |
text-classification
| true | false | false |
mit
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['deberta', 'deberta-v3']
| false | true | true | 1,874 | false |
### What is this?
This model has been developed to detect "narrative-style" jokes, stories and anecdotes (i.e. they are narrated as a story) spoken during speeches or conversations etc. It works best when jokes/anecdotes are at least 40 words or longer. It is based on [Moritz Laurer's DeBERTa-v3](https://huggingface.co/MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli).
The training dataset was a private collection of around 2000 jokes. This model has not been trained or tested on one-liners, puns or Reddit-style language-manipulation jokes such as knock-knock, Q&A jokes etc.
See the example in the inference widget or the How to use section for what constitutes a narrative-style joke.
For a slightly less accurate model (0.4% less) that is 65% faster at inference, see the [Roberta model](https://huggingface.co/Reggie/muppet-roberta-base-joke_detector). For a much more inaccurate model (2.9% less) that is way faster at inference, see the [distilbert model](https://huggingface.co/Reggie/distilbert-joke_detector).
### Install these first
You'll need to `pip install transformers` and possibly `sentencepiece`.
### How to use
```python
from transformers import pipeline
import torch
device = 0 if torch.cuda.is_available() else -1
model_name = 'Reggie/DeBERTa-v3-base-joke_detector'
max_seq_len = 510
pipe = pipeline(model=model_name, device=device, truncation=True, max_length=max_seq_len)
is_it_a_joke = """A nervous passenger is about to book a flight ticket, and he asks the airlines' ticket seller, "I hope your planes are safe. Do they have a good track record for safety?" The airline agent replies, "Sir, I can guarantee you, we've never had a plane that has crashed more than once." """
result = pipe(is_it_a_joke) # [{'label': 'LABEL_1', 'score': 0.7313136458396912}]
print('This is a joke') if result[0]['label'] == 'LABEL_1' else print('This is not a joke')
```
|
1474d26788c06cbe1b13702c4dd77dda
|
anas-awadalla/t5-small-finetuned-squad-infilling-lr-5e-5
|
anas-awadalla
|
t5
| 17 | 1 |
transformers
| 0 |
text2text-generation
| true | false | false |
apache-2.0
| null |
['squad']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,059 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-squad-infilling-lr-5e-5
This model is a fine-tuned version of [google/t5-v1_1-small](https://huggingface.co/google/t5-v1_1-small) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 48
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.3.2
- Tokenizers 0.11.6
|
9258c451a12e4e1173d396ec93cac346
|
Helsinki-NLP/opus-mt-ccs-en
|
Helsinki-NLP
|
marian
| 11 | 13 |
transformers
| 0 |
translation
| true | true | false |
apache-2.0
|
['ka', 'ccs', 'en']
| null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 |
['translation']
| false | true | true | 2,097 | false |
### ccs-eng
* source group: South Caucasian languages
* target group: English
* OPUS readme: [ccs-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ccs-eng/README.md)
* model: transformer
* source language(s): kat
* target language(s): eng
* pre-processing: normalization + SentencePiece (spm12k,spm12k)
* download original weights: [opus2m-2020-07-31.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.zip)
* test set translations: [opus2m-2020-07-31.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.test.txt)
* test set scores: [opus2m-2020-07-31.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.kat-eng.kat.eng | 18.0 | 0.357 |
| Tatoeba-test.multi.eng | 18.0 | 0.357 |
### System Info:
- hf_name: ccs-eng
- source_languages: ccs
- target_languages: eng
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ccs-eng/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ka', 'ccs', 'en']
- src_constituents: {'kat'}
- tgt_constituents: {'eng'}
- src_multilingual: True
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm12k,spm12k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ccs-eng/opus2m-2020-07-31.test.txt
- src_alpha3: ccs
- tgt_alpha3: eng
- short_pair: ccs-en
- chrF2_score: 0.35700000000000004
- bleu: 18.0
- brevity_penalty: 1.0
- ref_len: 5992.0
- src_name: South Caucasian languages
- tgt_name: English
- train_date: 2020-07-31
- src_alpha2: ccs
- tgt_alpha2: en
- prefer_old: False
- long_pair: ccs-eng
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
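### Example usage

A minimal sketch (not part of the original card); the Georgian example sentence is illustrative only.

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ccs-en")
print(translator("გამარჯობა, როგორ ხარ?"))  # "Hello, how are you?" in Georgian (kat)
```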
|
3d3b776de8b460521ad0df43517a47ae
|
sd-concepts-library/paul-noir
|
sd-concepts-library
| null | 11 | 0 | null | 0 | null | false | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,232 | false |
### Paul Noir on Stable Diffusion
This is the `<paul-noir>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
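Alternatively, here is a minimal `diffusers` sketch (not part of the original instructions; the base checkpoint below is an assumption, so use whichever Stable Diffusion 1.x model the embedding was trained against):

```python
from diffusers import StableDiffusionPipeline
import torch

# Base checkpoint is an assumption, not specified by the concept library.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/paul-noir")

image = pipe("a photo of <paul-noir>").images[0]
image.save("paul-noir-example.png")
```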
Here is the new concept you will be able to use as an `object`:






|
3e463ab08b0de61b4bff51b8cb192c29
|
alexandrainst/da-emotion-classification-base
|
alexandrainst
|
bert
| 10 | 357 |
transformers
| 1 |
text-classification
| true | true | false |
cc-by-sa-4.0
|
['da']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 1,249 | false |
# Danish BERT for emotion classification
The BERT Emotion model classifies a Danish text in one of the following class:
* Glæde/Sindsro
* Tillid/Accept
* Forventning/Interrese
* Overasket/Målløs
* Vrede/Irritation
* Foragt/Modvilje
* Sorg/trist
* Frygt/Bekymret
It is based on the pretrained [Danish BERT](https://github.com/certainlyio/nordic_bert) model by BotXO which has been fine-tuned on social media data.
This model should be used after detecting whether the text contains emotion or not, using the binary [BERT Emotion model](https://huggingface.co/alexandrainst/da-binary-emotion-classification-base).
See the [DaNLP documentation](https://danlp-alexandra.readthedocs.io/en/latest/docs/tasks/sentiment_analysis.html#bert-emotion) for more details.
Here is how to use the model:
```python
from transformers import BertTokenizer, BertForSequenceClassification
model = BertForSequenceClassification.from_pretrained("alexandrainst/da-emotion-classification-base")
tokenizer = BertTokenizer.from_pretrained("alexandrainst/da-emotion-classification-base")
```
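The sketch below extends the example with a single prediction; the `text-classification` pipeline call and the Danish example sentence are illustrative additions, not part of the original card.

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="alexandrainst/da-emotion-classification-base",
)

# Danish for "I am so happy today!" (illustrative input only)
print(classifier("Jeg er så glad i dag!"))
```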
## Training data
The data used for training has not been made publicly available. It consists of social media data manually annotated in collaboration with Danmarks Radio.
|
33eea0afa1e3346ae5de9b6124752133
|
stanfordnlp/stanza-fa
|
stanfordnlp
| null | 19 | 51 |
stanza
| 1 |
token-classification
| false | false | false |
apache-2.0
|
['fa']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['stanza', 'token-classification']
| false | true | true | 580 | false |
# Stanza model for Persian (fa)
Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing.
Find more about it in [our website](https://stanfordnlp.github.io/stanza) and our [GitHub repository](https://github.com/stanfordnlp/stanza).
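A minimal usage sketch (not from the original card): download the Persian models and run the default pipeline on a short sentence.

```python
import stanza

stanza.download("fa")        # fetch the Persian (fa) models once
nlp = stanza.Pipeline("fa")  # default Persian pipeline (processors depend on the release)
doc = nlp("سلام دنیا")       # "Hello world" in Persian (illustrative input)
print(doc.sentences[0].words)
```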
This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo
Last updated 2022-10-12 02:57:21.212
|
031922644c74729269495ffe7ee40035
|
tftransformers/bart-base
|
tftransformers
| null | 6 | 4 | null | 0 | null | false | false | false |
apache-2.0
|
['en']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
[]
| false | true | true | 2,730 | false |
# BART (base-sized model)
BART model pre-trained on English language. It was introduced in the paper [BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension](https://arxiv.org/abs/1910.13461) by Lewis et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/bart).
Disclaimer: The team releasing BART did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
BART is a transformer encoder-decoder (seq2seq) model with a bidirectional (BERT-like) encoder and an autoregressive (GPT-like) decoder. BART is pre-trained by (1) corrupting text with an arbitrary noising function, and (2) learning a model to reconstruct the original text.
BART is particularly effective when fine-tuned for text generation (e.g. summarization, translation) but also works well for comprehension tasks (e.g. text classification, question answering).
## Intended uses & limitations
You can use the raw model for text infilling. However, the model is mostly meant to be fine-tuned on a supervised dataset. See the [model hub](https://huggingface.co/models?search=bart) to look for fine-tuned versions on a task that interests you.
### How to use
Here is how to use this model in tf_transformers:
```python
from tf_transformers.models import BartModel
from transformers import BartTokenizer

tokenizer = BartTokenizer.from_pretrained('facebook/bart-base')
model = BartModel.from_pretrained('facebook/bart-base')

# tf_transformers expects TensorFlow tensors
inputs = tokenizer("Hello, my dog is cute", return_tensors="tf")

inputs_tf = {}
inputs_tf["encoder_input_ids"] = inputs["input_ids"]
inputs_tf["encoder_input_mask"] = inputs["attention_mask"]
# Simple choice for this demo: start the decoder from the same token ids
inputs_tf["decoder_input_ids"] = inputs["input_ids"]

outputs_tf = model(inputs_tf)
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1910-13461,
author = {Mike Lewis and
Yinhan Liu and
Naman Goyal and
Marjan Ghazvininejad and
Abdelrahman Mohamed and
Omer Levy and
Veselin Stoyanov and
Luke Zettlemoyer},
title = {{BART:} Denoising Sequence-to-Sequence Pre-training for Natural Language
Generation, Translation, and Comprehension},
journal = {CoRR},
volume = {abs/1910.13461},
year = {2019},
url = {http://arxiv.org/abs/1910.13461},
eprinttype = {arXiv},
eprint = {1910.13461},
timestamp = {Thu, 31 Oct 2019 14:02:26 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1910-13461.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
|
2872bc2810571d323039217f9f01629f
|
jannesg/takalane_ssw_roberta
|
jannesg
|
roberta
| 8 | 8 |
transformers
| 0 |
fill-mask
| true | false | true |
mit
|
['tn']
| null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['tn', 'fill-mask', 'pytorch', 'roberta', 'masked-lm']
| false | true | true | 1,059 | false |
# Takalani Sesame - Tswana 🇿🇦
<img src="https://pbs.twimg.com/media/EVjR6BsWoAAFaq5.jpg" width="600"/>
## Model description
Takalani Sesame (named after the South African version of Sesame Street) is a project that aims to promote the use of South African languages in NLP, and in particular look at techniques for low-resource languages to equalise performance with larger languages around the world.
## Intended uses & limitations
#### How to use
```python
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("jannesg/takalane_ssw_roberta")
model = AutoModelWithLMHead.from_pretrained("jannesg/takalane_ssw_roberta")
```
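A fill-mask sketch extending the example above (the greeting below is only an illustrative input):

```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="jannesg/takalane_ssw_roberta",
    tokenizer="jannesg/takalane_ssw_roberta",
)

print(fill_mask("Dumela <mask>"))  # replace with any sentence containing one <mask>
```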
#### Limitations and bias
Updates will be added continuously to improve performance.
## Training data
Data collected from [https://wortschatz.uni-leipzig.de/en](https://wortschatz.uni-leipzig.de/en) <br/>
**Sentences:** 380
## Training procedure
No preprocessing. Standard Huggingface hyperparameters.
## Author
Jannes Germishuys [website](http://jannesgg.github.io)
|
44ced331f635941817198e940aae8cc0
|
minhhoque/swin-tiny-patch4-window7-224-finetuned-eurosat
|
minhhoque
|
swin
| 14 | 1 |
transformers
| 0 |
image-classification
| true | false | false |
apache-2.0
| null |
['imagefolder']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,492 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# swin-tiny-patch4-window7-224-finetuned-eurosat
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0775
- Accuracy: 0.9730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2658 | 1.0 | 190 | 0.1305 | 0.9615 |
| 0.1591 | 2.0 | 380 | 0.0781 | 0.9726 |
| 0.1364 | 3.0 | 570 | 0.0775 | 0.9730 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Datasets 2.7.1
- Tokenizers 0.13.2
|
59908dfda27c4d354517ef26f1fd7544
|
yanaiela/roberta-base-epoch_70
|
yanaiela
|
roberta
| 9 | 3 |
transformers
| 0 |
fill-mask
| true | false | false |
mit
|
['en']
|
['wikipedia', 'bookcorpus']
| null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['roberta-base', 'roberta-base-epoch_70']
| false | true | true | 2,102 | false |
# RoBERTa, Intermediate Checkpoint - Epoch 70
This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692),
trained on Wikipedia and the Book Corpus only.
We train this model for almost 100K steps, corresponding to 83 epochs.
We provide the 84 checkpoints (including the randomly initialized weights before the training)
to provide the ability to study the training dynamics of such models, and other possible use-cases.
These models were trained in part of a work that studies how simple statistics from data,
such as co-occurrences affects model predictions, which are described in the paper
[Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251).
This is RoBERTa-base epoch_70.
## Model Description
This model was captured during a reproduction of
[RoBERTa-base](https://huggingface.co/roberta-base), for English: it
is a Transformers model pretrained on a large corpus of English data, using the
Masked Language Modelling (MLM).
The intended uses, limitations, training data and training procedure for the fully trained model are similar
to [RoBERTa-base](https://huggingface.co/roberta-base). Two major
differences with the original model:
* We trained our model for 100K steps, instead of 500K
* We only use Wikipedia and the Book Corpus, as corpora which are publicly available.
### How to use
Using code from
[RoBERTa-base](https://huggingface.co/roberta-base), here is an example based on
PyTorch:
```
from transformers import pipeline
model = pipeline("fill-mask", model='yanaiela/roberta-base-epoch_83', device=-1, top_k=10)
model("Hello, I'm the <mask> RoBERTa-base language model")
```
## Citation info
```bibtex
@article{2207.14251,
Author = {Yanai Elazar and Nora Kassner and Shauli Ravfogel and Amir Feder and Abhilasha Ravichander and Marius Mosbach and Yonatan Belinkov and Hinrich Schütze and Yoav Goldberg},
Title = {Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions},
Year = {2022},
Eprint = {arXiv:2207.14251},
}
```
|
179ae0719334e83660357bfd78c054e8
|
Nyaaneet/donut-cord
|
Nyaaneet
|
vision-encoder-decoder
| 19 | 0 |
transformers
| 0 | null | true | false | false |
mit
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 948 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# donut-cord
This model is a fine-tuned version of [naver-clova-ix/donut-base](https://huggingface.co/naver-clova-ix/donut-base).
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
|
abfe37d9ac051c66d8ef0e2589990cc7
|
scasutt/wav2vec2-base_toy_train_data_random_noise_0.1
|
scasutt
|
wav2vec2
| 7 | 5 |
transformers
| 0 |
automatic-speech-recognition
| true | false | false |
apache-2.0
| null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 |
['generated_from_trainer']
| true | true | true | 1,798 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base_toy_train_data_random_noise_0.1
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9263
- Wer: 0.7213
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.1296 | 2.1 | 250 | 3.5088 | 1.0 |
| 3.0728 | 4.2 | 500 | 3.1694 | 1.0 |
| 1.8686 | 6.3 | 750 | 1.3414 | 0.9321 |
| 1.1241 | 8.4 | 1000 | 1.0196 | 0.8321 |
| 0.8704 | 10.5 | 1250 | 0.9387 | 0.7962 |
| 0.6734 | 12.6 | 1500 | 0.9309 | 0.7640 |
| 0.5832 | 14.7 | 1750 | 0.9329 | 0.7346 |
| 0.5207 | 16.8 | 2000 | 0.9060 | 0.7247 |
| 0.4857 | 18.9 | 2250 | 0.9263 | 0.7213 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
|
dbf86143339dedb9fb13fce30e39f9c9
|
EloimEssaim/svsv-dog-heywhale
|
EloimEssaim
| null | 17 | 10 |
diffusers
| 1 |
text-to-image
| true | false | false |
creativeml-openrail-m
| null | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 |
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
| false | true | true | 791 | false |
# DreamBooth model for the svsv concept trained by EloimEssaim.
This is a Stable Diffusion model fine-tuned on the svsv concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of svsv dog**
This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
## Description
This is a Stable Diffusion model fine-tuned on `dog` images for the animal theme
of the Hugging Face DreamBooth Hackathon, from the HF CN Community,
in collaboration with HeyWhale.
## Usage
```python
from diffusers import StableDiffusionPipeline
pipeline = StableDiffusionPipeline.from_pretrained('EloimEssaim/svsv-dog-heywhale')
image = pipeline().images[0]
image
```
|
d8b06ec4a26471d25de7512e50c8bf16
|