modelId (string, length 5–139) | author (string, length 2–42) | last_modified (timestamp[us, tz=UTC]: 2020-02-15 11:33:14 – 2025-08-29 18:27:06) | downloads (int64: 0 – 223M) | likes (int64: 0 – 11.7k) | library_name (string, 526 classes) | tags (list, length 1 – 4.05k) | pipeline_tag (string, 55 classes) | createdAt (timestamp[us, tz=UTC]: 2022-03-02 23:29:04 – 2025-08-29 18:26:56) | card (string, length 11 – 1.01M)
---|---|---|---|---|---|---|---|---|---|
Rustem/roberta-base-trained-50k-docs
|
Rustem
| 2022-03-16T12:38:46Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-16T12:33:53Z |
---
license: apache-2.0
---
|
RobertoMCA97/xlm-roberta-base-finetuned-panx-de-fr
|
RobertoMCA97
| 2022-03-16T12:24:41Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-16T12:03:40Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de-fr
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1667
- F1: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2885 | 1.0 | 715 | 0.1817 | 0.8287 |
| 0.1497 | 2.0 | 1430 | 0.1618 | 0.8442 |
| 0.0944 | 3.0 | 2145 | 0.1667 | 0.8582 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
RobertoMCA97/xlm-roberta-base-finetuned-panx-de
|
RobertoMCA97
| 2022-03-16T11:55:06Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"dataset:xtreme",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-15T11:29:05Z |
---
license: mit
tags:
- generated_from_trainer
datasets:
- xtreme
metrics:
- f1
model-index:
- name: xlm-roberta-base-finetuned-panx-de
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: xtreme
type: xtreme
args: PAN-X.de
metrics:
- name: F1
type: f1
value: 0.8590909090909091
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1380
- F1: 0.8591
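As a usage sketch (not part of the auto-generated card: the pipeline call and the German example sentence are assumptions based on the model's task and tags):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as an NER pipeline; sub-word predictions
# are merged into whole entities with the "simple" aggregation strategy.
ner = pipeline(
    "token-classification",
    model="RobertoMCA97/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)

entities = ner("Jeff Dean arbeitet bei Google in Zürich.")
for ent in entities:
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```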
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2642 | 1.0 | 525 | 0.1624 | 0.8251 |
| 0.1315 | 2.0 | 1050 | 0.1445 | 0.8508 |
| 0.0832 | 3.0 | 1575 | 0.1380 | 0.8591 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
anton-l/xtreme_s_xlsr_minds14_upd
|
anton-l
| 2022-03-16T11:52:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"minds14",
"google/xtreme_s",
"generated_from_trainer",
"dataset:xtreme_s",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-16T11:48:51Z |
---
license: apache-2.0
tags:
- minds14
- google/xtreme_s
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- f1
- accuracy
model-index:
- name: xtreme_s_xlsr_minds14_upd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_minds14_upd
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.FR-FR dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6303
- F1: 0.0223
- Accuracy: 0.0833
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
ixa-ehu/roberta-eus-mc4-base-cased
|
ixa-ehu
| 2022-03-16T11:49:27Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"basque",
"eu",
"arxiv:2203.08111",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-16T09:56:03Z |
---
language: eu
license: cc-by-nc-4.0
tags:
- basque
- roberta
---
# Roberta-eus mc4 base cased
This is a RoBERTa model for Basque presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, trained on different corpora:
- roberta-eus-euscrawl-base-cased: Basque RoBERTa model trained on Euscrawl, a corpus created using tailored crawling from Basque sites. EusCrawl contains 12,528k documents and 423M tokens.
- roberta-eus-euscrawl-large-cased: RoBERTa large trained on EusCrawl.
- roberta-eus-mC4-base-cased: Basque RoBERTa model trained on the Basque portion of mc4 dataset.
- roberta-eus-CC100-base-cased: Basque RoBERTa model trained on Basque portion of cc100 dataset.
The models have been tested on five different downstream tasks for Basque: Topic classification, Sentiment analysis, Stance detection, Named Entity Recognition (NER), and Question Answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). See summary of results below:
| Model | Topic class. | Sentiment | Stance det. | NER | QA | Average |
|----------------------------------|--------------|-----------|-------------|----------|----------|----------|
| roberta-eus-euscrawl-base-cased | 76.2 | 77.7 | 57.4 | 86.8 | 34.6 | 66.5 |
| roberta-eus-euscrawl-large-cased | **77.6** | 78.8 | 62.9 | **87.2** | **38.3** | **69.0** |
| roberta-eus-mC4-base-cased | 75.3 | **80.4** | 59.1 | 86.0 | 35.2 | 67.2 |
| roberta-eus-CC100-base-cased | 76.2 | 78.8 | **63.4** | 85.2 | 35.8 | 67.9 |
If you use any of these models, please cite the following paper:
```
@misc{artetxe2022euscrawl,
title={Does corpus quality really matter for low-resource languages?},
author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and
Olatz Perez-de-Viñaspre and Aitor Soroa},
year={2022},
eprint={2203.08111},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
ixa-ehu/roberta-eus-euscrawl-base-cased
|
ixa-ehu
| 2022-03-16T11:48:42Z | 14 | 2 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"basque",
"eu",
"arxiv:2203.08111",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-16T09:54:43Z |
---
language: eu
license: cc-by-nc-4.0
tags:
- basque
- roberta
---
# Roberta-eus Euscrawl base cased
This is a RoBERTa model for Basque presented in [Does corpus quality really matter for low-resource languages?](https://arxiv.org/abs/2203.08111). There are several models for Basque using the RoBERTa architecture, pre-trained on different corpora:
- roberta-eus-euscrawl-base-cased: Basque RoBERTa trained on Euscrawl, a corpus created using tailored crawling from Basque sites. EusCrawl contains 12,528k documents and 423M tokens.
- roberta-eus-euscrawl-large-cased: Basque RoBERTa large trained on EusCrawl.
- roberta-eus-mC4-base-cased: Basque RoBERTa trained on the Basque portion of mc4 dataset.
- roberta-eus-CC100-base-cased: Basque RoBERTa trained on Basque portion of cc100 dataset.
The models have been tested on five different downstream tasks for Basque: Topic classification, Sentiment analysis, Stance detection, Named Entity Recognition (NER), and Question Answering (refer to the [paper](https://arxiv.org/abs/2203.08111) for more details). See summary of results below:
| Model | Topic class. | Sentiment | Stance det. | NER | QA | Average |
|----------------------------------|--------------|-----------|-------------|----------|----------|----------|
| roberta-eus-euscrawl-base-cased | 76.2 | 77.7 | 57.4 | 86.8 | 34.6 | 66.5 |
| roberta-eus-euscrawl-large-cased | **77.6** | 78.8 | 62.9 | **87.2** | **38.3** | **69.0** |
| roberta-eus-mC4-base-cased | 75.3 | **80.4** | 59.1 | 86.0 | 35.2 | 67.2 |
| roberta-eus-CC100-base-cased | 76.2 | 78.8 | **63.4** | 85.2 | 35.8 | 67.9 |
If you use any of these models, please cite the following paper:
```
@misc{artetxe2022euscrawl,
title={Does corpus quality really matter for low-resource languages?},
author={Mikel Artetxe and Itziar Aldabe and Rodrigo Agerri and
Olatz Perez-de-Viñaspre and Aitor Soroa},
year={2022},
eprint={2203.08111},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
tae898/emoberta-base
|
tae898
| 2022-03-16T11:01:29Z | 124 | 5 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"text-classification",
"emoberta",
"en",
"dataset:MELD",
"dataset:IEMOCAP",
"arxiv:2108.12009",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-14T20:03:08Z |
---
language: en
tags:
- emoberta
- roberta
license: mit
datasets:
- MELD
- IEMOCAP
---
Check https://github.com/tae898/erc for the details
[Watch a demo video!](https://youtu.be/qbr7fNd6J28)
# Emotion Recognition in Conversation (ERC)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on?p=emoberta-speaker-aware-emotion-recognition-in)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=emoberta-speaker-aware-emotion-recognition-in)
At the moment, we only use the text modality to classify the emotion of each utterance. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP).
## Prerequisites
1. An x86-64 Unix or Unix-like machine
1. Python 3.8 or higher
1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't interfere with the system Python.
1. [`multimodal-datasets` repo](https://github.com/tae898/multimodal-datasets) (submodule)
1. `pip install -r requirements.txt`
## EmoBERTa training
First configure the hyperparameters and the dataset in `train-erc-text.yaml`, then run the command below from this directory (running inside a virtualenv is recommended):
```sh
python train-erc-text.py
```
This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`.
## Results on the test split (weighted f1 scores)
| Model | | MELD | IEMOCAP |
| -------- | ------------------------------- | :-------: | :-------: |
| EmoBERTa | No past and future utterances | 63.46 | 56.09 |
| | Only past utterances | 64.55 | **68.57** |
| | Only future utterances | 64.23 | 66.56 |
| | Both past and future utterances | **65.61** | 67.42 |
| | โ *without speaker names* | 65.07 | 64.02 |
The numbers above are means over five runs with different random seeds.
For more training and test details, check out `./results/`.
The trained checkpoints and related files can be downloaded [here](https://surfdrive.surf.nl/files/index.php/s/khREwk4MUI7MSnO/download); it's a fairly large zip file.
## Deployment
### Huggingface
We have released our models on Hugging Face:
- [emoberta-base](https://huggingface.co/tae898/emoberta-base)
- [emoberta-large](https://huggingface.co/tae898/emoberta-large)
They are based on [RoBERTa-base](https://huggingface.co/roberta-base) and [RoBERTa-large](https://huggingface.co/roberta-large), respectively, and were trained on [both the MELD and IEMOCAP datasets](utterance-ordered-MELD_IEMOCAP.json). Our deployed models are neither speaker-aware nor context-aware: they classify one utterance at a time (e.g., "I love you"), without speaker information or previous utterances.
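A minimal classification sketch using the `transformers` pipeline (the snippet and example utterance are assumptions based on the deployed models' task; `top_k=None` needs a recent `transformers` release):

```python
from transformers import pipeline

# Load emoberta-base as a text-classification pipeline;
# top_k=None returns a softmax score for every emotion class.
classifier = pipeline(
    "text-classification",
    model="tae898/emoberta-base",
    top_k=None,
)

scores = classifier("I love you")
# Some transformers versions nest the per-label list one level deeper
# for a single input string.
if scores and isinstance(scores[0], list):
    scores = scores[0]

for item in scores:
    print(item["label"], round(item["score"], 4))
```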
### Flask app
You can either run the Flask RESTful server app as a Docker container or directly as a Python script.
1. Running the app as a Docker container **(recommended)**.
There are four images. Take what you need:
- `docker run -it --rm -p 10006:10006 tae898/emoberta-base`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda`
- `docker run -it --rm -p 10006:10006 tae898/emoberta-large`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda`
1. Running the app in your Python environment:
This method is less recommended than the Docker one.
Run `pip install -r requirements-deploy.txt` first.<br>
[`app.py`](app.py) is a Flask RESTful server. The usage is as follows:
```console
app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE]
```
For example:
```sh
python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base
```
### Client
Once the app is running, you can send a text to the server. First install the necessary packages with `pip install -r requirements-client.txt`, and then run [client.py](client.py). The usage is as follows:
```console
client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT
```
For example:
```sh
python client.py --text "Emotion recognition is so cool\!"
```
will give you:
```json
{
"neutral": 0.0049800905,
"joy": 0.96399665,
"surprise": 0.018937444,
"anger": 0.0071516023,
"sadness": 0.002021492,
"disgust": 0.001495996,
"fear": 0.0014167271
}
```
## Troubleshooting
The best way to find and solve problems is to check the GitHub issues tab. If you can't find what you need, feel free to open an issue; we are quite responsive.
## Contributing
Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
1. Run `make style && make quality` in the root repo directory to ensure code quality.
1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
1. Push to the Branch (`git push origin feature/AmazingFeature`)
1. Open a Pull Request
## Cite our work
Check out the [paper](https://arxiv.org/abs/2108.12009).
```bibtex
@misc{kim2021emoberta,
title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa},
author={Taewoon Kim and Piek Vossen},
year={2021},
eprint={2108.12009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[](https://zenodo.org/badge/latestdoi/328375452)<br>
## Authors
- [Taewoon Kim](https://taewoonkim.com/)
## License
[MIT](https://choosealicense.com/licenses/mit/)
|
fabianrausch/german-financial-statements-bert
|
fabianrausch
| 2022-03-16T09:58:56Z | 173 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"de",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-05T14:26:11Z |
---
license: mit
language: de
---
# german-financial-statements-bert
This model is a fine-tuned version of [bert-base-german-cased](https://huggingface.co/bert-base-german-cased) using German financial statements.
It achieves the following results on the evaluation set:
- Loss: 1.2025
- Accuracy: 0.7376
- Perplexity: 3.3285
## Model description
Annual financial statements in Germany are published in the Federal Gazette and are freely accessible. The documents describe the entrepreneurial and in particular the financial situation of a company with reference to a reporting period. The german-financial-statements-bert model aims to provide a BERT model specifically for this domain.
## Training and evaluation data
The training was performed with 100,000 natural language sentences from annual financial statements. 50,000 of these sentences were taken unfiltered and randomly from 5,500 different financial statement documents, and another 50,000 were also taken randomly from 5,500 different financial statement documents, but this half was filtered so that only sentences referring to a financial entity were selected. Specifically, this means that the second half of the sentences contains an indicator for a reference to a financial entity (EUR, Euro, TEUR, โฌ, Tโฌ). The evaluation was carried out with 20,000 sentences of the same origin and distribution.
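A minimal fill-mask usage sketch (not part of the original card: the pipeline call and the German example sentence are assumptions based on the model's task; `bert-base-german-cased` derivatives use the `[MASK]` token):

```python
from transformers import pipeline

# Load the domain-adapted BERT as a fill-mask pipeline.
fill = pipeline(
    "fill-mask",
    model="fabianrausch/german-financial-statements-bert",
)

# "The annual financial statements were prepared in accordance
# with the provisions of the [MASK]."
preds = fill("Der Jahresabschluss wurde nach den Vorschriften des [MASK] aufgestellt.")
for p in preds:
    print(p["token_str"], round(p["score"], 4))
```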
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.3
- Tokenizers 0.11.6
|
navteca/nli-deberta-v3-xsmall
|
navteca
| 2022-03-16T09:49:34Z | 18 | 1 |
transformers
|
[
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"microsoft/deberta-v3-xsmall",
"zero-shot-classification",
"en",
"dataset:multi_nli",
"dataset:snli",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
zero-shot-classification
| 2022-03-16T09:37:56Z |
---
datasets:
- multi_nli
- snli
language: en
license: apache-2.0
metrics:
- accuracy
pipeline_tag: zero-shot-classification
tags:
- microsoft/deberta-v3-xsmall
---
# Cross-Encoder for Natural Language Inference
This model was trained using the [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. It is based on [microsoft/deberta-v3-xsmall](https://huggingface.co/microsoft/deberta-v3-xsmall).
## Training Data
The model was trained on the [SNLI](https://nlp.stanford.edu/projects/snli/) and [MultiNLI](https://cims.nyu.edu/~sbowman/multinli/) datasets. For a given sentence pair, it will output three scores corresponding to the labels: contradiction, entailment, neutral.
## Performance
- Accuracy on SNLI-test dataset: 91.64
- Accuracy on MNLI mismatched set: 87.77
For further evaluation results, see [SBERT.net - Pretrained Cross-Encoder](https://www.sbert.net/docs/pretrained_cross-encoders.html#nli).
## Usage
Pre-trained models can be used like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('cross-encoder/nli-deberta-v3-xsmall')
scores = model.predict([('A man is eating pizza', 'A man eats something'), ('A black race car starts up in front of a crowd of people.', 'A man is driving down a lonely road.')])
# Convert scores to labels
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(axis=1)]
```
## Usage with Transformers AutoModel
You can use the model also directly with Transformers library (without SentenceTransformers library):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
model = AutoModelForSequenceClassification.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')
tokenizer = AutoTokenizer.from_pretrained('cross-encoder/nli-deberta-v3-xsmall')
features = tokenizer(['A man is eating pizza', 'A black race car starts up in front of a crowd of people.'], ['A man eats something', 'A man is driving down a lonely road.'], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
label_mapping = ['contradiction', 'entailment', 'neutral']
labels = [label_mapping[score_max] for score_max in scores.argmax(dim=1)]
print(labels)
```
## Zero-Shot Classification
This model can also be used for zero-shot-classification:
```python
from transformers import pipeline
classifier = pipeline("zero-shot-classification", model='cross-encoder/nli-deberta-v3-xsmall')
sent = "Apple just announced the newest iPhone X"
candidate_labels = ["technology", "sports", "politics"]
res = classifier(sent, candidate_labels)
print(res)
```
|
navteca/ms-marco-MiniLM-L-6-v2
|
navteca
| 2022-03-16T09:36:49Z | 103,091 | 2 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"license:mit",
"region:us"
] |
text-classification
| 2022-03-16T09:26:53Z |
---
language: en
license: mit
pipeline_tag: text-classification
tags:
- sentence-transformers
---
# Cross-Encoder for MS Marco
The model can be used for information retrieval: given a query, score the query against all candidate passages (e.g., retrieved with Elasticsearch), then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Training Data
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
## Usage
Usage is easiest when you have [SentenceTransformers](https://www.sbert.net/) installed. Then you can use the pre-trained model like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2')])
```
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
datarpit/distilbert-base-uncased-finetuned-natural-questions
|
datarpit
| 2022-03-16T07:52:09Z | 91 | 3 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:natural_questions",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-08T20:12:53Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- natural_questions
model-index:
- name: distilbert-base-uncased-finetuned-natural-questions
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-natural-questions
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the natural_questions dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6267
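A minimal extractive QA sketch (not part of the auto-generated card: the pipeline call, question, and context are assumptions based on the model's task and tags):

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a question-answering pipeline.
qa = pipeline(
    "question-answering",
    model="datarpit/distilbert-base-uncased-finetuned-natural-questions",
)

# The pipeline extracts the answer span from the given context.
result = qa(
    question="Who wrote the novel?",
    context="The novel was written by George Orwell and published in 1949.",
)
print(result["answer"], round(result["score"], 4))
```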
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 40
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:------:|:---------------:|
| 2.0532 | 1.0 | 5104 | 0.2393 |
| 1.8912 | 2.0 | 10208 | 0.2284 |
| 1.7854 | 3.0 | 15312 | 0.2357 |
| 1.6856 | 4.0 | 20416 | 0.2487 |
| 1.5918 | 5.0 | 25520 | 0.2743 |
| 1.5067 | 6.0 | 30624 | 0.2586 |
| 1.4323 | 7.0 | 35728 | 0.2763 |
| 1.365 | 8.0 | 40832 | 0.2753 |
| 1.3162 | 9.0 | 45936 | 0.3200 |
| 1.281 | 10.0 | 51040 | 0.3127 |
| 1.308 | 11.0 | 57104 | 0.2947 |
| 1.241 | 12.0 | 62208 | 0.2941 |
| 1.1391 | 13.0 | 67312 | 0.3103 |
| 1.0334 | 14.0 | 72416 | 0.3694 |
| 0.9538 | 15.0 | 77520 | 0.3658 |
| 0.8749 | 16.0 | 82624 | 0.4009 |
| 0.8154 | 17.0 | 87728 | 0.3672 |
| 0.7533 | 18.0 | 92832 | 0.3675 |
| 0.7079 | 19.0 | 97936 | 0.4611 |
| 0.6658 | 20.0 | 103040 | 0.4222 |
| 0.595 | 21.0 | 108144 | 0.4095 |
| 0.5765 | 22.0 | 113248 | 0.4400 |
| 0.5259 | 23.0 | 118352 | 0.5109 |
| 0.4804 | 24.0 | 123456 | 0.4711 |
| 0.4389 | 25.0 | 128560 | 0.5072 |
| 0.4034 | 26.0 | 133664 | 0.5363 |
| 0.374 | 27.0 | 138768 | 0.5460 |
| 0.3434 | 28.0 | 143872 | 0.5627 |
| 0.3181 | 29.0 | 148976 | 0.5657 |
| 0.2971 | 30.0 | 154080 | 0.5819 |
| 0.275 | 31.0 | 159184 | 0.5649 |
| 0.2564 | 32.0 | 164288 | 0.6087 |
| 0.2431 | 33.0 | 169392 | 0.6137 |
| 0.2289 | 34.0 | 174496 | 0.6123 |
| 0.2151 | 35.0 | 179600 | 0.5979 |
| 0.2041 | 36.0 | 184704 | 0.6196 |
| 0.1922 | 37.0 | 189808 | 0.6191 |
| 0.1852 | 38.0 | 194912 | 0.6313 |
| 0.1718 | 39.0 | 200016 | 0.6234 |
| 0.1718 | 39.81 | 204160 | 0.6267 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ScandinavianMrT/gpt2_prefinetune_SARC_1epoch_withcontext
|
ScandinavianMrT
| 2022-03-16T07:23:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-16T06:24:23Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: gpt2_prefinetune_SARC_1epoch_withcontext
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2_prefinetune_SARC_1epoch_withcontext
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7899
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.8788 | 1.0 | 14028 | 3.7899 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
Ravikantcool2022/Ethereum.wiki
|
Ravikantcool2022
| 2022-03-16T05:08:13Z | 0 | 1 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-16T05:08:13Z |
---
license: apache-2.0
---
|
lijingxin/bert-base-uncased-issues-128
|
lijingxin
| 2022-03-16T03:19:04Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-15T15:32:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-issues-128
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0981 | 1.0 | 291 | 1.6917 |
| 1.6493 | 2.0 | 582 | 1.4357 |
| 1.4831 | 3.0 | 873 | 1.3923 |
| 1.3957 | 4.0 | 1164 | 1.4056 |
| 1.3339 | 5.0 | 1455 | 1.1944 |
| 1.2936 | 6.0 | 1746 | 1.2888 |
| 1.2458 | 7.0 | 2037 | 1.2715 |
| 1.2004 | 8.0 | 2328 | 1.1992 |
| 1.1785 | 9.0 | 2619 | 1.1726 |
| 1.1389 | 10.0 | 2910 | 1.2157 |
| 1.1313 | 11.0 | 3201 | 1.1977 |
| 1.0935 | 12.0 | 3492 | 1.1794 |
| 1.0826 | 13.0 | 3783 | 1.2260 |
| 1.0729 | 14.0 | 4074 | 1.1549 |
| 1.0599 | 15.0 | 4365 | 1.1269 |
| 1.0538 | 16.0 | 4656 | 1.2540 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.16.1
- Tokenizers 0.10.3
|
kSaluja/roberta-finetuned-ner-without-data-sort
|
kSaluja
| 2022-03-16T01:27:44Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-16T00:41:56Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-finetuned-ner-without-data-sort
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-ner-without-data-sort
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0420
- Precision: 0.9914
- Recall: 0.9909
- F1: 0.9912
- Accuracy: 0.9920
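The precision, recall, and F1 reported above are standard token-classification metrics. As a rough illustration (a generic token-level sketch, not the exact seqeval setup the Trainer uses), they can be computed from flat lists of gold and predicted labels, ignoring the `O` tag:

```python
def token_prf(gold, pred, outside="O"):
    # True positives: predicted label matches gold and is an entity tag.
    tp = sum(1 for g, p in zip(gold, pred) if p == g != outside)
    pred_pos = sum(1 for p in pred if p != outside)  # predicted entity tokens
    gold_pos = sum(1 for g in gold if g != outside)  # gold entity tokens
    precision = tp / pred_pos if pred_pos else 0.0
    recall = tp / gold_pos if gold_pos else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = ["B-ORG", "O", "B-PER", "I-PER", "O"]
pred = ["B-ORG", "O", "B-PER", "O", "O"]
p, r, f = token_prf(gold, pred)
```

Note that seqeval scores whole entity spans rather than single tokens, so its numbers can differ from this token-level version.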
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 213 | 0.1879 | 0.9378 | 0.9414 | 0.9396 | 0.9493 |
| No log | 2.0 | 426 | 0.1038 | 0.9725 | 0.9750 | 0.9737 | 0.9751 |
| 0.4424 | 3.0 | 639 | 0.0701 | 0.9861 | 0.9851 | 0.9856 | 0.9863 |
| 0.4424 | 4.0 | 852 | 0.0637 | 0.9882 | 0.9880 | 0.9881 | 0.9880 |
| 0.0675 | 5.0 | 1065 | 0.0546 | 0.9851 | 0.9878 | 0.9865 | 0.9879 |
| 0.0675 | 6.0 | 1278 | 0.0480 | 0.9894 | 0.9904 | 0.9899 | 0.9901 |
| 0.0675 | 7.0 | 1491 | 0.0473 | 0.9919 | 0.9904 | 0.9912 | 0.9911 |
| 0.0426 | 8.0 | 1704 | 0.0441 | 0.9921 | 0.9916 | 0.9919 | 0.9921 |
| 0.0426 | 9.0 | 1917 | 0.0426 | 0.9921 | 0.9916 | 0.9919 | 0.9922 |
| 0.033 | 10.0 | 2130 | 0.0420 | 0.9914 | 0.9909 | 0.9912 | 0.9920 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
sap-ai-research/RoBERTa-base-SCD-ACL2022
|
sap-ai-research
| 2022-03-16T00:41:41Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-15T23:32:07Z |
---
license: apache-2.0
---
|
golivaresm/roberta-base-bne-finetuned-amazon_reviews_multi
|
golivaresm
| 2022-03-16T00:34:07Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T23:34:51Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.93125
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2328
- Accuracy: 0.9313
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1985 | 1.0 | 1250 | 0.1730 | 0.9327 |
| 0.0982 | 2.0 | 2500 | 0.2328 | 0.9313 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
kSaluja/roberta-finetuned-ner
|
kSaluja
| 2022-03-16T00:00:41Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"token-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-15T23:20:13Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: roberta-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-ner
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1322
- Precision: 0.9772
- Recall: 0.9782
- F1: 0.9777
- Accuracy: 0.9767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 253 | 0.1694 | 0.9636 | 0.9555 | 0.9595 | 0.9617 |
| 0.4479 | 2.0 | 506 | 0.1374 | 0.9743 | 0.9762 | 0.9752 | 0.9743 |
| 0.4479 | 3.0 | 759 | 0.1322 | 0.9772 | 0.9782 | 0.9777 | 0.9767 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
AnnaR/literature_summarizer
|
AnnaR
| 2022-03-15T23:54:39Z | 5 | 0 |
transformers
|
[
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-15T23:47:38Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: AnnaR/literature_summarizer
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# AnnaR/literature_summarizer
This model is a fine-tuned version of [sshleifer/distilbart-xsum-1-1](https://huggingface.co/sshleifer/distilbart-xsum-1-1) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 3.2180
- Validation Loss: 4.7198
- Epoch: 10
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 5300, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.1}
- training_precision: float32
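The `PolynomialDecay` config above (with `power: 1.0` and `cycle: False`) reduces the learning rate from 5.6e-05 toward 0.0 over 5300 steps; with power 1.0 the decay is simply linear. A small sketch, assuming the standard polynomial-decay formula:

```python
def polynomial_decay(step, initial_lr=5.6e-05, decay_steps=5300,
                     end_lr=0.0, power=1.0):
    # With cycle=False the step is clamped to decay_steps, so the
    # learning rate stays at end_lr once decay is finished.
    step = min(step, decay_steps)
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr
```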
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.6694 | 5.0234 | 0 |
| 4.9191 | 4.8161 | 1 |
| 4.5770 | 4.7170 | 2 |
| 4.3268 | 4.6571 | 3 |
| 4.1073 | 4.6296 | 4 |
| 3.9225 | 4.6279 | 5 |
| 3.7564 | 4.6288 | 6 |
| 3.5989 | 4.6731 | 7 |
| 3.4611 | 4.6767 | 8 |
| 3.3356 | 4.6934 | 9 |
| 3.2180 | 4.7198 | 10 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.8.0
- Datasets 2.0.0
- Tokenizers 0.11.6
|
krinal214/bert-3lang
|
krinal214
| 2022-03-15T23:30:47Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:tydiqa",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-15T23:17:42Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- tydiqa
model-index:
- name: bert-3lang
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-3lang
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the tydiqa dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6422
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
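The Adam optimizer above is parameterized by `betas=(0.9, 0.999)` and `epsilon=1e-08`. As a reminder of what those control, here is a minimal single-parameter Adam update sketch (illustrative only, not the Trainer's implementation):

```python
def adam_step(param, grad, m, v, t, lr=2e-05,
              beta1=0.9, beta2=0.999, eps=1e-08):
    # Exponential moving averages of the gradient and squared gradient.
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    # Bias-correct the moments, then step; eps avoids division by zero.
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=0.5, m=m, v=v, t=1)
```

On the first step the bias correction makes `m_hat / sqrt(v_hat)` close to the gradient's sign, so the parameter moves by roughly one learning rate.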
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 1.8161 | 1.0 | 905 | 0.6422 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.9.1
- Datasets 2.0.0
- Tokenizers 0.10.3
|
responsibility-framing/predict-perception-xlmr-focus-concept
|
responsibility-framing
| 2022-03-15T23:28:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T23:23:34Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-focus-concept
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-focus-concept
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8296
- Rmse: 1.0302
- Rmse Focus::a Su un concetto astratto o un'emozione: 1.0302
- Mae: 0.7515
- Mae Focus::a Su un concetto astratto o un'emozione: 0.7515
- R2: 0.1804
- R2 Focus::a Su un concetto astratto o un'emozione: 0.1804
- Cos: 0.4783
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3415
- Rsa: nan
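The RMSE, MAE, and R2 figures above are standard regression metrics over the model's scalar predictions. A short sketch of how they are computed (generic formulas, assuming predictions and targets as plain lists):

```python
def regression_metrics(y_true, y_pred):
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n          # mean absolute error
    rmse = (sum(e * e for e in errors) / n) ** 0.5  # root mean squared error
    # R2: 1 minus residual sum of squares over total sum of squares.
    mean = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - mean) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot
    return rmse, mae, r2

rmse, mae, r2 = regression_metrics([1.0, 2.0, 3.0], [1.5, 2.0, 2.5])
```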
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un concetto astratto o un'emozione | Mae | Mae Focus::a Su un concetto astratto o un'emozione | R2 | R2 Focus::a Su un concetto astratto o un'emozione | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------------------------------:|:------:|:--------------------------------------------------:|:-------:|:-------------------------------------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0355 | 1.0 | 15 | 0.9822 | 1.1209 | 1.1209 | 0.9649 | 0.9649 | 0.0296 | 0.0296 | 0.2174 | 0.0 | 0.5 | 0.3706 | nan |
| 1.0083 | 2.0 | 30 | 1.1378 | 1.2065 | 1.2065 | 0.9954 | 0.9954 | -0.1241 | -0.1241 | 0.2174 | 0.0 | 0.5 | 0.3309 | nan |
| 0.9823 | 3.0 | 45 | 0.9669 | 1.1121 | 1.1121 | 0.9315 | 0.9315 | 0.0448 | 0.0448 | 0.3043 | 0.0 | 0.5 | 0.3810 | nan |
| 0.9468 | 4.0 | 60 | 0.8856 | 1.0644 | 1.0644 | 0.8584 | 0.8584 | 0.1251 | 0.1251 | 0.3913 | 0.0 | 0.5 | 0.3803 | nan |
| 0.9294 | 5.0 | 75 | 0.8136 | 1.0202 | 1.0202 | 0.8396 | 0.8396 | 0.1963 | 0.1963 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan |
| 0.881 | 6.0 | 90 | 0.7634 | 0.9882 | 0.9882 | 0.8192 | 0.8192 | 0.2458 | 0.2458 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan |
| 0.7589 | 7.0 | 105 | 0.8139 | 1.0204 | 1.0204 | 0.8136 | 0.8136 | 0.1960 | 0.1960 | 0.5652 | 0.0 | 0.5 | 0.4120 | nan |
| 0.7217 | 8.0 | 120 | 0.9105 | 1.0792 | 1.0792 | 0.9394 | 0.9394 | 0.1005 | 0.1005 | 0.3913 | 0.0 | 0.5 | 0.4108 | nan |
| 0.8059 | 9.0 | 135 | 1.0322 | 1.1491 | 1.1491 | 0.9115 | 0.9115 | -0.0197 | -0.0197 | 0.5652 | 0.0 | 0.5 | 0.3738 | nan |
| 0.6483 | 10.0 | 150 | 0.7989 | 1.0109 | 1.0109 | 0.7899 | 0.7899 | 0.2108 | 0.2108 | 0.6522 | 0.0 | 0.5 | 0.4727 | nan |
| 0.5725 | 11.0 | 165 | 0.7175 | 0.9581 | 0.9581 | 0.7011 | 0.7011 | 0.2912 | 0.2912 | 0.5652 | 0.0 | 0.5 | 0.3738 | nan |
| 0.5091 | 12.0 | 180 | 0.8818 | 1.0621 | 1.0621 | 0.8775 | 0.8775 | 0.1289 | 0.1289 | 0.5652 | 0.0 | 0.5 | 0.4063 | nan |
| 0.4526 | 13.0 | 195 | 0.8451 | 1.0398 | 1.0398 | 0.7990 | 0.7990 | 0.1651 | 0.1651 | 0.5652 | 0.0 | 0.5 | 0.4063 | nan |
| 0.361 | 14.0 | 210 | 0.8632 | 1.0508 | 1.0508 | 0.8124 | 0.8124 | 0.1472 | 0.1472 | 0.4783 | 0.0 | 0.5 | 0.3699 | nan |
| 0.3582 | 15.0 | 225 | 0.8461 | 1.0404 | 1.0404 | 0.7923 | 0.7923 | 0.1641 | 0.1641 | 0.3913 | 0.0 | 0.5 | 0.3672 | nan |
| 0.2945 | 16.0 | 240 | 0.9142 | 1.0814 | 1.0814 | 0.8125 | 0.8125 | 0.0968 | 0.0968 | 0.3913 | 0.0 | 0.5 | 0.3672 | nan |
| 0.2891 | 17.0 | 255 | 0.8377 | 1.0352 | 1.0352 | 0.7718 | 0.7718 | 0.1724 | 0.1724 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.2569 | 18.0 | 270 | 0.8106 | 1.0183 | 1.0183 | 0.7481 | 0.7481 | 0.1992 | 0.1992 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.2583 | 19.0 | 285 | 0.8239 | 1.0266 | 1.0266 | 0.7597 | 0.7597 | 0.1861 | 0.1861 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.2217 | 20.0 | 300 | 0.8485 | 1.0419 | 1.0419 | 0.7663 | 0.7663 | 0.1617 | 0.1617 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1927 | 21.0 | 315 | 0.8304 | 1.0307 | 1.0307 | 0.7536 | 0.7536 | 0.1797 | 0.1797 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.176 | 22.0 | 330 | 0.8321 | 1.0317 | 1.0317 | 0.7539 | 0.7539 | 0.1780 | 0.1780 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1639 | 23.0 | 345 | 0.7914 | 1.0062 | 1.0062 | 0.7460 | 0.7460 | 0.2182 | 0.2182 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.177 | 24.0 | 360 | 0.8619 | 1.0500 | 1.0500 | 0.7725 | 0.7725 | 0.1486 | 0.1486 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1473 | 25.0 | 375 | 0.8101 | 1.0180 | 1.0180 | 0.7587 | 0.7587 | 0.1997 | 0.1997 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.181 | 26.0 | 390 | 0.8038 | 1.0141 | 1.0141 | 0.7433 | 0.7433 | 0.2059 | 0.2059 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1679 | 27.0 | 405 | 0.7982 | 1.0105 | 1.0105 | 0.7248 | 0.7248 | 0.2115 | 0.2115 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1529 | 28.0 | 420 | 0.8282 | 1.0293 | 1.0293 | 0.7454 | 0.7454 | 0.1818 | 0.1818 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1822 | 29.0 | 435 | 0.8310 | 1.0311 | 1.0311 | 0.7512 | 0.7512 | 0.1790 | 0.1790 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
| 0.1442 | 30.0 | 450 | 0.8296 | 1.0302 | 1.0302 | 0.7515 | 0.7515 | 0.1804 | 0.1804 | 0.4783 | 0.0 | 0.5 | 0.3415 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-focus-object
|
responsibility-framing
| 2022-03-15T23:23:19Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T23:19:04Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-focus-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-focus-object
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1927
- Rmse: 0.5495
- Rmse Focus::a Su un oggetto: 0.5495
- Mae: 0.4174
- Mae Focus::a Su un oggetto: 0.4174
- R2: 0.5721
- R2 Focus::a Su un oggetto: 0.5721
- Cos: 0.5652
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.5518
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Su un oggetto | Mae | Mae Focus::a Su un oggetto | R2 | R2 Focus::a Su un oggetto | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:---------------------------:|:------:|:--------------------------:|:-------:|:-------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0316 | 1.0 | 15 | 0.6428 | 1.0035 | 1.0035 | 0.8806 | 0.8806 | -0.4272 | -0.4272 | -0.4783 | 0.0 | 0.5 | 0.5302 | nan |
| 1.0005 | 2.0 | 30 | 0.4564 | 0.8456 | 0.8456 | 0.7078 | 0.7078 | -0.0134 | -0.0134 | 0.4783 | 0.0 | 0.5 | 0.4440 | nan |
| 0.9519 | 3.0 | 45 | 0.4151 | 0.8063 | 0.8063 | 0.6797 | 0.6797 | 0.0784 | 0.0784 | 0.1304 | 0.0 | 0.5 | 0.4888 | nan |
| 0.92 | 4.0 | 60 | 0.3982 | 0.7898 | 0.7898 | 0.6516 | 0.6516 | 0.1159 | 0.1159 | 0.2174 | 0.0 | 0.5 | 0.5036 | nan |
| 0.8454 | 5.0 | 75 | 0.2739 | 0.6550 | 0.6550 | 0.5292 | 0.5292 | 0.3919 | 0.3919 | 0.6522 | 0.0 | 0.5 | 0.4160 | nan |
| 0.7247 | 6.0 | 90 | 0.2413 | 0.6148 | 0.6148 | 0.5347 | 0.5347 | 0.4642 | 0.4642 | 0.4783 | 0.0 | 0.5 | 0.3453 | nan |
| 0.6055 | 7.0 | 105 | 0.3109 | 0.6978 | 0.6978 | 0.6115 | 0.6115 | 0.3098 | 0.3098 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan |
| 0.5411 | 8.0 | 120 | 0.3932 | 0.7848 | 0.7848 | 0.6712 | 0.6712 | 0.1271 | 0.1271 | 0.4783 | 0.0 | 0.5 | 0.4154 | nan |
| 0.4784 | 9.0 | 135 | 0.1316 | 0.4540 | 0.4540 | 0.3750 | 0.3750 | 0.7079 | 0.7079 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.4039 | 10.0 | 150 | 0.2219 | 0.5896 | 0.5896 | 0.4954 | 0.4954 | 0.5074 | 0.5074 | 0.5652 | 0.0 | 0.5 | 0.4838 | nan |
| 0.3415 | 11.0 | 165 | 0.1935 | 0.5505 | 0.5505 | 0.4443 | 0.4443 | 0.5704 | 0.5704 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.3369 | 12.0 | 180 | 0.2118 | 0.5761 | 0.5761 | 0.4554 | 0.4554 | 0.5296 | 0.5296 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.3083 | 13.0 | 195 | 0.1928 | 0.5496 | 0.5496 | 0.4368 | 0.4368 | 0.5718 | 0.5718 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2678 | 14.0 | 210 | 0.2205 | 0.5877 | 0.5877 | 0.4472 | 0.4472 | 0.5105 | 0.5105 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2199 | 15.0 | 225 | 0.2118 | 0.5760 | 0.5760 | 0.4689 | 0.4689 | 0.5297 | 0.5297 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2238 | 16.0 | 240 | 0.2461 | 0.6209 | 0.6209 | 0.5047 | 0.5047 | 0.4537 | 0.4537 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.2233 | 17.0 | 255 | 0.2307 | 0.6011 | 0.6011 | 0.4618 | 0.4618 | 0.4879 | 0.4879 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1903 | 18.0 | 270 | 0.2207 | 0.5880 | 0.5880 | 0.4432 | 0.4432 | 0.5100 | 0.5100 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1714 | 19.0 | 285 | 0.2146 | 0.5798 | 0.5798 | 0.4368 | 0.4368 | 0.5236 | 0.5236 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1759 | 20.0 | 300 | 0.1745 | 0.5228 | 0.5228 | 0.4152 | 0.4152 | 0.6126 | 0.6126 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1505 | 21.0 | 315 | 0.1944 | 0.5519 | 0.5519 | 0.4170 | 0.4170 | 0.5684 | 0.5684 | 0.5652 | 0.0 | 0.5 | 0.6247 | nan |
| 0.1467 | 22.0 | 330 | 0.1802 | 0.5313 | 0.5313 | 0.3910 | 0.3910 | 0.5999 | 0.5999 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1441 | 23.0 | 345 | 0.2360 | 0.6081 | 0.6081 | 0.4755 | 0.4755 | 0.4760 | 0.4760 | 0.4783 | 0.0 | 0.5 | 0.4938 | nan |
| 0.1553 | 24.0 | 360 | 0.2129 | 0.5774 | 0.5774 | 0.4539 | 0.4539 | 0.5274 | 0.5274 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1163 | 25.0 | 375 | 0.1780 | 0.5281 | 0.5281 | 0.3952 | 0.3952 | 0.6048 | 0.6048 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1266 | 26.0 | 390 | 0.2163 | 0.5821 | 0.5821 | 0.4569 | 0.4569 | 0.5198 | 0.5198 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1416 | 27.0 | 405 | 0.1829 | 0.5352 | 0.5352 | 0.4082 | 0.4082 | 0.5939 | 0.5939 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1576 | 28.0 | 420 | 0.1930 | 0.5498 | 0.5498 | 0.4126 | 0.4126 | 0.5716 | 0.5716 | 0.6522 | 0.0 | 0.5 | 0.6622 | nan |
| 0.118 | 29.0 | 435 | 0.2070 | 0.5694 | 0.5694 | 0.4378 | 0.4378 | 0.5405 | 0.5405 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
| 0.1179 | 30.0 | 450 | 0.1927 | 0.5495 | 0.5495 | 0.4174 | 0.4174 | 0.5721 | 0.5721 | 0.5652 | 0.0 | 0.5 | 0.5518 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
kSaluja/bert-finetuned-ner
|
kSaluja
| 2022-03-15T23:18:41Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-15T22:50:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-finetuned-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1555
- Precision: 0.9681
- Recall: 0.9670
- F1: 0.9675
- Accuracy: 0.9687
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 253 | 0.1972 | 0.9467 | 0.9408 | 0.9437 | 0.9511 |
| 0.3572 | 2.0 | 506 | 0.1626 | 0.9677 | 0.9614 | 0.9645 | 0.9661 |
| 0.3572 | 3.0 | 759 | 0.1555 | 0.9681 | 0.9670 | 0.9675 | 0.9687 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
|
responsibility-framing/predict-perception-xlmr-focus-assassin
|
responsibility-framing
| 2022-03-15T23:13:17Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T23:08:52Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-focus-assassin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-focus-assassin
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3264
- Rmse: 0.9437
- Rmse Focus::a Sull'assassino: 0.9437
- Mae: 0.7093
- Mae Focus::a Sull'assassino: 0.7093
- R2: 0.6145
- R2 Focus::a Sull'assassino: 0.6145
- Cos: 0.7391
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.6131
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Focus::a Sull'assassino | Mae | Mae Focus::a Sull'assassino | R2 | R2 Focus::a Sull'assassino | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:----------------------------:|:------:|:---------------------------:|:-------:|:--------------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0403 | 1.0 | 15 | 1.1576 | 1.7771 | 1.7771 | 1.6028 | 1.6028 | -0.3670 | -0.3670 | -0.2174 | 0.0 | 0.5 | 0.2379 | nan |
| 0.9818 | 2.0 | 30 | 0.8916 | 1.5596 | 1.5596 | 1.4136 | 1.4136 | -0.0529 | -0.0529 | 0.3913 | 0.0 | 0.5 | 0.3793 | nan |
| 0.9276 | 3.0 | 45 | 0.9277 | 1.5909 | 1.5909 | 1.4560 | 1.4560 | -0.0955 | -0.0955 | 0.3913 | 0.0 | 0.5 | 0.3742 | nan |
| 0.8395 | 4.0 | 60 | 0.7958 | 1.4734 | 1.4734 | 1.3032 | 1.3032 | 0.0603 | 0.0603 | 0.5652 | 0.0 | 0.5 | 0.4598 | nan |
| 0.7587 | 5.0 | 75 | 0.4647 | 1.1259 | 1.1259 | 0.9316 | 0.9316 | 0.4513 | 0.4513 | 0.6522 | 0.0 | 0.5 | 0.5087 | nan |
| 0.696 | 6.0 | 90 | 0.5368 | 1.2101 | 1.2101 | 1.0847 | 1.0847 | 0.3661 | 0.3661 | 0.7391 | 0.0 | 0.5 | 0.5302 | nan |
| 0.548 | 7.0 | 105 | 0.3110 | 0.9211 | 0.9211 | 0.7896 | 0.7896 | 0.6328 | 0.6328 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan |
| 0.4371 | 8.0 | 120 | 0.3392 | 0.9619 | 0.9619 | 0.8132 | 0.8132 | 0.5995 | 0.5995 | 0.6522 | 0.0 | 0.5 | 0.5261 | nan |
| 0.355 | 9.0 | 135 | 0.3938 | 1.0366 | 1.0366 | 0.8153 | 0.8153 | 0.5349 | 0.5349 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2919 | 10.0 | 150 | 0.3484 | 0.9749 | 0.9749 | 0.7487 | 0.7487 | 0.5886 | 0.5886 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2595 | 11.0 | 165 | 0.2812 | 0.8759 | 0.8759 | 0.6265 | 0.6265 | 0.6679 | 0.6679 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.2368 | 12.0 | 180 | 0.2534 | 0.8314 | 0.8314 | 0.6402 | 0.6402 | 0.7008 | 0.7008 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.227 | 13.0 | 195 | 0.2878 | 0.8861 | 0.8861 | 0.6769 | 0.6769 | 0.6601 | 0.6601 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.1979 | 14.0 | 210 | 0.2405 | 0.8100 | 0.8100 | 0.6113 | 0.6113 | 0.7160 | 0.7160 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.1622 | 15.0 | 225 | 0.2575 | 0.8382 | 0.8382 | 0.6017 | 0.6017 | 0.6959 | 0.6959 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1575 | 16.0 | 240 | 0.2945 | 0.8963 | 0.8963 | 0.6741 | 0.6741 | 0.6523 | 0.6523 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1479 | 17.0 | 255 | 0.3563 | 0.9859 | 0.9859 | 0.7367 | 0.7367 | 0.5792 | 0.5792 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1269 | 18.0 | 270 | 0.2806 | 0.8750 | 0.8750 | 0.6665 | 0.6665 | 0.6686 | 0.6686 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1257 | 19.0 | 285 | 0.3267 | 0.9441 | 0.9441 | 0.6739 | 0.6739 | 0.6142 | 0.6142 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.134 | 20.0 | 300 | 0.3780 | 1.0155 | 1.0155 | 0.7331 | 0.7331 | 0.5536 | 0.5536 | 0.7391 | 0.0 | 0.5 | 0.5302 | nan |
| 0.1171 | 21.0 | 315 | 0.3890 | 1.0301 | 1.0301 | 0.7444 | 0.7444 | 0.5406 | 0.5406 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0934 | 22.0 | 330 | 0.3131 | 0.9242 | 0.9242 | 0.6923 | 0.6923 | 0.6303 | 0.6303 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1112 | 23.0 | 345 | 0.2912 | 0.8913 | 0.8913 | 0.6610 | 0.6610 | 0.6561 | 0.6561 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.1038 | 24.0 | 360 | 0.3109 | 0.9209 | 0.9209 | 0.7019 | 0.7019 | 0.6329 | 0.6329 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.085 | 25.0 | 375 | 0.3469 | 0.9728 | 0.9728 | 0.7383 | 0.7383 | 0.5904 | 0.5904 | 0.8261 | 0.0 | 0.5 | 0.6622 | nan |
| 0.0843 | 26.0 | 390 | 0.3017 | 0.9073 | 0.9073 | 0.6848 | 0.6848 | 0.6437 | 0.6437 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.093 | 27.0 | 405 | 0.3269 | 0.9443 | 0.9443 | 0.7042 | 0.7042 | 0.6140 | 0.6140 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0846 | 28.0 | 420 | 0.3161 | 0.9286 | 0.9286 | 0.6937 | 0.6937 | 0.6267 | 0.6267 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0764 | 29.0 | 435 | 0.3244 | 0.9408 | 0.9408 | 0.7079 | 0.7079 | 0.6169 | 0.6169 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
| 0.0697 | 30.0 | 450 | 0.3264 | 0.9437 | 0.9437 | 0.7093 | 0.7093 | 0.6145 | 0.6145 | 0.7391 | 0.0 | 0.5 | 0.6131 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-blame-object
|
responsibility-framing
| 2022-03-15T22:42:55Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T22:38:38Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-blame-object
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-blame-object
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7219
- Rmse: 0.6215
- Rmse Blame::a Un oggetto: 0.6215
- Mae: 0.4130
- Mae Blame::a Un oggetto: 0.4130
- R2: 0.1200
- R2 Blame::a Un oggetto: 0.1200
- Cos: 0.3043
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.4335
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a Un oggetto | Mae | Mae Blame::a Un oggetto | R2 | R2 Blame::a Un oggetto | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------:|:------:|:-----------------------:|:-------:|:----------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0279 | 1.0 | 15 | 0.8483 | 0.6737 | 0.6737 | 0.4761 | 0.4761 | -0.0341 | -0.0341 | -0.3043 | 0.0 | 0.5 | 0.5507 | nan |
| 1.0676 | 2.0 | 30 | 0.7749 | 0.6439 | 0.6439 | 0.4291 | 0.4291 | 0.0554 | 0.0554 | 0.0435 | 0.0 | 0.5 | 0.2614 | nan |
| 0.9563 | 3.0 | 45 | 0.7765 | 0.6446 | 0.6446 | 0.4349 | 0.4349 | 0.0535 | 0.0535 | -0.0435 | 0.0 | 0.5 | 0.4515 | nan |
| 0.9622 | 4.0 | 60 | 0.7443 | 0.6311 | 0.6311 | 0.4061 | 0.4061 | 0.0927 | 0.0927 | 0.1304 | 0.0 | 0.5 | 0.2933 | nan |
| 0.948 | 5.0 | 75 | 0.8071 | 0.6571 | 0.6571 | 0.3817 | 0.3817 | 0.0162 | 0.0162 | 0.3043 | 0.0 | 0.5 | 0.4207 | nan |
| 0.9532 | 6.0 | 90 | 0.8007 | 0.6546 | 0.6546 | 0.4585 | 0.4585 | 0.0239 | 0.0239 | -0.0435 | 0.0 | 0.5 | 0.5507 | nan |
| 0.9101 | 7.0 | 105 | 0.7126 | 0.6175 | 0.6175 | 0.3649 | 0.3649 | 0.1313 | 0.1313 | 0.4783 | 0.0 | 0.5 | 0.6012 | nan |
| 0.8369 | 8.0 | 120 | 0.7194 | 0.6204 | 0.6204 | 0.3896 | 0.3896 | 0.1231 | 0.1231 | 0.3913 | 0.0 | 0.5 | 0.3494 | nan |
| 0.8062 | 9.0 | 135 | 0.7157 | 0.6188 | 0.6188 | 0.4192 | 0.4192 | 0.1275 | 0.1275 | 0.0435 | 0.0 | 0.5 | 0.3182 | nan |
| 0.7344 | 10.0 | 150 | 0.7161 | 0.6190 | 0.6190 | 0.3612 | 0.3612 | 0.1270 | 0.1270 | 0.3043 | 0.0 | 0.5 | 0.6035 | nan |
| 0.7439 | 11.0 | 165 | 0.5894 | 0.5616 | 0.5616 | 0.3723 | 0.3723 | 0.2816 | 0.2816 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan |
| 0.6241 | 12.0 | 180 | 0.7087 | 0.6158 | 0.6158 | 0.3972 | 0.3972 | 0.1361 | 0.1361 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan |
| 0.6123 | 13.0 | 195 | 0.6318 | 0.5814 | 0.5814 | 0.3673 | 0.3673 | 0.2298 | 0.2298 | 0.3913 | 0.0 | 0.5 | 0.4413 | nan |
| 0.5364 | 14.0 | 210 | 0.6504 | 0.5899 | 0.5899 | 0.3674 | 0.3674 | 0.2072 | 0.2072 | 0.3043 | 0.0 | 0.5 | 0.3846 | nan |
| 0.5586 | 15.0 | 225 | 0.7151 | 0.6186 | 0.6186 | 0.3850 | 0.3850 | 0.1283 | 0.1283 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.5133 | 16.0 | 240 | 0.5572 | 0.5460 | 0.5460 | 0.3540 | 0.3540 | 0.3208 | 0.3208 | 0.4783 | 0.0 | 0.5 | 0.5314 | nan |
| 0.4193 | 17.0 | 255 | 0.6047 | 0.5688 | 0.5688 | 0.3710 | 0.3710 | 0.2629 | 0.2629 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3504 | 18.0 | 270 | 0.6103 | 0.5714 | 0.5714 | 0.3687 | 0.3687 | 0.2561 | 0.2561 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3328 | 19.0 | 285 | 0.6181 | 0.5751 | 0.5751 | 0.3915 | 0.3915 | 0.2466 | 0.2466 | 0.4783 | 0.0 | 0.5 | 0.5314 | nan |
| 0.3276 | 20.0 | 300 | 0.6334 | 0.5822 | 0.5822 | 0.3612 | 0.3612 | 0.2279 | 0.2279 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3271 | 21.0 | 315 | 0.6200 | 0.5760 | 0.5760 | 0.3827 | 0.3827 | 0.2442 | 0.2442 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.3139 | 22.0 | 330 | 0.6332 | 0.5821 | 0.5821 | 0.3723 | 0.3723 | 0.2281 | 0.2281 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.2872 | 23.0 | 345 | 0.6694 | 0.5985 | 0.5985 | 0.3966 | 0.3966 | 0.1840 | 0.1840 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3617 | 24.0 | 360 | 0.7022 | 0.6130 | 0.6130 | 0.4061 | 0.4061 | 0.1440 | 0.1440 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3227 | 25.0 | 375 | 0.7364 | 0.6277 | 0.6277 | 0.4205 | 0.4205 | 0.1024 | 0.1024 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.256 | 26.0 | 390 | 0.6938 | 0.6093 | 0.6093 | 0.3833 | 0.3833 | 0.1543 | 0.1543 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.2605 | 27.0 | 405 | 0.7221 | 0.6216 | 0.6216 | 0.4036 | 0.4036 | 0.1198 | 0.1198 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
| 0.2558 | 28.0 | 420 | 0.6959 | 0.6102 | 0.6102 | 0.3859 | 0.3859 | 0.1518 | 0.1518 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.2403 | 29.0 | 435 | 0.7152 | 0.6186 | 0.6186 | 0.4088 | 0.4088 | 0.1281 | 0.1281 | 0.3913 | 0.0 | 0.5 | 0.4924 | nan |
| 0.3263 | 30.0 | 450 | 0.7219 | 0.6215 | 0.6215 | 0.4130 | 0.4130 | 0.1200 | 0.1200 | 0.3043 | 0.0 | 0.5 | 0.4335 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-blame-victim
|
responsibility-framing
| 2022-03-15T22:38:23Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T22:33:07Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-blame-victim
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-blame-victim
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1098
- Rmse: 0.6801
- Rmse Blame::a La vittima: 0.6801
- Mae: 0.5617
- Mae Blame::a La vittima: 0.5617
- R2: -1.5910
- R2 Blame::a La vittima: -1.5910
- Cos: -0.1304
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.3333
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
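The card leaves this section empty. As a sketch, the checkpoint can presumably be loaded as a single-output regression model — the Rmse/Mae/R2 metrics above imply one numeric target — with everything beyond the model id (helper name, example sentence) being illustrative:

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

MODEL_ID = "responsibility-framing/predict-perception-xlmr-blame-victim"

def predict_blame(texts, model_id=MODEL_ID):
    """Return one raw regression score per input text."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSequenceClassification.from_pretrained(model_id)
    model.eval()
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits  # shape (batch_size, 1) for a regression head
    return logits.squeeze(-1).tolist()

# Example call (downloads the checkpoint on first use); the sentence is illustrative:
# predict_blame(["Una donna è stata uccisa dal marito."])
```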
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a La vittima | Mae | Mae Blame::a La vittima | R2 | R2 Blame::a La vittima | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------------------------:|:------:|:-----------------------:|:-------:|:----------------------:|:-------:|:----:|:----:|:---------:|:---:|
| 1.0422 | 1.0 | 15 | 0.4952 | 0.4542 | 0.4542 | 0.4095 | 0.4095 | -0.1560 | -0.1560 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.0434 | 2.0 | 30 | 0.4851 | 0.4496 | 0.4496 | 0.4054 | 0.4054 | -0.1324 | -0.1324 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.038 | 3.0 | 45 | 0.4513 | 0.4337 | 0.4337 | 0.3885 | 0.3885 | -0.0536 | -0.0536 | -0.1304 | 0.0 | 0.5 | 0.2971 | nan |
| 1.0151 | 4.0 | 60 | 0.4395 | 0.4280 | 0.4280 | 0.3840 | 0.3840 | -0.0262 | -0.0262 | -0.1304 | 0.0 | 0.5 | 0.2715 | nan |
| 0.9727 | 5.0 | 75 | 0.4490 | 0.4325 | 0.4325 | 0.3811 | 0.3811 | -0.0482 | -0.0482 | 0.2174 | 0.0 | 0.5 | 0.3338 | nan |
| 0.9733 | 6.0 | 90 | 0.4540 | 0.4349 | 0.4349 | 0.3860 | 0.3860 | -0.0598 | -0.0598 | -0.2174 | 0.0 | 0.5 | 0.3248 | nan |
| 0.9396 | 7.0 | 105 | 0.4501 | 0.4331 | 0.4331 | 0.3849 | 0.3849 | -0.0508 | -0.0508 | 0.0435 | 0.0 | 0.5 | 0.2609 | nan |
| 0.8759 | 8.0 | 120 | 0.4597 | 0.4377 | 0.4377 | 0.3849 | 0.3849 | -0.0731 | -0.0731 | 0.3043 | 0.0 | 0.5 | 0.3898 | nan |
| 0.8768 | 9.0 | 135 | 0.4575 | 0.4366 | 0.4366 | 0.3784 | 0.3784 | -0.0680 | -0.0680 | 0.4783 | 0.0 | 0.5 | 0.4615 | nan |
| 0.8312 | 10.0 | 150 | 0.5363 | 0.4727 | 0.4727 | 0.4071 | 0.4071 | -0.2520 | -0.2520 | -0.0435 | 0.0 | 0.5 | 0.2733 | nan |
| 0.7296 | 11.0 | 165 | 0.5291 | 0.4696 | 0.4696 | 0.4057 | 0.4057 | -0.2353 | -0.2353 | 0.3043 | 0.0 | 0.5 | 0.3898 | nan |
| 0.7941 | 12.0 | 180 | 0.5319 | 0.4708 | 0.4708 | 0.4047 | 0.4047 | -0.2417 | -0.2417 | 0.1304 | 0.0 | 0.5 | 0.3381 | nan |
| 0.6486 | 13.0 | 195 | 0.6787 | 0.5318 | 0.5318 | 0.4516 | 0.4516 | -0.5846 | -0.5846 | 0.1304 | 0.0 | 0.5 | 0.3381 | nan |
| 0.6241 | 14.0 | 210 | 1.0146 | 0.6502 | 0.6502 | 0.5580 | 0.5580 | -1.3687 | -1.3687 | -0.1304 | 0.0 | 0.5 | 0.3509 | nan |
| 0.5868 | 15.0 | 225 | 0.7164 | 0.5464 | 0.5464 | 0.4682 | 0.4682 | -0.6725 | -0.6725 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.5305 | 16.0 | 240 | 0.9064 | 0.6146 | 0.6146 | 0.5173 | 0.5173 | -1.1161 | -1.1161 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.495 | 17.0 | 255 | 1.3860 | 0.7600 | 0.7600 | 0.6433 | 0.6433 | -2.2358 | -2.2358 | -0.0435 | 0.0 | 0.5 | 0.2935 | nan |
| 0.566 | 18.0 | 270 | 0.7618 | 0.5634 | 0.5634 | 0.4730 | 0.4730 | -0.7785 | -0.7785 | 0.0435 | 0.0 | 0.5 | 0.3225 | nan |
| 0.4305 | 19.0 | 285 | 0.8849 | 0.6072 | 0.6072 | 0.5048 | 0.5048 | -1.0659 | -1.0659 | -0.0435 | 0.0 | 0.5 | 0.3333 | nan |
| 0.5108 | 20.0 | 300 | 0.7376 | 0.5544 | 0.5544 | 0.4716 | 0.4716 | -0.7220 | -0.7220 | 0.0435 | 0.0 | 0.5 | 0.3225 | nan |
| 0.44 | 21.0 | 315 | 1.1611 | 0.6956 | 0.6956 | 0.5921 | 0.5921 | -1.7108 | -1.7108 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
| 0.395 | 22.0 | 330 | 1.3004 | 0.7361 | 0.7361 | 0.6078 | 0.6078 | -2.0360 | -2.0360 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3945 | 23.0 | 345 | 0.9376 | 0.6251 | 0.6251 | 0.5272 | 0.5272 | -1.1890 | -1.1890 | -0.2174 | 0.0 | 0.5 | 0.3188 | nan |
| 0.3093 | 24.0 | 360 | 1.3586 | 0.7524 | 0.7524 | 0.6219 | 0.6219 | -2.1719 | -2.1719 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.2676 | 25.0 | 375 | 1.2200 | 0.7130 | 0.7130 | 0.5994 | 0.5994 | -1.8484 | -1.8484 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3257 | 26.0 | 390 | 1.2235 | 0.7140 | 0.7140 | 0.5900 | 0.5900 | -1.8564 | -1.8564 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.4004 | 27.0 | 405 | 1.0978 | 0.6763 | 0.6763 | 0.5624 | 0.5624 | -1.5629 | -1.5629 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.283 | 28.0 | 420 | 1.1454 | 0.6909 | 0.6909 | 0.5697 | 0.5697 | -1.6742 | -1.6742 | -0.2174 | 0.0 | 0.5 | 0.3587 | nan |
| 0.3326 | 29.0 | 435 | 1.1214 | 0.6836 | 0.6836 | 0.5646 | 0.5646 | -1.6181 | -1.6181 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
| 0.2632 | 30.0 | 450 | 1.1098 | 0.6801 | 0.6801 | 0.5617 | 0.5617 | -1.5910 | -1.5910 | -0.1304 | 0.0 | 0.5 | 0.3333 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
responsibility-framing/predict-perception-xlmr-blame-assassin
|
responsibility-framing
| 2022-03-15T22:32:51Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T22:28:27Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: predict-perception-xlmr-blame-assassin
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# predict-perception-xlmr-blame-assassin
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4439
- Rmse: 0.9571
- Rmse Blame::a L'assassino: 0.9571
- Mae: 0.7260
- Mae Blame::a L'assassino: 0.7260
- R2: 0.6437
- R2 Blame::a L'assassino: 0.6437
- Cos: 0.7391
- Pair: 0.0
- Rank: 0.5
- Neighbors: 0.6287
- Rsa: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
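The card leaves this section empty. Assuming the checkpoint is a single-output regressor (as the Rmse/R2 metrics above suggest), it can presumably be queried through the standard text-classification pipeline with the score left unsquashed:

```python
from transformers import pipeline

MODEL_ID = "responsibility-framing/predict-perception-xlmr-blame-assassin"

def build_regressor(model_id=MODEL_ID):
    # function_to_apply="none" returns the raw regression score instead of
    # passing the single logit through sigmoid/softmax.
    return pipeline("text-classification", model=model_id, function_to_apply="none")

# regressor = build_regressor()  # downloads the checkpoint on first use
# regressor("some Italian sentence")  # -> [{'label': 'LABEL_0', 'score': <raw score>}]
```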
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 20
- eval_batch_size: 8
- seed: 1996
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Rmse Blame::a L'assassino | Mae | Mae Blame::a L'assassino | R2 | R2 Blame::a L'assassino | Cos | Pair | Rank | Neighbors | Rsa |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------------------------:|:------:|:------------------------:|:------:|:-----------------------:|:------:|:----:|:----:|:---------:|:---:|
| 1.0317 | 1.0 | 15 | 1.1311 | 1.5278 | 1.5278 | 1.3893 | 1.3893 | 0.0919 | 0.0919 | 0.5652 | 0.0 | 0.5 | 0.4512 | nan |
| 0.9475 | 2.0 | 30 | 1.0795 | 1.4926 | 1.4926 | 1.3387 | 1.3387 | 0.1334 | 0.1334 | 0.8261 | 0.0 | 0.5 | 0.6184 | nan |
| 0.9146 | 3.0 | 45 | 1.1092 | 1.5130 | 1.5130 | 1.4078 | 1.4078 | 0.1095 | 0.1095 | 0.4783 | 0.0 | 0.5 | 0.3116 | nan |
| 0.9539 | 4.0 | 60 | 1.1734 | 1.5561 | 1.5561 | 1.4238 | 1.4238 | 0.0580 | 0.0580 | 0.3913 | 0.0 | 0.5 | 0.3614 | nan |
| 0.8665 | 5.0 | 75 | 0.8910 | 1.3560 | 1.3560 | 1.2350 | 1.2350 | 0.2847 | 0.2847 | 0.5652 | 0.0 | 0.5 | 0.4136 | nan |
| 0.6564 | 6.0 | 90 | 0.8469 | 1.3220 | 1.3220 | 1.1570 | 1.1570 | 0.3201 | 0.3201 | 0.3913 | 0.0 | 0.5 | 0.3931 | nan |
| 0.5241 | 7.0 | 105 | 0.6429 | 1.1519 | 1.1519 | 0.9757 | 0.9757 | 0.4838 | 0.4838 | 0.5652 | 0.0 | 0.5 | 0.4222 | nan |
| 0.4589 | 8.0 | 120 | 0.5781 | 1.0923 | 1.0923 | 0.8714 | 0.8714 | 0.5359 | 0.5359 | 0.6522 | 0.0 | 0.5 | 0.4641 | nan |
| 0.4043 | 9.0 | 135 | 0.4525 | 0.9664 | 0.9664 | 0.8257 | 0.8257 | 0.6367 | 0.6367 | 0.5652 | 0.0 | 0.5 | 0.4263 | nan |
| 0.3498 | 10.0 | 150 | 0.4490 | 0.9627 | 0.9627 | 0.8272 | 0.8272 | 0.6395 | 0.6395 | 0.6522 | 0.0 | 0.5 | 0.5144 | nan |
| 0.3505 | 11.0 | 165 | 0.3721 | 0.8763 | 0.8763 | 0.7471 | 0.7471 | 0.7013 | 0.7013 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.3426 | 12.0 | 180 | 0.4117 | 0.9218 | 0.9218 | 0.7477 | 0.7477 | 0.6695 | 0.6695 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.3074 | 13.0 | 195 | 0.3761 | 0.8810 | 0.8810 | 0.7109 | 0.7109 | 0.6981 | 0.6981 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.2261 | 14.0 | 210 | 0.3818 | 0.8877 | 0.8877 | 0.7042 | 0.7042 | 0.6935 | 0.6935 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.2399 | 15.0 | 225 | 0.3893 | 0.8964 | 0.8964 | 0.7108 | 0.7108 | 0.6874 | 0.6874 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.2014 | 16.0 | 240 | 0.4606 | 0.9750 | 0.9750 | 0.8046 | 0.8046 | 0.6302 | 0.6302 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1937 | 17.0 | 255 | 0.4549 | 0.9689 | 0.9689 | 0.7679 | 0.7679 | 0.6348 | 0.6348 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1831 | 18.0 | 270 | 0.4113 | 0.9213 | 0.9213 | 0.6746 | 0.6746 | 0.6698 | 0.6698 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1758 | 19.0 | 285 | 0.4154 | 0.9259 | 0.9259 | 0.7053 | 0.7053 | 0.6665 | 0.6665 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1577 | 20.0 | 300 | 0.3970 | 0.9051 | 0.9051 | 0.7163 | 0.7163 | 0.6813 | 0.6813 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1597 | 21.0 | 315 | 0.4199 | 0.9309 | 0.9309 | 0.7270 | 0.7270 | 0.6629 | 0.6629 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1145 | 22.0 | 330 | 0.4250 | 0.9365 | 0.9365 | 0.6971 | 0.6971 | 0.6588 | 0.6588 | 0.8261 | 0.0 | 0.5 | 0.6594 | nan |
| 0.1349 | 23.0 | 345 | 0.4168 | 0.9275 | 0.9275 | 0.7126 | 0.7126 | 0.6654 | 0.6654 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1481 | 24.0 | 360 | 0.4421 | 0.9552 | 0.9552 | 0.7441 | 0.7441 | 0.6451 | 0.6451 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1188 | 25.0 | 375 | 0.4356 | 0.9481 | 0.9481 | 0.7444 | 0.7444 | 0.6503 | 0.6503 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1119 | 26.0 | 390 | 0.4456 | 0.9590 | 0.9590 | 0.7139 | 0.7139 | 0.6422 | 0.6422 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1282 | 27.0 | 405 | 0.4456 | 0.9589 | 0.9589 | 0.7637 | 0.7637 | 0.6423 | 0.6423 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.142 | 28.0 | 420 | 0.4501 | 0.9637 | 0.9637 | 0.7146 | 0.7146 | 0.6387 | 0.6387 | 0.8261 | 0.0 | 0.5 | 0.6594 | nan |
| 0.126 | 29.0 | 435 | 0.4442 | 0.9575 | 0.9575 | 0.7189 | 0.7189 | 0.6433 | 0.6433 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
| 0.1308 | 30.0 | 450 | 0.4439 | 0.9571 | 0.9571 | 0.7260 | 0.7260 | 0.6437 | 0.6437 | 0.7391 | 0.0 | 0.5 | 0.6287 | nan |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/independentmlt-maltatoday-thetimesofmalta
|
huggingtweets
| 2022-03-15T22:00:58Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-15T21:42:12Z |
---
language: en
thumbnail: http://www.huggingtweets.com/independentmlt-maltatoday-thetimesofmalta/1647381547913/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1333858206012084227/XP6EKW-K_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1419612859244457987/Ph3kXUL3_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1338811551994826752/XQnrubON_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">MaltaToday & Times of Malta & The Malta Independent</div>
<div style="text-align: center; font-size: 14px;">@independentmlt-maltatoday-thetimesofmalta</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from MaltaToday & Times of Malta & The Malta Independent.
| Data | MaltaToday | Times of Malta | The Malta Independent |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3250 | 3250 |
| Retweets | 1 | 0 | 5 |
| Short tweets | 3 | 0 | 1 |
| Tweets kept | 3246 | 3250 | 3244 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/2z9a8ves/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @independentmlt-maltatoday-thetimesofmalta's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/117uvo5a) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/117uvo5a/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/independentmlt-maltatoday-thetimesofmalta')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/maltatoday-netnewsmalta-one_news_malta
|
huggingtweets
| 2022-03-15T21:21:32Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-15T21:18:16Z |
---
language: en
thumbnail: http://www.huggingtweets.com/maltatoday-netnewsmalta-one_news_malta/1647379141053/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1442160889596026883/gq6jcObz_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1047423145077030912/0B4-Tgba_400x400.jpg')">
</div>
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1333858206012084227/XP6EKW-K_400x400.jpg')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI CYBORG 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">ONE news & NETnews & MaltaToday</div>
<div style="text-align: center; font-size: 14px;">@maltatoday-netnewsmalta-one_news_malta</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from ONE news & NETnews & MaltaToday.
| Data | ONE news | NETnews | MaltaToday |
| --- | --- | --- | --- |
| Tweets downloaded | 3250 | 3250 | 3250 |
| Retweets | 0 | 0 | 1 |
| Short tweets | 17 | 1 | 3 |
| Tweets kept | 3233 | 3249 | 3246 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1lme9vpn/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @maltatoday-netnewsmalta-one_news_malta's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/zkwd2sgh) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/zkwd2sgh/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/maltatoday-netnewsmalta-one_news_malta')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/theshiftnews
|
huggingtweets
| 2022-03-15T20:56:54Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-15T20:56:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/theshiftnews/1647377809961/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1318831968352612352/blMpdUu4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">The Shift News</div>
<div style="text-align: center; font-size: 14px;">@theshiftnews</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from The Shift News.
| Data | The Shift News |
| --- | --- |
| Tweets downloaded | 3216 |
| Retweets | 446 |
| Short tweets | 43 |
| Tweets kept | 2727 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1k4siv5q/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @theshiftnews's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/2cedhhrz) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/2cedhhrz/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/theshiftnews')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
huggingtweets/hampshireomen
|
huggingtweets
| 2022-03-15T20:52:01Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
---
language: en
thumbnail: http://www.huggingtweets.com/hampshireomen/1647377480803/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1111434706745069575/7L1hshMt_400x400.png')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">the omen is cringe tbh</div>
<div style="text-align: center; font-size: 14px;">@hampshireomen</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from the omen is cringe tbh.
| Data | the omen is cringe tbh |
| --- | --- |
| Tweets downloaded | 1462 |
| Retweets | 68 |
| Short tweets | 109 |
| Tweets kept | 1285 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1792rc86/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @hampshireomen's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1y440us5) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1y440us5/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/hampshireomen')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Rustem/distilroberta-base-trainedmodel
|
Rustem
| 2022-03-15T19:32:36Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"fill-mask",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-15T19:28:05Z |
---
license: apache-2.0
---
|
Ebtihal/AraBertMo_base_V10
|
Ebtihal
| 2022-03-15T19:10:54Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
fill-mask
| 2022-03-04T19:18:16Z |
Arabic Model AraBertMo_base_V10
---
language: ar
tags: Fill-Mask
datasets: OSCAR
widget:
- text: " السلام عليكم ورحمة[MASK] وبركاته"
- text: " اهلا وسهلا بكم في [MASK] من سيربح المليون"
- text: " مرحبا بك عزيزي الزائر [MASK] موقعنا "
---
# Arabic BERT Model
**AraBERTMo** is an Arabic pre-trained language model based on [Google's BERT architecture](https://github.com/google-research/bert).
AraBERTMo_base uses the same BERT-Base config.
AraBERTMo_base now comes in 10 new variants.
All models are available on the `HuggingFace` model page under the [Ebtihal](https://huggingface.co/Ebtihal/) name.
Checkpoints are available in PyTorch formats.
## Pretraining Corpus
The `AraBertMo_base_V10` model was pre-trained on ~3 million words:
- [OSCAR](https://traces1.inria.fr/oscar/) - Arabic version "unshuffled_deduplicated_ar".
## Training results
This model achieves the following results:
| Task | Num examples | Num Epochs | Batch Size | steps | Wall time | training loss|
|:----:|:----:|:----:|:----:|:-----:|:----:|:-----:|
| Fill-Mask| 30024| 10 | 64 | 4700 | 9h 13m 43s | 7.2395 |
## Load Pretrained Model
You can use this model by installing `torch` or `tensorflow` together with the Hugging Face `transformers` library, then initializing it directly like this:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained("Ebtihal/AraBertMo_base_V10")
model = AutoModelForMaskedLM.from_pretrained("Ebtihal/AraBertMo_base_V10")
```
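You can also query it through a fill-mask pipeline. This is a sketch: the helper name, `top_k` value, and example call are illustrative, not part of the original card.

```python
from transformers import pipeline

def top_predictions(text, model_id="Ebtihal/AraBertMo_base_V10", top_k=5):
    """Return the top-k candidate tokens for the [MASK] position."""
    fill_mask = pipeline("fill-mask", model=model_id)
    return [prediction["token_str"] for prediction in fill_mask(text, top_k=top_k)]

# Downloads the model on first use; the Arabic sentence is an example prompt:
# top_predictions(" السلام عليكم ورحمة[MASK] وبركاته")
```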
## This model was built for master's degree research at:
- [University of kufa](https://uokufa.edu.iq/).
- [Faculty of Computer Science and Mathematics](https://mathcomp.uokufa.edu.iq/).
- **Department of Computer Science**
|
DrishtiSharma/poem-gen-t5-small
|
DrishtiSharma
| 2022-03-15T18:50:42Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-15T15:08:11Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: poem-gen-t5-small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-t5-small
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.1066
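Since the card does not document usage, here is a minimal generation sketch using the standard T5 seq2seq API; the prompt format used at training time is not documented, so the plain prompt below is only a guess:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("DrishtiSharma/poem-gen-t5-small")
model = AutoModelForSeq2SeqLM.from_pretrained("DrishtiSharma/poem-gen-t5-small")

# Hypothetical prompt; the training input format is unknown
inputs = tokenizer("the moon over the sea", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.95)
poem = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(poem)
```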
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 3.67 | 0.32 | 5000 | 3.4705 |
| 3.573 | 0.63 | 10000 | 3.3747 |
| 3.5075 | 0.95 | 15000 | 3.3154 |
| 3.4486 | 1.26 | 20000 | 3.2704 |
| 3.4207 | 1.58 | 25000 | 3.2351 |
| 3.3933 | 1.89 | 30000 | 3.2069 |
| 3.3612 | 2.21 | 35000 | 3.1853 |
| 3.34 | 2.53 | 40000 | 3.1659 |
| 3.3422 | 2.84 | 45000 | 3.1503 |
| 3.3034 | 3.16 | 50000 | 3.1376 |
| 3.2886 | 3.47 | 55000 | 3.1283 |
| 3.2806 | 3.79 | 60000 | 3.1208 |
| 3.2745 | 4.1 | 65000 | 3.1141 |
| 3.2894 | 4.42 | 70000 | 3.1093 |
| 3.264 | 4.74 | 75000 | 3.1075 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
spasis/marian-finetuned-kde4-en-to-fr
|
spasis
| 2022-03-15T17:39:40Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"marian",
"text2text-generation",
"tanslation",
"generated_from_trainer",
"dataset:kde4",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-15T15:14:38Z |
---
license: apache-2.0
tags:
- translation
- generated_from_trainer
datasets:
- kde4
model-index:
- name: marian-finetuned-kde4-en-to-fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-fr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on the kde4 dataset.
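The fine-tuned checkpoint can be used through the `translation` pipeline; the KDE-flavored example string is illustrative:

```python
from transformers import pipeline

# English-to-French translator fine-tuned on KDE4 localization strings
translator = pipeline("translation", model="spasis/marian-finetuned-kde4-en-to-fr")

result = translator("Default to expanded threads")
print(result[0]["translation_text"])
```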
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 128
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1
- Datasets 1.17.0
- Tokenizers 0.10.3
|
DrishtiSharma/wav2vec2-base-finetuned-ks
|
DrishtiSharma
| 2022-03-15T17:32:51Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"generated_from_trainer",
"dataset:superb",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
audio-classification
| 2022-03-15T14:04:38Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- superb
metrics:
- accuracy
model-index:
- name: wav2vec2-base-finetuned-ks
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-finetuned-ks
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0817
- Accuracy: 0.9844
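A minimal keyword-spotting sketch via the `audio-classification` pipeline; one second of silence stands in for a real 16 kHz recording:

```python
import numpy as np
from transformers import pipeline

classifier = pipeline(
    "audio-classification",
    model="DrishtiSharma/wav2vec2-base-finetuned-ks",
)

# Placeholder input: one second of silence at 16 kHz (replace with real audio)
audio = np.zeros(16000, dtype=np.float32)
preds = classifier(audio, top_k=3)
for p in preds:
    print(p["label"], p["score"])
```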
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6386 | 1.0 | 399 | 0.5305 | 0.9601 |
| 0.2358 | 2.0 | 798 | 0.1774 | 0.9747 |
| 0.1982 | 3.0 | 1197 | 0.1172 | 0.9794 |
| 0.1554 | 4.0 | 1596 | 0.0884 | 0.9835 |
| 0.1261 | 5.0 | 1995 | 0.0817 | 0.9844 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
smartiros/BERT_for_sentiment_5k_2pcs_sampled_airlines_tweets
|
smartiros
| 2022-03-15T16:27:13Z | 3 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T16:26:59Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: tmpny35efxx
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# tmpny35efxx
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1996
- Train Accuracy: 0.9348
- Validation Loss: 0.8523
- Validation Accuracy: 0.7633
- Epoch: 1
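A minimal TensorFlow inference sketch for this checkpoint; note the card does not document the label mapping, so the meaning of the predicted class index is an assumption left to the reader:

```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "smartiros/BERT_for_sentiment_5k_2pcs_sampled_airlines_tweets"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("my flight was delayed for three hours", return_tensors="tf")
logits = model(**inputs).logits
# The card does not say which index is positive/negative
pred = int(tf.argmax(logits, axis=-1)[0])
print(pred)
```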
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'clipnorm': 1.0, 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5865 | 0.7626 | 0.5505 | 0.8010 | 0 |
| 0.1996 | 0.9348 | 0.8523 | 0.7633 | 1 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.0
- Tokenizers 0.11.6
|
mfleck/wav2vec2-large-xls-r-300m-slowenian-with-lm
|
mfleck
| 2022-03-15T16:15:30Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-15T15:01:45Z |
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: wav2vec2-large-xls-r-300m-slowenian-with-lm
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-slowenian-with-lm
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3935
- Wer: 0.3480
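The model (including its language model, handled automatically when the repo ships a decoder) can be used through the ASR pipeline; silence is used below only as a placeholder input:

```python
import numpy as np
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="mfleck/wav2vec2-large-xls-r-300m-slowenian-with-lm",
)

# Placeholder: replace with a real Slovenian recording sampled at 16 kHz
audio = np.zeros(16000, dtype=np.float32)
text = asr({"raw": audio, "sampling_rate": 16000})["text"]
print(text)
```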
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.9937 | 2.5 | 100 | 3.1565 | 1.0 |
| 3.0466 | 5.0 | 200 | 3.0009 | 0.9992 |
| 2.9708 | 7.5 | 300 | 2.9494 | 0.9992 |
| 2.0519 | 10.0 | 400 | 0.8874 | 0.7290 |
| 0.5773 | 12.5 | 500 | 0.5258 | 0.5037 |
| 0.3427 | 15.0 | 600 | 0.4767 | 0.4649 |
| 0.2612 | 17.5 | 700 | 0.4549 | 0.4209 |
| 0.212 | 20.0 | 800 | 0.4294 | 0.3860 |
| 0.1748 | 22.5 | 900 | 0.4085 | 0.3769 |
| 0.1587 | 25.0 | 1000 | 0.4017 | 0.3673 |
| 0.1435 | 27.5 | 1100 | 0.3927 | 0.3538 |
| 0.1314 | 30.0 | 1200 | 0.3935 | 0.3480 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.9.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
public-data/StyleSwin
|
public-data
| 2022-03-15T14:39:14Z | 0 | 0 | null |
[
"region:us"
] | null | 2022-03-15T14:29:57Z |
# StyleSwin
- Repo: https://github.com/microsoft/StyleSwin
- https://drive.google.com/file/d/1OjYZ1zEWGNdiv0RFKv7KhXRmYko72LjO/view?usp=sharing
- https://drive.google.com/file/d/1HF0wFNuz1WFrqGEbPhOXjL4QrY05Zu_m/view?usp=sharing
- https://drive.google.com/file/d/1YtIJOgLFfkaMI_KL2gBQNABFb1cwOzvM/view?usp=sharing
- https://drive.google.com/file/d/17-ILwzLBoHq4HTdAPeaCug7iBvxKWkvp/view?usp=sharing
- https://drive.google.com/file/d/1y3wkykjvCbteTaGTRF8EedkG-N1Z8jFf/view?usp=sharing
|
clips/contact
|
clips
| 2022-03-15T12:57:53Z | 22 | 0 |
transformers
|
[
"transformers",
"pytorch",
"roberta",
"feature-extraction",
"arxiv:2203.07362",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
feature-extraction
| 2022-03-02T23:29:05Z |
# CoNTACT
### Model description
<u>Co</u>ntextual <u>N</u>eural <u>T</u>ransformer <u>A</u>dapted to <u>C</u>OVID-19 <u>T</u>weets or **CoNTACT** is a Dutch RobBERT model (```pdelobelle/robbert-v2-dutch-base```) adapted to the domain of COVID-19 tweets. The model was developed at [CLiPS](https://www.uantwerpen.be/en/research-groups/clips/) by Jens Lemmens, Jens Van Nooten, Tim Kreutz and Walter Daelemans. A full description of the model, the data that was used and the experiments that were conducted can be found in this ArXiv preprint: https://arxiv.org/abs/2203.07362
### Intended use
The model was developed with the intention of achieving high results on NLP tasks involving Dutch social media messages related to COVID-19.
### How to use
CoNTACT should be fine-tuned on a downstream task. This can be achieved by referring to ```clips/contact``` in the ```--model_name_or_path``` argument in Huggingface/Transformers' example scripts, or by loading CoNTACT (as shown below) and fine-tuning it using your own code:
```python
from transformers import AutoModel, AutoTokenizer
model = AutoModel.from_pretrained('clips/contact')
tokenizer = AutoTokenizer.from_pretrained('clips/contact')
...
```
### Training data
CoNTACT was trained on 2.8M Dutch tweets related to COVID-19 that were posted in 2021.
### Training Procedure
The model's pre-training phase was extended by performing Masked Language Modeling (MLM) on the training data described above. This was done for 4 epochs, using the largest possible batch size that fit working memory (32).
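The continued-pretraining setup described above can be sketched with the `transformers` Trainer. The 2.8M tweet corpus is not public, so a single placeholder sentence stands in for it here:

```python
from transformers import (AutoTokenizer, AutoModelForMaskedLM,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

tokenizer = AutoTokenizer.from_pretrained("pdelobelle/robbert-v2-dutch-base")
model = AutoModelForMaskedLM.from_pretrained("pdelobelle/robbert-v2-dutch-base")

# Placeholder corpus: the actual COVID-19 tweets are not public
texts = ["Dit is een voorbeeldtweet over vaccinatie."]
encodings = tokenizer(texts, truncation=True, max_length=64)
dataset = [{"input_ids": ids} for ids in encodings["input_ids"]]

# MLM collator with the standard 15% masking probability
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer,
                                           mlm_probability=0.15)
args = TrainingArguments(output_dir="contact-mlm",
                         num_train_epochs=4,
                         per_device_train_batch_size=32)
trainer = Trainer(model=model, args=args, data_collator=collator,
                  train_dataset=dataset)
# trainer.train() would launch the MLM run described above
```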
### Evaluation
The model was evaluated on two tasks using data from two social media platforms: Twitter and Facebook. Task 1 involved the binary classification of COVID-19 vaccine stance (hesitant vs. not hesitant), whereas task 2 consisted of the multilabel, multiclass classification of arguments for vaccine hesitancy. CoNTACT outperformed out-of-the-box RobBERT in virtually all our experiments, and with statistical significance in most cases.
### How to cite
```bibtex
@misc{lemmens2022contact,
title={CoNTACT: A Dutch COVID-19 Adapted BERT for Vaccine Hesitancy and Argumentation Detection},
author={Jens Lemmens and Jens Van Nooten and Tim Kreutz and Walter Daelemans},
year={2022},
eprint={2203.07362},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
mansidw/finetuning-sentiment-model-12000-samples
|
mansidw
| 2022-03-15T09:40:05Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:ag_news",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-14T19:40:20Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- ag_news
model-index:
- name:
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-12000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the ag_news dataset.
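A minimal inference sketch via the `text-classification` pipeline; note the card does not document the label names, so the raw pipeline labels (e.g. `LABEL_0`) may need to be mapped to the ag_news topics:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mansidw/finetuning-sentiment-model-12000-samples",
)

preds = classifier("Wall Street rallies as tech stocks rebound.")
print(preds)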
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Cedille/fr-boris
|
Cedille
| 2022-03-15T08:36:54Z | 2,990 | 39 |
transformers
|
[
"transformers",
"pytorch",
"gptj",
"text-generation",
"causal-lm",
"fr",
"dataset:c4",
"arxiv:2202.03371",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:04Z |
---
language: fr
license: mit
tags:
- pytorch
- causal-lm
datasets:
- c4
---
# Cedille AI
Cedille is a project to bring large language models to non-English languages.
## fr-boris
Boris is a 6B parameter autoregressive language model based on the GPT-J architecture and trained using the [mesh-transformer-jax](https://github.com/kingoflolz/mesh-transformer-jax) codebase.
Boris was trained on around 78B tokens of French text from the [C4](https://huggingface.co/datasets/c4) dataset. We started training from GPT-J, which has been trained on [The Pile](https://pile.eleuther.ai/). As a consequence, the model still performs well in English. Boris uses the unmodified GPT-2 tokenizer.
Boris is named after the great French writer [Boris Vian](https://en.wikipedia.org/wiki/Boris_Vian).
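For local use, the model loads through the standard causal-LM API (beware: at 6B parameters the fp32 checkpoint needs roughly 24 GB of memory); the French prompt is illustrative:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Cedille/fr-boris")
model = AutoModelForCausalLM.from_pretrained("Cedille/fr-boris")  # large download

inputs = tokenizer("Le Petit Prince est", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(text)
```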
# How do I test Cedille?
For the time being, the easiest way to test the model is to use our [publicly accessible playground](https://en.cedille.ai/).
Cedille is a relatively large model and running it in production can get expensive. Consider contacting us for API access at hello@cedille.ai.
## Cedille paper
Our paper is out now! https://arxiv.org/abs/2202.03371
Please cite our work if you make use of Cedille:
```bibtex
@misc{muller2022cedille,
title={Cedille: A large autoregressive French language model},
author={Martin M{\"{u}}ller and Florian Laurent},
year={2022},
eprint={2202.03371},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
## Contact us
For any custom development please contact us at hello@cedille.ai.
## Links
* [Official website](https://en.cedille.ai/)
* [Blog](https://en.cedille.ai/blog)
* [GitHub](https://github.com/coteries/cedille-ai)
* [Twitter](https://twitter.com/CedilleAI)
|
mjc00/distilbert-base-uncased-finetuned-emotion
|
mjc00
| 2022-03-15T05:48:00Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-15T05:23:44Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.924132235882821
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2153
- Accuracy: 0.924
- F1: 0.9241
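A minimal emotion-classification sketch via the pipeline API; the example sentence is illustrative:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="mjc00/distilbert-base-uncased-finetuned-emotion",
)

preds = classifier("I can't wait to see you again!")
print(preds)
```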
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7986 | 1.0 | 250 | 0.3021 | 0.91 | 0.9078 |
| 0.2386 | 2.0 | 500 | 0.2153 | 0.924 | 0.9241 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.0+cu111
- Datasets 1.16.1
- Tokenizers 0.10.3
|
StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_English
|
StivenLancheros
| 2022-03-14T23:42:29Z | 3 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-14T22:56:59Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: biobert-base-cased-v1.2-finetuned-ner-CRAFT_English
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# biobert-base-cased-v1.2-finetuned-ner-CRAFT_English
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1614
- Precision: 0.8585
- Recall: 0.8623
- F1: 0.8604
- Accuracy: 0.9724
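A minimal NER sketch via the `token-classification` pipeline; the biomedical example sentence is illustrative, and the entity label set comes from the (undocumented here) CRAFT annotation scheme:

```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="StivenLancheros/biobert-base-cased-v1.2-finetuned-ner-CRAFT_English",
    aggregation_strategy="simple",  # merge word pieces into whole entities
)

entities = ner("The Wnt signaling pathway regulates beta-catenin in mouse embryos.")
for e in entities:
    print(e["entity_group"], e["word"], e["score"])
```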
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0725 | 1.0 | 1360 | 0.1242 | 0.8090 | 0.8698 | 0.8383 | 0.9681 |
| 0.0281 | 2.0 | 2720 | 0.1541 | 0.8497 | 0.8549 | 0.8523 | 0.9705 |
| 0.0162 | 3.0 | 4080 | 0.1510 | 0.8390 | 0.8681 | 0.8533 | 0.9711 |
| 0.0053 | 4.0 | 5440 | 0.1614 | 0.8585 | 0.8623 | 0.8604 | 0.9724 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
peterhsu/codeparrot-ds
|
peterhsu
| 2022-03-14T23:00:48Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-14T15:52:25Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codeparrot-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codeparrot-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9729
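The model appears to follow the CodeParrot recipe (GPT-2 trained on Python code), so a code-completion sketch is the natural usage; the prompt is illustrative:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="peterhsu/codeparrot-ds")

# Complete a Python function signature
completion = generator("def load_csv(path):", max_new_tokens=40)[0]["generated_text"]
print(completion)
```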
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.4939 | 0.93 | 5000 | 1.9729 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
sanchit-gandhi/wav2vec2-2-bart-large-no-adapter
|
sanchit-gandhi
| 2022-03-14T21:45:57Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-14T12:33:35Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 5.6120
- Wer: 1.0267
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 6.7189 | 0.56 | 500 | 6.9796 | 0.9350 |
| 6.5068 | 1.12 | 1000 | 6.4823 | 1.3923 |
| 6.4601 | 1.68 | 1500 | 6.1801 | 1.1578 |
| 6.1802 | 2.24 | 2000 | 6.0002 | 1.7750 |
| 6.0888 | 2.8 | 2500 | 5.8453 | 1.7581 |
| 6.0993 | 3.36 | 3000 | 5.7702 | 1.4096 |
| 6.0851 | 3.92 | 3500 | 5.6634 | 1.0944 |
| 5.9357 | 4.48 | 4000 | 5.6120 | 1.0267 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
pyf98/librispeech_conformer_hop_length160
|
pyf98
| 2022-03-14T18:24:04Z | 9 | 0 |
espnet
|
[
"espnet",
"audio",
"automatic-speech-recognition",
"en",
"dataset:librispeech",
"arxiv:1804.00015",
"license:cc-by-4.0",
"region:us"
] |
automatic-speech-recognition
| 2022-03-14T18:16:15Z |
---
tags:
- espnet
- audio
- automatic-speech-recognition
language: en
datasets:
- librispeech
license: cc-by-4.0
---
## ESPnet2 ASR model
### `pyf98/librispeech_conformer_hop_length160`
This model was trained by Yifan Peng using the librispeech recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout 33edd1fc077f6a35e8cb0a59f208cb4564aa4cfb
pip install -e .
cd egs2/librispeech/asr1
./run.sh --skip_data_prep false --skip_train true --download_model pyf98/librispeech_conformer_hop_length160
```
<!-- Generated by scripts/utils/show_asr_result.sh -->
# RESULTS
## Environments
- date: `Mon Mar 14 12:26:10 EDT 2022`
- python version: `3.9.7 (default, Sep 16 2021, 13:09:58) [GCC 7.5.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.10.1`
- Git hash: `467660021998c416ac366aed0f75f3399e321a3a`
- Commit date: `Sun Mar 13 17:08:56 2022 -0400`
## asr_train_asr_conformer10_hop_length160_raw_en_bpe5000_sp
### WER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|beam60_ctc0.3/dev_clean|2703|54402|98.1|1.7|0.2|0.2|2.1|27.7|
|beam60_ctc0.3/dev_other|2864|50948|95.3|4.3|0.4|0.5|5.2|44.1|
|beam60_ctc0.3/test_clean|2620|52576|97.9|1.9|0.2|0.3|2.4|27.9|
|beam60_ctc0.3/test_other|2939|52343|95.4|4.1|0.4|0.6|5.2|44.8|
|beam60_ctc0.3_lm0.6/dev_clean|2703|54402|98.4|1.4|0.2|0.2|1.8|23.3|
|beam60_ctc0.3_lm0.6/dev_other|2864|50948|96.4|3.2|0.4|0.4|3.9|36.2|
|beam60_ctc0.3_lm0.6/test_clean|2620|52576|98.3|1.5|0.2|0.2|2.0|23.7|
|beam60_ctc0.3_lm0.6/test_other|2939|52343|96.2|3.3|0.4|0.5|4.2|39.6|
### CER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|beam60_ctc0.3/dev_clean|2703|288456|99.5|0.3|0.2|0.2|0.7|27.7|
|beam60_ctc0.3/dev_other|2864|265951|98.4|1.0|0.6|0.6|2.2|44.1|
|beam60_ctc0.3/test_clean|2620|281530|99.4|0.3|0.3|0.2|0.8|27.9|
|beam60_ctc0.3/test_other|2939|272758|98.5|0.9|0.7|0.6|2.1|44.8|
|beam60_ctc0.3_lm0.6/dev_clean|2703|288456|99.5|0.2|0.2|0.2|0.6|23.3|
|beam60_ctc0.3_lm0.6/dev_other|2864|265951|98.5|0.8|0.6|0.5|1.9|36.2|
|beam60_ctc0.3_lm0.6/test_clean|2620|281530|99.5|0.2|0.3|0.2|0.7|23.7|
|beam60_ctc0.3_lm0.6/test_other|2939|272758|98.6|0.7|0.7|0.5|1.9|39.6|
### TER
|dataset|Snt|Wrd|Corr|Sub|Del|Ins|Err|S.Err|
|---|---|---|---|---|---|---|---|---|
|beam60_ctc0.3/dev_clean|2703|68010|97.6|1.7|0.6|0.4|2.7|27.7|
|beam60_ctc0.3/dev_other|2864|63110|94.2|4.3|1.5|0.9|6.7|44.1|
|beam60_ctc0.3/test_clean|2620|65818|97.4|1.8|0.8|0.4|3.0|27.9|
|beam60_ctc0.3/test_other|2939|65101|94.4|3.9|1.7|0.8|6.4|44.8|
|beam60_ctc0.3_lm0.6/dev_clean|2703|68010|98.0|1.4|0.6|0.3|2.3|23.3|
|beam60_ctc0.3_lm0.6/dev_other|2864|63110|95.2|3.4|1.4|0.6|5.5|36.2|
|beam60_ctc0.3_lm0.6/test_clean|2620|65818|97.8|1.4|0.8|0.3|2.5|23.7|
|beam60_ctc0.3_lm0.6/test_other|2939|65101|95.1|3.2|1.7|0.6|5.5|39.6|
## ASR config
<details><summary>expand</summary>
```
config: conf/tuning/train_asr_conformer10_hop_length160.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/asr_train_asr_conformer10_hop_length160_raw_en_bpe5000_sp
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: 4
dist_rank: 0
local_rank: 0
dist_master_addr: localhost
dist_master_port: 51595
dist_launcher: null
multiprocessing_distributed: true
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 50
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- acc
- max
keep_nbest_models: 10
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 4
no_forward_run: false
resume: true
train_dtype: float32
use_amp: true
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 20
valid_batch_size: null
batch_bins: 35000000
valid_batch_bins: null
train_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/train/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/train/text_shape.bpe
valid_shape_file:
- exp/asr_stats_raw_en_bpe5000_sp/valid/speech_shape
- exp/asr_stats_raw_en_bpe5000_sp/valid/text_shape.bpe
batch_type: numel
valid_batch_type: null
fold_length:
- 80000
- 150
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/train_960_sp/wav.scp
- speech
- sound
- - dump/raw/train_960_sp/text
- text
- text
valid_data_path_and_name_and_type:
- - dump/raw/dev/wav.scp
- speech
- sound
- - dump/raw/dev/text
- text
- text
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0025
weight_decay: 1.0e-06
scheduler: warmuplr
scheduler_conf:
warmup_steps: 40000
token_list:
- <blank>
- <unk>
- ▁THE
- S
- ▁AND
- ▁OF
- ▁TO
- ▁A
- ▁IN
- ▁I
- ▁HE
- ▁THAT
- ▁WAS
- ED
- ▁IT
- ''''
- ▁HIS
- ING
- ▁YOU
- ▁WITH
- ▁FOR
- ▁HAD
- T
- ▁AS
- ▁HER
- ▁IS
- ▁BE
- ▁BUT
- ▁NOT
- ▁SHE
- D
- ▁AT
- ▁ON
- LY
- ▁HIM
- ▁THEY
- ▁ALL
- ▁HAVE
- ▁BY
- ▁SO
- ▁THIS
- ▁MY
- ▁WHICH
- ▁ME
- ▁SAID
- ▁FROM
- ▁ONE
- Y
- E
- ▁WERE
- ▁WE
- ▁NO
- N
- ▁THERE
- ▁OR
- ER
- ▁AN
- ▁WHEN
- ▁ARE
- ▁THEIR
- ▁WOULD
- ▁IF
- ▁WHAT
- ▁THEM
- ▁WHO
- ▁OUT
- M
- ▁DO
- ▁WILL
- ▁UP
- ▁BEEN
- P
- R
- ▁MAN
- ▁THEN
- ▁COULD
- ▁MORE
- C
- ▁INTO
- ▁NOW
- ▁VERY
- ▁YOUR
- ▁SOME
- ▁LITTLE
- ES
- ▁TIME
- RE
- ▁CAN
- ▁LIKE
- LL
- ▁ABOUT
- ▁HAS
- ▁THAN
- ▁DID
- ▁UPON
- ▁OVER
- IN
- ▁ANY
- ▁WELL
- ▁ONLY
- B
- ▁SEE
- ▁GOOD
- ▁OTHER
- ▁TWO
- L
- ▁KNOW
- ▁GO
- ▁DOWN
- ▁BEFORE
- A
- AL
- ▁OUR
- ▁OLD
- ▁SHOULD
- ▁MADE
- ▁AFTER
- ▁GREAT
- ▁DAY
- ▁MUST
- ▁COME
- ▁HOW
- ▁SUCH
- ▁CAME
- LE
- ▁WHERE
- ▁US
- ▁NEVER
- ▁THESE
- ▁MUCH
- ▁DE
- ▁MISTER
- ▁WAY
- G
- ▁S
- ▁MAY
- ATION
- ▁LONG
- OR
- ▁AM
- ▁FIRST
- ▁BACK
- ▁OWN
- ▁RE
- ▁AGAIN
- ▁SAY
- ▁MEN
- ▁WENT
- ▁HIMSELF
- ▁HERE
- NESS
- ▁THINK
- V
- IC
- ▁EVEN
- ▁THOUGHT
- ▁HAND
- ▁JUST
- ▁O
- ▁UN
- VE
- ION
- ▁ITS
- 'ON'
- ▁MAKE
- ▁MIGHT
- ▁TOO
- K
- ▁AWAY
- ▁LIFE
- TH
- ▁WITHOUT
- ST
- ▁THROUGH
- ▁MOST
- ▁TAKE
- ▁DON
- ▁EVERY
- F
- O
- ▁SHALL
- ▁THOSE
- ▁EYES
- AR
- ▁STILL
- ▁LAST
- ▁HOUSE
- ▁HEAD
- ABLE
- ▁NOTHING
- ▁NIGHT
- ITY
- ▁LET
- ▁MANY
- ▁OFF
- ▁BEING
- ▁FOUND
- ▁WHILE
- EN
- ▁SAW
- ▁GET
- ▁PEOPLE
- ▁FACE
- ▁YOUNG
- CH
- ▁UNDER
- ▁ONCE
- ▁TELL
- AN
- ▁THREE
- ▁PLACE
- ▁ROOM
- ▁YET
- ▁SAME
- IL
- US
- U
- ▁FATHER
- ▁RIGHT
- EL
- ▁THOUGH
- ▁ANOTHER
- LI
- RI
- ▁HEART
- IT
- ▁PUT
- ▁TOOK
- ▁GIVE
- ▁EVER
- ▁E
- ▁PART
- ▁WORK
- ERS
- ▁LOOK
- ▁NEW
- ▁KING
- ▁MISSUS
- ▁SIR
- ▁LOVE
- ▁MIND
- ▁LOOKED
- W
- RY
- ▁ASKED
- ▁LEFT
- ET
- ▁LIGHT
- CK
- ▁DOOR
- ▁MOMENT
- RO
- ▁WORLD
- ▁THINGS
- ▁HOME
- UL
- ▁THING
- LA
- ▁WHY
- ▁MOTHER
- ▁ALWAYS
- ▁FAR
- FUL
- ▁WATER
- CE
- IVE
- UR
- ▁HEARD
- ▁SOMETHING
- ▁SEEMED
- I
- LO
- ▁BECAUSE
- OL
- ▁END
- ▁TOLD
- ▁CON
- ▁YES
- ▁GOING
- ▁GOT
- RA
- IR
- ▁WOMAN
- ▁GOD
- EST
- TED
- ▁FIND
- ▁KNEW
- ▁SOON
- ▁EACH
- ▁SIDE
- H
- TON
- MENT
- ▁OH
- NE
- Z
- LING
- ▁AGAINST
- TER
- ▁NAME
- ▁MISS
- ▁QUITE
- ▁WANT
- ▁YEARS
- ▁FEW
- ▁BETTER
- ENT
- ▁HALF
- ▁DONE
- ▁ALSO
- ▁BEGAN
- ▁HAVING
- ▁ENOUGH
- IS
- ▁LADY
- ▁WHOLE
- LESS
- ▁BOTH
- ▁SEEN
- ▁SET
- ▁WHITE
- ▁COURSE
- IES
- ▁VOICE
- ▁CALLED
- ▁D
- ▁EX
- ATE
- ▁TURNED
- ▁GAVE
- ▁C
- ▁POOR
- MAN
- UT
- NA
- ▁DEAR
- ISH
- ▁GIRL
- ▁MORNING
- ▁BETWEEN
- LED
- ▁NOR
- IA
- ▁AMONG
- MA
- ▁
- ▁SMALL
- ▁REST
- ▁WHOM
- ▁FELT
- ▁HANDS
- ▁MYSELF
- ▁HIGH
- ▁M
- ▁HOWEVER
- ▁HERSELF
- ▁P
- CO
- ▁STOOD
- ID
- ▁KIND
- ▁HUNDRED
- AS
- ▁ROUND
- ▁ALMOST
- TY
- ▁SINCE
- ▁G
- AM
- ▁LA
- SE
- ▁BOY
- ▁MA
- ▁PERHAPS
- ▁WORDS
- ATED
- ▁HO
- X
- ▁MO
- ▁SAT
- ▁REPLIED
- ▁FOUR
- ▁ANYTHING
- ▁TILL
- ▁UNTIL
- ▁BLACK
- TION
- ▁CRIED
- RU
- TE
- ▁FACT
- ▁HELP
- ▁NEXT
- ▁LOOKING
- ▁DOES
- ▁FRIEND
- ▁LAY
- ANCE
- ▁POWER
- ▁BROUGHT
- VER
- ▁FIRE
- ▁KEEP
- PO
- FF
- ▁COUNTRY
- ▁SEA
- ▁WORD
- ▁CAR
- ▁DAYS
- ▁TOGETHER
- ▁IMP
- ▁REASON
- KE
- ▁INDEED
- TING
- ▁MATTER
- ▁FULL
- ▁TEN
- TIC
- ▁LAND
- ▁RATHER
- ▁AIR
- ▁HOPE
- ▁DA
- ▁OPEN
- ▁FEET
- ▁EN
- ▁FIVE
- ▁POINT
- ▁CO
- OM
- ▁LARGE
- ▁B
- ▁CL
- ME
- ▁GONE
- ▁CHILD
- INE
- GG
- ▁BEST
- ▁DIS
- UM
- ▁HARD
- ▁LORD
- OUS
- ▁WIFE
- ▁SURE
- ▁FORM
- DE
- ▁DEATH
- ANT
- ▁NATURE
- ▁BA
- ▁CARE
- ▁BELIEVE
- PP
- ▁NEAR
- ▁RO
- ▁RED
- ▁WAR
- IE
- ▁SPEAK
- ▁FEAR
- ▁CASE
- ▁TAKEN
- ▁ALONG
- ▁CANNOT
- ▁HEAR
- ▁THEMSELVES
- CI
- ▁PRESENT
- AD
- ▁MASTER
- ▁SON
- ▁THUS
- ▁LI
- ▁LESS
- ▁SUN
- ▁TRUE
- IM
- IOUS
- ▁THOUSAND
- ▁MONEY
- ▁W
- ▁BEHIND
- ▁CHILDREN
- ▁DOCTOR
- AC
- ▁TWENTY
- ▁WISH
- ▁SOUND
- ▁WHOSE
- ▁LEAVE
- ▁ANSWERED
- ▁THOU
- ▁DUR
- ▁HA
- ▁CERTAIN
- ▁PO
- ▁PASSED
- GE
- TO
- ▁ARM
- ▁LO
- ▁STATE
- ▁ALONE
- TA
- ▁SHOW
- ▁NEED
- ▁LIVE
- ND
- ▁DEAD
- ENCE
- ▁STRONG
- ▁PRE
- ▁TI
- ▁GROUND
- SH
- TI
- ▁SHORT
- IAN
- UN
- ▁PRO
- ▁HORSE
- MI
- ▁PRINCE
- ARD
- ▁FELL
- ▁ORDER
- ▁CALL
- AT
- ▁GIVEN
- ▁DARK
- ▁THEREFORE
- ▁CLOSE
- ▁BODY
- ▁OTHERS
- ▁SENT
- ▁SECOND
- ▁OFTEN
- ▁CA
- ▁MANNER
- MO
- NI
- ▁BRING
- ▁QUESTION
- ▁HOUR
- ▁BO
- AGE
- ▁ST
- ▁TURN
- ▁TABLE
- ▁GENERAL
- ▁EARTH
- ▁BED
- ▁REALLY
- ▁SIX
- 'NO'
- IST
- ▁BECOME
- ▁USE
- ▁READ
- ▁SE
- ▁VI
- ▁COMING
- ▁EVERYTHING
- ▁EM
- ▁ABOVE
- ▁EVENING
- ▁BEAUTIFUL
- ▁FEEL
- ▁RAN
- ▁LEAST
- ▁LAW
- ▁ALREADY
- ▁MEAN
- ▁ROSE
- WARD
- ▁ITSELF
- ▁SOUL
- ▁SUDDENLY
- ▁AROUND
- RED
- ▁ANSWER
- ICAL
- ▁RA
- ▁WIND
- ▁FINE
- ▁WON
- ▁WHETHER
- ▁KNOWN
- BER
- NG
- ▁TA
- ▁CAPTAIN
- ▁EYE
- ▁PERSON
- ▁WOMEN
- ▁SORT
- ▁ASK
- ▁BROTHER
- ▁USED
- ▁HELD
- ▁BIG
- ▁RETURNED
- ▁STRANGE
- ▁BU
- ▁PER
- ▁FREE
- ▁EITHER
- ▁WITHIN
- ▁DOUBT
- ▁YEAR
- ▁CLEAR
- ▁SIGHT
- ▁GRA
- ▁LOST
- ▁KEPT
- ▁F
- PE
- ▁BAR
- ▁TOWN
- ▁SLEEP
- ARY
- ▁HAIR
- ▁FRIENDS
- ▁DREAM
- ▁FELLOW
- PER
- ▁DEEP
- QUE
- ▁BECAME
- ▁REAL
- ▁PAST
- ▁MAKING
- RING
- ▁COMP
- ▁ACT
- ▁BAD
- HO
- STER
- ▁YE
- ▁MEANS
- ▁RUN
- MEN
- ▁DAUGHTER
- ▁SENSE
- ▁CITY
- ▁SOMETIMES
- ▁TOWARDS
- ▁ROAD
- ▁SP
- ▁LU
- ▁READY
- ▁FOOT
- ▁COLD
- ▁SA
- ▁LETTER
- ▁ELSE
- ▁MAR
- ▁STA
- BE
- ▁TRUTH
- ▁LE
- BO
- ▁BUSINESS
- CHE
- ▁JOHN
- ▁SUBJECT
- ▁COURT
- ▁IDEA
- ILY
- ▁RIVER
- ATING
- ▁FAMILY
- HE
- ▁DIDN
- ▁GLAD
- ▁SEVERAL
- IAL
- ▁UNDERSTAND
- ▁SC
- ▁POSSIBLE
- ▁DIFFERENT
- ▁RETURN
- ▁ARMS
- ▁LOW
- ▁HOLD
- ▁TALK
- ▁RU
- ▁WINDOW
- ▁INTEREST
- ▁SISTER
- SON
- ▁SH
- ▁BLOOD
- ▁SAYS
- ▁CAP
- ▁DI
- ▁HUMAN
- ▁CAUSE
- NCE
- ▁THANK
- ▁LATE
- GO
- ▁CUT
- ▁ACROSS
- ▁STORY
- NT
- ▁COUNT
- ▁ABLE
- DY
- LEY
- ▁NUMBER
- ▁STAND
- ▁CHURCH
- ▁THY
- ▁SUPPOSE
- LES
- BLE
- OP
- ▁EFFECT
- BY
- ▁K
- ▁NA
- ▁SPOKE
- ▁MET
- ▁GREEN
- ▁HUSBAND
- ▁RESPECT
- โPA
- โFOLLOWED
- โREMEMBER
- โLONGER
- โAGE
- โTAKING
- โLINE
- โSEEM
- โHAPPY
- LAND
- EM
- โSTAY
- โPLAY
- โCOMMON
- โGA
- โBOOK
- โTIMES
- โOBJECT
- โSEVEN
- QUI
- DO
- UND
- โFL
- โPRETTY
- โFAIR
- WAY
- โWOOD
- โREACHED
- โAPPEARED
- โSWEET
- โFALL
- BA
- โPASS
- โSIGN
- โTREE
- IONS
- โGARDEN
- โILL
- โART
- โREMAIN
- โOPENED
- โBRIGHT
- โSTREET
- โTROUBLE
- โPAIN
- โCONTINUED
- โSCHOOL
- OUR
- โCARRIED
- โSAYING
- HA
- โCHANGE
- โFOLLOW
- โGOLD
- โSW
- โFEELING
- โCOMMAND
- โBEAR
- โCERTAINLY
- โBLUE
- โNE
- CA
- โWILD
- โACCOUNT
- โOUGHT
- UD
- โT
- โBREATH
- โWANTED
- โRI
- โHEAVEN
- โPURPOSE
- โCHARACTER
- โRICH
- โPE
- โDRESS
- OS
- FA
- โTH
- โENGLISH
- โCHANCE
- โSHIP
- โVIEW
- โTOWARD
- AK
- โJOY
- โJA
- โHAR
- โNEITHER
- โFORCE
- โUNCLE
- DER
- โPLAN
- โPRINCESS
- DI
- โCHIEF
- โHAT
- โLIVED
- โAB
- โVISIT
- โMOR
- TEN
- โWALL
- UC
- โMINE
- โPLEASURE
- โSMILE
- โFRONT
- โHU
- โDEAL
- OW
- โFURTHER
- GED
- โTRIED
- DA
- VA
- โNONE
- โENTERED
- โQUEEN
- โPAY
- โEL
- โEXCEPT
- โSHA
- โFORWARD
- โEIGHT
- โADDED
- โPUBLIC
- โEIGHTEEN
- โSTAR
- โHAPPENED
- โLED
- โWALKED
- โALTHOUGH
- โLATER
- โSPIRIT
- โWALK
- โBIT
- โMEET
- LIN
- โFI
- LT
- โMOUTH
- โWAIT
- โHOURS
- โLIVING
- โYOURSELF
- โFAST
- โCHA
- โHALL
- โBEYOND
- โBOAT
- โSECRET
- ENS
- โCHAIR
- RN
- โRECEIVED
- โCAT
- RESS
- โDESIRE
- โGENTLEMAN
- UGH
- โLAID
- EVER
- โOCCASION
- โWONDER
- โGU
- โPARTY
- DEN
- โFISH
- โSEND
- โNEARLY
- โTRY
- CON
- โSEEMS
- RS
- โBELL
- โBRA
- โSILENCE
- IG
- โGUARD
- โDIE
- โDOING
- โTU
- โCOR
- โEARLY
- โBANK
- โFIGURE
- IF
- โENGLAND
- โMARY
- โAFRAID
- LER
- โFO
- โWATCH
- โFA
- โVA
- โGRE
- โAUNT
- PED
- โSERVICE
- โJE
- โPEN
- โMINUTES
- โPAN
- โTREES
- NED
- โGLASS
- โTONE
- โPLEASE
- โFORTH
- โCROSS
- โEXCLAIMED
- โDREW
- โEAT
- โAH
- โGRAVE
- โCUR
- PA
- URE
- CENT
- โMILES
- โSOFT
- โAGO
- โPOSITION
- โWARM
- โLENGTH
- โNECESSARY
- โTHINKING
- โPICTURE
- โPI
- SHIP
- IBLE
- โHEAVY
- โATTENTION
- โDOG
- ABLY
- โSTANDING
- โNATURAL
- โAPPEAR
- OV
- โCAUGHT
- VO
- ISM
- โSPRING
- โEXPERIENCE
- โPAT
- OT
- โSTOPPED
- โREGARD
- โHARDLY
- โSELF
- โSTRENGTH
- โGREW
- โKNIGHT
- โOPINION
- โWIDE
- โINSTEAD
- โSOUTH
- โTRANS
- โCORNER
- โLEARN
- โISLAND
- โMI
- โTHIRD
- โSTE
- โSTRAIGHT
- โTEA
- โBOUND
- โSEEING
- โJU
- โDINNER
- โBEAUTY
- โPEACE
- AH
- โREP
- โSILENT
- โCRE
- ALLY
- RIC
- โSTEP
- โVER
- โJO
- GER
- โSITTING
- โTHIRTY
- โSAVE
- ENED
- โGLANCE
- โREACH
- โACTION
- โSAL
- โSAD
- โSTONE
- ITIES
- โFRENCH
- โSTRUCK
- โPAPER
- โWHATEVER
- โSUB
- โDISTANCE
- โWRONG
- โKNOWLEDGE
- โSAFE
- โSNOW
- โMUSIC
- โFIFTY
- RON
- โATTEMPT
- โGOVERNMENT
- TU
- โCROWD
- โBESIDES
- โLOVED
- โBOX
- โDIRECTION
- โTRAIN
- โNORTH
- โTHICK
- โGETTING
- AV
- โFLOOR
- โCOMPANY
- โBLOW
- โPLAIN
- TRO
- โBESIDE
- โROCK
- โIMMEDIATELY
- FI
- โSHADOW
- โSIT
- ORS
- ILE
- โDRINK
- โSPOT
- โDANGER
- โAL
- โSAINT
- โSLOWLY
- โPALACE
- IER
- โRESULT
- โPETER
- โFOREST
- โBELONG
- โSU
- โPAR
- RIS
- โTEARS
- โAPPEARANCE
- โGATE
- BU
- ITION
- โQUICKLY
- โQUIET
- โLONDON
- โSTART
- โBROWN
- TRA
- KIN
- โCONSIDER
- โBATTLE
- โANNE
- โPIECE
- โDIED
- โSUCCESS
- โLIPS
- โFILLED
- โFORGET
- โPOST
- IFIED
- โMARGARET
- โFOOD
- HAM
- โPLEASANT
- โFE
- โEXPRESSION
- โPOCKET
- โFRESH
- โWEAR
- TRI
- โBROKEN
- โLAUGHED
- GING
- โFOLLOWING
- WN
- IP
- โTOUCH
- โYOUTH
- ATIVE
- โLEG
- โWEEK
- โREMAINED
- โEASY
- NER
- RK
- โENTER
- โFIGHT
- โPLACED
- โTRAVEL
- โSIMPLE
- โGIRLS
- โWAITING
- โSTOP
- โWAVE
- AU
- โWISE
- โCAMP
- TURE
- UB
- โVE
- โOFFICE
- โGRAND
- โFIT
- โJUDGE
- UP
- MENTS
- โQUICK
- HI
- โFLO
- RIES
- VAL
- โCOMFORT
- โPARTICULAR
- โSTARTED
- โSUIT
- โNI
- โPALE
- โIMPOSSIBLE
- โHOT
- โCONVERSATION
- โSCENE
- โBOYS
- โWIN
- โBRE
- โSOCIETY
- โOUTSIDE
- โWRITE
- โEFFORT
- โTALKING
- โFORTUNE
- โNINE
- โWA
- โSINGLE
- โRULE
- โPORT
- โWINTER
- โCAST
- โCRA
- โHAPPEN
- โCRO
- โSHUT
- NING
- โGUN
- โNOBLE
- โBEGIN
- โPATH
- โSKY
- โWONDERFUL
- โSUDDEN
- โARMY
- โCHE
- โWORTH
- โMOUNTAIN
- โMIN
- AG
- โFLU
- โGRACE
- โCHAPTER
- โBELOW
- โRING
- โTURNING
- โIRON
- โTOP
- โAFTERNOON
- ORY
- โEVIL
- โTRUST
- โBOW
- โTRI
- โSAIL
- โCONTENT
- โHORSES
- ITE
- โSILVER
- AP
- โLAD
- โRUNNING
- โHILL
- โBEGINNING
- โMAD
- โHABIT
- GRA
- โCLOTHES
- โMORROW
- โCRY
- โFASHION
- โPRESENCE
- โZ
- FE
- โARRIVED
- โQUARTER
- โPERFECT
- โWO
- โTRA
- โUSUAL
- โNECK
- โMARRIED
- โSEAT
- โWI
- โGAR
- โSAND
- โSHORE
- โGIVING
- NY
- โPROBABLY
- โMINUTE
- โEXPECT
- โDU
- โSHOT
- โINSTANT
- โDEGREE
- โCOLOR
- โWEST
- RT
- โMARCH
- โBIRD
- โSHOWED
- โGREATER
- โSERIOUS
- โCARRY
- โCOVERED
- โFORMER
- โLOUD
- โMOVED
- โMASS
- โSEEK
- โCHO
- GEN
- โROMAN
- IB
- โMOON
- โBOARD
- โSTREAM
- โEASILY
- โWISHED
- โSEARCH
- โCOULDN
- โMONTHS
- โSICK
- LIE
- โDUTY
- โTWELVE
- โFAINT
- โSTRANGER
- โSURPRISE
- โKILL
- โLEAVING
- โJOURNEY
- โSCARCELY
- โRAISED
- โSPEAKING
- โTERRIBLE
- โTOM
- โFIELD
- โGAME
- โQUA
- โPROMISE
- โLIE
- โCONDITION
- โTRO
- โPERSONAL
- โTALL
- โSTICK
- โTHREW
- โMARRY
- โVAN
- โBURN
- โACCORDING
- โRISE
- โATTACK
- โSWORD
- โGUESS
- โTHOUGHTS
- โTHIN
- โTHROW
- โCALM
- SIDE
- โVILLAGE
- โDEN
- โANXIOUS
- โMER
- GI
- โEXPECTED
- โBALL
- โESPECIALLY
- โCHARGE
- โMEASURE
- ISE
- โNICE
- โTRYING
- โALLOW
- โSHARP
- โBREAD
- โHONOUR
- โHONOR
- โENTIRELY
- โBILL
- โBRI
- โWRITTEN
- โAR
- โBROKE
- โKILLED
- โMARK
- โVEN
- โLADIES
- โLEARNED
- โFLOWERS
- PLE
- โFORTY
- โOFFER
- โHAPPINESS
- โPRAY
- โCLASS
- โFER
- โPRINCIPLE
- GU
- โBOOKS
- โSHAPE
- โSUMMER
- โJACK
- โDRAW
- โGOLDEN
- โDECIDED
- โLEAD
- โUNLESS
- โHARM
- โLISTEN
- HER
- โSHOOK
- โINFLUENCE
- โPERFECTLY
- โMARRIAGE
- โBROAD
- โESCAPE
- โSTATES
- โMIDDLE
- โPLANT
- โMIL
- โMOVEMENT
- โNOISE
- โENEMY
- โHISTORY
- โBREAK
- ROUS
- โUNDERSTOOD
- โLATTER
- FER
- โCOMES
- โMERELY
- โSIMPLY
- WI
- โIMAGINE
- โLOWER
- โCONDUCT
- โBORN
- WA
- โYARD
- โKA
- โCLOSED
- โNOTE
- GA
- โSTRA
- RAN
- โEXIST
- EV
- โSPEECH
- โBITTER
- JO
- โMAKES
- โGRASS
- โREPLY
- โCHANGED
- โMON
- โLYING
- โDANCE
- โFINALLY
- โAMERICAN
- โENJOY
- โCONTAIN
- โMEANT
- USE
- โOBSERVED
- THER
- โLAUGH
- โAFTERWARDS
- โBEAT
- โRACE
- โEQUAL
- โRAIN
- PS
- โSTEPS
- โBENEATH
- โTAIL
- โTASTE
- IO
- EY
- โCHAR
- โGE
- GN
- TIN
- โGROW
- โTE
- IANS
- โMOVE
- โREPEATED
- โDRIVE
- TUR
- โSI
- CLOCK
- โBRAVE
- โMADAME
- โLOT
- โCASTLE
- โHI
- AND
- โFUTURE
- โRELATION
- โSORRY
- โHEALTH
- โDICK
- โR
- โBUILDING
- โEDGE
- โBLESS
- โSPITE
- WE
- โMIS
- โPRISONER
- โALLOWED
- โPH
- โCATCH
- MER
- ETH
- โCOAT
- โCOMPLETE
- โWOULDN
- โCREATURE
- โYELLOW
- โIMPORTANT
- โADD
- โPASSING
- โDARKNESS
- โCARRIAGE
- โMILL
- โFIFTEEN
- NCY
- โHUNG
- โOB
- โPLEASED
- โSPREAD
- โCURIOUS
- โWORSE
- โCIRCUMSTANCES
- โGI
- LAR
- โCAL
- โHY
- โMERE
- โJANE
- โEAST
- BI
- โCUP
- โBLIND
- โPASSION
- โDISCOVERED
- โNOTICE
- โREPORT
- โSPACE
- โPRESENTLY
- โSORROW
- โPACK
- โDIN
- CY
- โDRY
- โANCIENT
- โDRESSED
- โCOVER
- โVO
- โEXISTENCE
- โEXACTLY
- โBEAST
- โPROPER
- โDROPPED
- โCLEAN
- โCOLOUR
- โHOST
- โCHAMBER
- โFAITH
- LET
- โDETERMINED
- โPRIEST
- โSTORM
- โSKIN
- โDARE
- โPERSONS
- โPICK
- โNARROW
- โSUPPORT
- โPRIVATE
- โSMILED
- โCOUSIN
- โDRAWING
- โATTEND
- โCOOK
- โPREVENT
- โVARIOUS
- โBLA
- โFIXED
- โWEAK
- THE
- โHOLE
- โBOTTOM
- โNOBODY
- ADE
- โLEGS
- ITCH
- โINDIVIDUAL
- โEARS
- LIKE
- โADVANTAGE
- โFRANCE
- โBON
- โWINE
- โLIVES
- OD
- โWALLS
- โTIRED
- โSHOP
- โANIMAL
- โCRU
- โWROTE
- โROYAL
- โCONSIDERED
- โMORAL
- โCOMPANION
- โLOSE
- โISN
- โBAG
- โLAKE
- โINTER
- โCOM
- โLETTERS
- โLUCK
- โEAR
- โGERMAN
- โPET
- โSAKE
- โDROP
- โPAID
- โBREAKFAST
- โLABOR
- โDESERT
- โDECLARED
- โHUM
- โSTUDY
- โINSTANCE
- ONE
- โSOMEWHAT
- โCLOTH
- โSPECIAL
- โCOLONEL
- โSONG
- โMAIN
- โVALUE
- โPROUD
- โEXPRESS
- โNATION
- โHANDSOME
- โCONFESS
- โPU
- โPASSAGE
- โPERIOD
- โCUSTOM
- โHURT
- โSHOULDER
- โCHRIST
- ZA
- โRECEIVE
- โDIFFICULT
- โDEPEND
- โMEETING
- โCHI
- โGEN
- LIGHT
- โBELIEVED
- โSOCIAL
- โDIFFICULTY
- โGREATEST
- โDRAWN
- โGRANT
- โBIRDS
- โANGRY
- โHEAT
- UFF
- โDUE
- โPLACES
- โSIN
- โCOURAGE
- โEVIDENTLY
- โGENTLE
- โCRUEL
- โGEORGE
- โGRI
- โSERVANT
- โU
- โPURE
- OOK
- โKNOWS
- โKNOWING
- LF
- โWRITING
- โREMEMBERED
- โCU
- โHOLDING
- โTENDER
- โQUI
- โBURST
- โSURELY
- IGN
- โVALLEY
- โFU
- โBUTTER
- โSPOKEN
- โSTORE
- โDISC
- โCHRISTIAN
- โPARIS
- โHENRY
- โFINISHED
- โPROVE
- โFOOL
- โSOLDIERS
- โLANGUAGE
- โINSIDE
- โBAN
- โFALLEN
- ROW
- โMAL
- โBABY
- โSITUATION
- โWATCHED
- ANS
- โRUIN
- โGENTLEMEN
- โFRO
- โFANCY
- โACCEPT
- โSEASON
- โOURSELVES
- โSAN
- โSPEED
- IZED
- โCOOL
- โSERVE
- โVESSEL
- โWILLIAM
- โOBLIGED
- โGROUP
- FORM
- โGOES
- UOUS
- โLEAVES
- โPECULIAR
- โNEWS
- โVAIN
- โEVERYBODY
- โPIN
- UG
- โFORGOTTEN
- โFRA
- GAN
- โCAREFULLY
- โFLASH
- UCH
- โFUR
- โMURDER
- โDELIGHT
- โWAITED
- โRENDER
- โPROPERTY
- โNOTICED
- โROLL
- โKNOCK
- โEARNEST
- KI
- โHONEST
- โPROMISED
- โBAL
- AW
- โWALKING
- ANG
- โSQUARE
- โQUIETLY
- โCLOUD
- WOOD
- โFORMED
- โHIGHER
- โBUILT
- โFATE
- โTEACH
- MY
- โFALSE
- โYORK
- โDUST
- โCLIMB
- โFOND
- โGROWN
- โDESCEND
- โRAG
- โFRUIT
- โGENERALLY
- โOFFERED
- โER
- โNURSE
- POSE
- โSPENT
- โJOIN
- โSTATION
- โMEANING
- โSMOKE
- HOOD
- โROUGH
- JU
- โLIKELY
- โSURFACE
- โKE
- โMONTH
- โPOSSESSION
- โTONGUE
- โDUKE
- โNOSE
- โLAUGHING
- โWEATHER
- โWHISPERED
- โSYSTEM
- โLAWS
- DDLE
- โTOUCHED
- โTRADE
- LD
- โSURPRISED
- RIN
- โARCH
- โWEALTH
- FOR
- โTEMPER
- โFRANK
- โGAL
- โBARE
- โOPPORTUNITY
- โCLAIM
- โANIMALS
- โREV
- โCOST
- โWASH
- ZE
- โCORN
- โOPPOSITE
- โPOLICE
- โIDEAS
- LON
- โKEY
- โREADING
- โCOLLECT
- CHED
- โH
- โCROWN
- โTAR
- โSWIFT
- โSHOULDERS
- โICE
- โGRAY
- โSHARE
- โPREPARED
- โGRO
- โUND
- โTER
- โEMPTY
- CING
- โSMILING
- โAVOID
- โDIFFERENCE
- โEXPLAIN
- โPOUR
- โATTRACT
- โOPENING
- โWHEEL
- โMATERIAL
- โBREAST
- โSUFFERING
- โDISTINCT
- โBOOT
- โROW
- โFINGERS
- HAN
- โALTOGETHER
- โFAT
- โPAPA
- โBRAIN
- โASLEEP
- โGREY
- โSUM
- โGAS
- โWINDOWS
- โALIVE
- โPROCEED
- โFLOWER
- โLEAP
- โPUR
- โPIECES
- โALTER
- โMEMORY
- IENT
- โFILL
- โCLO
- โTHROWN
- โKINGDOM
- โRODE
- IUS
- โMAID
- โDIM
- โBAND
- โVIRTUE
- โDISH
- โGUEST
- โLOSS
- โCAUSED
- โMOTION
- โPOT
- โMILLION
- โFAULT
- โLOVELY
- โHERO
- PPING
- โUNITED
- โSPI
- SOME
- BRA
- โMOUNTAINS
- โNU
- โSATISFIED
- โDOLLARS
- โLOVER
- โCONCEAL
- โVAST
- โPULL
- โHATH
- โRUSH
- โJ
- โDESPAIR
- EX
- โHEIGHT
- โCE
- โBENT
- โPITY
- โRISING
- ATH
- โPRIDE
- โHURRY
- KA
- โSETTLED
- โJUSTICE
- โLIFTED
- PEN
- โSOLDIER
- โFINDING
- โREMARK
- โREGULAR
- โSTRUGGLE
- โMACHINE
- โSING
- โHURRIED
- โSUFFICIENT
- โREPRESENT
- โDOUBLE
- โALARM
- โSUPPER
- โDREADFUL
- โFORE
- ATOR
- โSTOCK
- โTIN
- โEXAMPLE
- โROOF
- โFLOW
- โSUPPOSED
- โPRESERV
- โL
- โLISTENED
- OC
- โSTO
- โSECURE
- โFRIGHTENED
- โDISTURB
- โEMOTION
- โSERVANTS
- โYO
- โBUY
- โFORCED
- โKITCHEN
- โTERROR
- โSTAIRS
- โSIXTY
- KER
- โORDINARY
- โDIRECTLY
- โHEADS
- โMETHOD
- โFORGIVE
- โAWFUL
- โREFLECT
- โGREATLY
- โTALKED
- โRIDE
- STONE
- โFAVOUR
- โWELCOME
- โSEIZED
- OU
- โCONTROL
- โORDERED
- โANGEL
- โUSUALLY
- โPOET
- โBOLD
- LINE
- โADVENTURE
- โWATCHING
- โFOLK
- โMISTRESS
- IZE
- โGROWING
- โCAVE
- โEVIDENCE
- โFINGER
- โSEVENTEEN
- โMOVING
- EOUS
- โDOESN
- โCOW
- โTYPE
- โBOIL
- โTALE
- โDELIVER
- โFARM
- โMONSIEUR
- โGATHERED
- โFEELINGS
- โRATE
- โREMARKED
- โPUTTING
- โMAT
- โCONTRARY
- โCRIME
- โPLA
- โCOL
- โNEARER
- TES
- โCIVIL
- โSHAME
- โLOOSE
- โDISCOVER
- โFLAT
- โTWICE
- โFAIL
- VIS
- โUNC
- EA
- โEUROPE
- โPATIENT
- โUNTO
- โSUFFER
- โPAIR
- โTREASURE
- OSE
- โEAGER
- โFLY
- โN
- โVAL
- โDAN
- โSALT
- โBORE
- BBE
- โARTHUR
- โAFFAIRS
- โSLOW
- โCONSIST
- โDEVIL
- LAN
- โAFFECTION
- โENGAGED
- โKISS
- โYA
- โOFFICER
- IFICATION
- โLAMP
- โPARTS
- HEN
- โMILK
- โPROCESS
- โGIFT
- โPULLED
- โHID
- โRAY
- โEXCELLENT
- โIMPRESSION
- โAUTHORITY
- โPROVED
- โTELLING
- TTE
- โTOWER
- โCONSEQUENCE
- โFAVOR
- โFLEW
- โCHARLES
- ISTS
- โADDRESS
- โFAMILIAR
- โLIMIT
- โCONFIDENCE
- โRARE
- โWEEKS
- โWOODS
- โINTENTION
- โDIRECT
- โPERFORM
- โSOLEMN
- โDISTANT
- โIMAGE
- โPRESIDENT
- โFIRM
- โINDIAN
- โRANK
- โLIKED
- โAGREE
- โHOUSES
- โWIL
- โMATTERS
- โPRISON
- โMODE
- โMAJOR
- โWORKING
- โSLIP
- โWEIGHT
- โAWARE
- โBUSY
- โLOOKS
- โWOUND
- โTHOR
- โBATH
- โEXERCISE
- โSIMILAR
- โWORE
- โAMOUNT
- โQUESTIONS
- โVIOLENT
- โEXCUSE
- โASIDE
- โTUR
- โDULL
- OF
- โEMPEROR
- โNEVERTHELESS
- โSHOUT
- โEXPLAINED
- โSIZE
- โACCOMPLISH
- FORD
- CAN
- โMISTAKE
- โINSTANTLY
- โSMOOTH
- โSTRIKE
- โBOB
- ISED
- โHORROR
- โSCIENCE
- โPROTEST
- โMANAGE
- โOBEY
- โNECESSITY
- โSPLENDID
- โPRESS
- โINTERESTING
- โRELIGION
- โUNKNOWN
- โFIERCE
- โDISAPPEARED
- โHOLY
- โHATE
- โPLAYED
- โLIN
- โNATURALLY
- โDROVE
- โLOUIS
- TIES
- โBRAND
- INESS
- RIE
- โSHOOT
- โCONSENT
- โSEATED
- โLINES
- GUE
- โAGREED
- โCIRCLE
- โSTIR
- โSTREETS
- โTASK
- โRID
- โPRODUCED
- โACCIDENT
- โWITNESS
- โLIBERTY
- โDETAIL
- โMINISTER
- โPOWERFUL
- โSAVAGE
- โSIXTEEN
- โPRETEND
- โCOAST
- โSQU
- โUTTER
- โNAMED
- โCLEVER
- โADMIT
- โCOUPLE
- โWICKED
- โMESSAGE
- โTEMPLE
- โSTONES
- โYESTERDAY
- โHILLS
- DAY
- โSLIGHT
- โDIAMOND
- โPOSSIBLY
- โAFFAIR
- โORIGINAL
- โHEARING
- โWORTHY
- โSELL
- NEY
- ICK
- โCOTTAGE
- โSACRIFICE
- โPROGRESS
- โSHOCK
- โDESIGN
- โSOUGHT
- โPIT
- โSUNDAY
- โOTHERWISE
- โCABIN
- โPRAYER
- โDWELL
- โGAIN
- โBRIDGE
- โPARTICULARLY
- โYIELD
- โTREAT
- RIGHT
- โOAK
- โROPE
- WIN
- โORDERS
- โSUSPECT
- โEDWARD
- AB
- โELEVEN
- โTEETH
- โOCCURRED
- DDING
- โAMERICA
- โFALLING
- โLION
- โDEPART
- โKEEPING
- โDEMAND
- โPAUSED
- โCEASED
- INA
- โFUN
- โCHEER
- โPARDON
- โNATIVE
- LUS
- LOW
- โDOGS
- โREQUIRED
- ILITY
- โELECT
- โENTERTAIN
- ITUDE
- โHUGE
- โCARRYING
- โBLU
- โINSIST
- โSATISFACTION
- โHUNT
- โCOUNTENANCE
- โUPPER
- โMAIDEN
- โFAILED
- โJAMES
- โFOREIGN
- โGATHER
- โTEST
- BOARD
- โTERMS
- โSILK
- โBEG
- โBROTHERS
- โPAGE
- โKNEES
- โSHOWN
- โPROFESSOR
- โMIGHTY
- โDEFI
- โCHARM
- โREQUIRE
- โLOG
- MORE
- โPROOF
- โPOSSESSED
- โSOFTLY
- โUNFORTUNATE
- โPRICE
- โSEVERE
- โSINGING
- โSTAGE
- โFREEDOM
- โSHOUTED
- โFARTHER
- โMAJESTY
- โPREVIOUS
- โGUIDE
- โMATCH
- โCHEST
- โINTENDED
- โBI
- โEXCITEMENT
- โOFFICERS
- โSUR
- โSHAKE
- โSENTIMENT
- โGENTLY
- โSUCCEEDED
- โMENTION
- โLOCK
- โACQUAINTANCE
- โIMAGINATION
- โPHYSICAL
- โLEADING
- โSLAVE
- โCART
- โPOINTED
- โSTEAM
- โSHADE
- โPIPE
- โBASE
- โINVENT
- โALAS
- โWORKED
- โREGRET
- โBUR
- โFAITHFUL
- โMENTIONED
- โRECORD
- โCOMPLAIN
- โSUPERIOR
- โBAY
- โPAL
- EMENT
- UE
- โSEVENTY
- โHOTEL
- โSHEEP
- โMEAL
- โADVICE
- โHIDDEN
- โDEMANDED
- โCONSCIOUS
- โBROW
- โPOSSESS
- โFOURTH
- โEVENTS
- โFRI
- โPRAISE
- โADVANCED
- โRESOLVED
- โSTUFF
- โCHEERFUL
- โBIRTH
- โGRIEF
- โAFFORD
- โFAIRY
- โWAKE
- โSIDES
- โSUBSTANCE
- โARTICLE
- โLEVEL
- โMIST
- โJOINED
- โPRACTICAL
- โCLEARLY
- โTRACE
- โAWAKE
- โOBSERVE
- โBASKET
- โLACK
- VILLE
- โSPIRITS
- โEXCITED
- โABANDON
- โSHINING
- โFULLY
- โCALLING
- โCONSIDERABLE
- โSPRANG
- โMILE
- โDOZEN
- โPEA
- โDANGEROUS
- โWIT
- โJEW
- โPOUNDS
- โFOX
- โINFORMATION
- โLIES
- โDECK
- NNY
- โPAUL
- โSTARS
- โANGER
- โSETTLE
- โWILLING
- โADAM
- โFACES
- โSMITH
- โIMPORTANCE
- โSTRAIN
- WAR
- โSAM
- โFEATHER
- โSERVED
- โAUTHOR
- โPERCEIVED
- โFLAME
- โDIVINE
- โTRAIL
- โANYBODY
- โSIGH
- โDELICATE
- KY
- โFOLD
- โHAVEN
- โDESIRED
- โCURIOSITY
- โPRACTICE
- โCONSIDERATION
- โABSOLUTELY
- โCITIZEN
- โBOTTLE
- โINTERESTED
- โMEAT
- โOCCUPIED
- โCHOOSE
- โTHROAT
- ETTE
- โCANDLE
- โDAWN
- โPROTECT
- โSENTENCE
- IED
- โROCKS
- โPORTION
- โAPPARENTLY
- โPRESENTED
- โTIGHT
- โACTUALLY
- โDYING
- โHAM
- โDAILY
- โSUFFERED
- โPOLITICAL
- โBODIES
- โMODERN
- โCOMPLETELY
- โSOONER
- TAN
- โPROP
- โADVANCE
- โREFUSED
- โFARMER
- โPOLITE
- โTHUNDER
- โBRIEF
- โELSIE
- โSAILOR
- โSUGGESTED
- โPLATE
- โAID
- โFLESH
- โWEEP
- โBUCK
- โANTI
- โOCEAN
- โSPEND
- WELL
- โODD
- โGOVERNOR
- โENTRANCE
- โSUSPICION
- โSTEPPED
- โRAPIDLY
- โCHECK
- โHIDE
- โFLIGHT
- โCLUB
- โENTIRE
- โINDIANS
- ASH
- โCAPITAL
- โMAMMA
- HAR
- โCORRECT
- โCRACK
- โSENSATION
- โWORST
- โPACE
- โMIDST
- โAUGUST
- โPROPORTION
- โINNOCENT
- LINESS
- โREGARDED
- โDRIVEN
- ORD
- โHASTE
- โEDUCATION
- โEMPLOY
- โTRULY
- โINSTRUMENT
- โMAG
- โFRAME
- โFOOLISH
- โTAUGHT
- โHANG
- โARGUMENT
- โNINETEEN
- โELDER
- โNAY
- โNEEDED
- โNEIGHBOR
- โINSTRUCT
- โPAPERS
- โREWARD
- โEQUALLY
- โFIELDS
- โDIG
- HIN
- โCONDITIONS
- JA
- โSPAR
- โREQUEST
- โWORN
- โREMARKABLE
- โLOAD
- โWORSHIP
- โPARK
- โKI
- โINTERRUPTED
- โSKILL
- โTERM
- LAC
- โCRITIC
- โDISTRESS
- โBELIEF
- โSTERN
- IGHT
- โTRACK
- โHUNTING
- โJEWEL
- โGRADUALLY
- โGLOW
- โRUSHED
- โMENTAL
- โVISITOR
- โPICKED
- โBEHOLD
- โEXPRESSED
- โRUB
- โSKI
- ARTAGNAN
- โMOREOVER
- โOPERATION
- โCAREFUL
- โKEEN
- โASSERT
- โWANDER
- โENEMIES
- โMYSTERIOUS
- โDEPTH
- โPREFER
- โCROSSED
- โCHARMING
- โDREAD
- โFLOUR
- โROBIN
- โTRE
- โRELIEF
- โINQUIRED
- โAPPLE
- โHENCE
- โWINGS
- โCHOICE
- โJUD
- OO
- โSPECIES
- โDELIGHTED
- IUM
- โRAPID
- โAPPEAL
- โFAMOUS
- โUSEFUL
- โHELEN
- โNEWSPAPER
- โPLENTY
- โBEARING
- โNERVOUS
- โPARA
- โURGE
- โROAR
- โWOUNDED
- โCHAIN
- โPRODUCE
- โREFLECTION
- โMERCHANT
- โQUARREL
- โGLORY
- โBEGUN
- โBARON
- CUS
- โQUEER
- โMIX
- โGAZE
- โWHISPER
- โBURIED
- โDIV
- โCARD
- โFREQUENTLY
- โTIP
- โKNEE
- โREGION
- โROOT
- โLEST
- โJEALOUS
- CTOR
- โSAVED
- โASKING
- โTRIP
- QUA
- โUNION
- HY
- โCOMPANIONS
- โSHIPS
- โHALE
- โAPPROACHED
- โHARRY
- โDRUNK
- โARRIVAL
- โSLEPT
- โFURNISH
- HEAD
- โPIG
- โABSENCE
- โPHIL
- โHEAP
- โSHOES
- โCONSCIOUSNESS
- โKINDLY
- โEVIDENT
- โSCAR
- โDETERMIN
- โGRASP
- โSTEAL
- โOWE
- โKNIFE
- โPRECIOUS
- โELEMENT
- โPROCEEDED
- โFEVER
- โLEADER
- โRISK
- โEASE
- โGRIM
- โMOUNT
- โMEANWHILE
- โCENTURY
- OON
- โJUDGMENT
- โAROSE
- โVISION
- โSPARE
- โEXTREME
- โCONSTANT
- โOBSERVATION
- โTHRUST
- โDELAY
- โCENT
- โINCLUD
- โLIFT
- โADMIRE
- โISSUE
- โFRIENDSHIP
- โLESSON
- โPRINCIPAL
- โMOURN
- โACCEPTED
- โBURNING
- โCAPABLE
- โEXTRAORDINARY
- โSANG
- โREMOVED
- โHOPED
- โHORN
- โALICE
- โMUD
- โAPARTMENT
- โFIGHTING
- โBLAME
- โTREMBLING
- โSOMEBODY
- โANYONE
- โBRIDE
- โREADER
- โROB
- โEVERYWHERE
- โLABOUR
- โRECALL
- โBULL
- โHIT
- โCOUNCIL
- โPOPULAR
- โCHAP
- โTRIAL
- โDUN
- โWISHES
- โBRILLIANT
- โASSURED
- โFORGOT
- โCONTINUE
- โACKNOWLEDG
- โRETREAT
- โINCREASED
- โCONTEMPT
- โGRANDFATHER
- โSYMPATHY
- โGHOST
- โSTRETCHED
- โCREATURES
- โCAB
- โHIND
- โPLAYING
- โMISERABLE
- โMEMBERS
- โKINDNESS
- โHIGHEST
- โPRIM
- โKISSED
- โDESERVE
- โHUT
- โBEGGED
- โEIGHTY
- โCLOSELY
- โWONDERED
- โMILITARY
- โREMIND
- โACCORDINGLY
- โLARGER
- โMAINTAIN
- โENGINE
- โMOTIVE
- โDESTROY
- โSTRIP
- โHANS
- โAHEAD
- โINFINITE
- โPROMPT
- โINFORMED
- TTLE
- โPEER
- โPRESSED
- โTRAP
- โSOMEWHERE
- โBOUGHT
- โVISIBLE
- โASHAMED
- โTEAR
- โNEIGHBOUR
- โCONSTITUTION
- โINTELLIGENCE
- โPROFESSION
- โHUNGRY
- RIDGE
- โSMELL
- โSTORIES
- โLISTENING
- โAPPROACH
- โSTRING
- โEXPLANATION
- โIMMENSE
- โRELIGIOUS
- โTHROUGHOUT
- โHOLLOW
- โAWAIT
- โFLYING
- โSCREAM
- โACTIVE
- โRUM
- โPRODUCT
- โUNHAPPY
- โVAGUE
- ARIES
- โELIZABETH
- โSTUPID
- โDIGNITY
- โISABEL
- GAR
- โBRO
- โPITCH
- โCOMRADE
- โSTIFF
- โRECKON
- โSOLD
- โSPARK
- โSTRO
- โCRYING
- โMAGIC
- โREPEAT
- PORT
- โMARKED
- โCOMFORTABLE
- โPROJECT
- โBECOMING
- โPARENTS
- โSHELTER
- โSTOLE
- โHINT
- โNEST
- โTRICK
- โTHOROUGHLY
- โHOSPITAL
- โWEAPON
- โROME
- โSTYLE
- โADMITTED
- โSAFETY
- FIELD
- โUNDERSTANDING
- โTREMBLE
- โPRINT
- โSLAVES
- โWEARY
- โARTIST
- โCREDIT
- BURG
- โCONCLUSION
- โSELDOM
- โUNUSUAL
- โCLOUDS
- โUNABLE
- โGAY
- โHANGING
- โSCR
- โBOWED
- โDAVID
- โVOL
- โPUSHED
- โESCAPED
- MOND
- โWARN
- โBETRAY
- โEGGS
- โPLAINLY
- โEXHIBIT
- โDISPLAY
- โMEMBER
- โGRIN
- โPROSPECT
- โBRUSH
- โBID
- โSUCCESSFUL
- โEXTENT
- โPERSUADE
- โMID
- โMOOD
- โARRANGED
- โUNIVERSAL
- โJIM
- โSIGNAL
- โWHILST
- โPHILIP
- โWOLF
- RATE
- โEAGERLY
- โBILLY
- โRETURNING
- โCONSCIENCE
- โFORTUNATE
- โFEMALE
- โGLEAM
- โHASTILY
- โPROVIDED
- โOBTAIN
- โINSTINCT
- โCONCERNED
- โCONCERNING
- โSOMEHOW
- โPINK
- โRAGE
- โACCUSTOMED
- โUNCONSCIOUS
- โADVISE
- โBRANCHES
- โTINY
- โREFUSE
- โBISHOP
- โSUPPLY
- โPEASANT
- โLAWYER
- โWASTE
- โCONNECTION
- โDEVELOP
- โCORRESPOND
- โPLUM
- โNODDED
- โSLIPPED
- โEU
- โCONSTANTLY
- CUM
- MMED
- โFAIRLY
- HOUSE
- โKIT
- โRANG
- โFEATURES
- โPAUSE
- โPAINFUL
- โJOE
- โWHENCE
- โLAUGHTER
- โCOACH
- โCHRISTMAS
- โEATING
- โWHOLLY
- โAPART
- โSUPER
- โREVOLUTION
- โLONELY
- โCHEEKS
- โTHRONE
- โCREW
- โATTAIN
- โESTABLISHED
- TIME
- โDASH
- โFRIENDLY
- โOPERA
- โEARL
- โEXHAUST
- โCLIFF
- โREVEAL
- โADOPT
- โCENTRE
- โMERRY
- โSYLVIA
- โIDEAL
- โMISFORTUNE
- โFEAST
- โARAB
- โNUT
- โFETCH
- โFOUGHT
- โPILE
- โSETTING
- โSOURCE
- โPERSIST
- โMERCY
- โBARK
- โLUC
- โDEEPLY
- โCOMPARE
- โATTITUDE
- โENDURE
- โDELIGHTFUL
- โBEARD
- โPATIENCE
- โLOCAL
- โUTTERED
- โVICTORY
- โTREATED
- โSEPARATE
- โWAG
- โDRAGG
- โTITLE
- โTROOPS
- โTRIUMPH
- โREAR
- โGAINED
- โSINK
- โDEFEND
- โTIED
- โFLED
- โDARED
- โINCREASE
- โPOND
- โCONQUER
- โFOREHEAD
- โFAN
- โANXIETY
- โENCOUNTER
- โSEX
- โHALT
- โSANK
- โCHEEK
- โHUMBLE
- โWRITER
- โEMPLOYED
- โDISTINGUISHED
- โRAISE
- โWHIP
- โGIANT
- โRANGE
- โOBTAINED
- โFLAG
- โMAC
- โJUMPED
- โDISCOVERY
- โNATIONAL
- โCOMMISSION
- โPOSITIVE
- โLOVING
- โEXACT
- โMURMURED
- โGAZED
- โREFER
- โCOLLEGE
- โENCOURAGE
- โNOVEL
- โCLOCK
- โMORTAL
- โROLLED
- โRAT
- IZING
- โGUILTY
- โVICTOR
- WORTH
- โPRA
- โAPPROACHING
- โRELATIVE
- โESTATE
- โUGLY
- โMETAL
- โROBERT
- โTENT
- โADMIRATION
- โFOURTEEN
- โBARBAR
- โWITCH
- ELLA
- โCAKE
- โSHONE
- โMANAGED
- โVOLUME
- โGREEK
- โDANCING
- โWRETCHED
- โCONDEMN
- โMAGNIFICENT
- โCONSULT
- J
- โORGAN
- โFLEET
- โARRANGEMENT
- โINCIDENT
- โMISERY
- โARROW
- โSTROKE
- โASSIST
- โBUILD
- โSUCCEED
- โDESPERATE
- โWIDOW
- UDE
- โMARKET
- โWISDOM
- โPRECISE
- โCURRENT
- โSPOIL
- โBADE
- โWOODEN
- โRESIST
- โOBVIOUS
- โSENSIBLE
- FALL
- โADDRESSED
- โGIL
- โCOUNSEL
- โPURCHASE
- โSELECT
- โUSELESS
- โSTARED
- โARREST
- โPOISON
- โFIN
- โSWALLOW
- โBLOCK
- โSLID
- โNINETY
- โSPORT
- โPROVIDE
- โANNA
- โLAMB
- โINTERVAL
- โJUMP
- โDESCRIBED
- โSTRIKING
- โPROVISION
- โPROPOSED
- โMELANCHOLY
- โWARRIOR
- โSUGGEST
- โDEPARTURE
- โBURDEN
- โLIMB
- โTROUBLED
- โMEADOW
- โSACRED
- โSOLID
- โTRU
- โLUCY
- โRECOVER
- โENERGY
- โPOWDER
- โRESUMED
- โINTENSE
- โBRITISH
- โSTRAW
- โAGREEABLE
- โEVERYONE
- โCONCERN
- โVOYAGE
- โSOUTHERN
- โBOSOM
- โUTTERLY
- โFEED
- โESSENTIAL
- โCONFINE
- โHOUSEHOLD
- โEXTREMELY
- โWONDERING
- โLIST
- โPINE
- PHA
- โEXPERIMENT
- โJOSEPH
- โMYSTERY
- โRESTORE
- โBLUSH
- FOLD
- โCHOSEN
- โINTELLECT
- โCURTAIN
- OLOGY
- โMOUNTED
- โLAP
- โEPI
- โPUNISH
- โWEDDING
- โRECOGNIZED
- โDRIFT
- โPREPARATION
- โRESOLUTION
- โOPPRESS
- โFIX
- โVICTIM
- OGRAPH
- โSUMMON
- โJULIA
- โFLOOD
- โWAL
- ULATION
- โSLIGHTLY
- โLODGE
- โWIRE
- โCONFUSION
- โUNEXPECTED
- โCONCEIVE
- โPRIZE
- โJESUS
- โADDITION
- โRUDE
- โFATAL
- โCARELESS
- โPATCH
- โKO
- โCATHERINE
- โPARLIAMENT
- โPROFOUND
- โALOUD
- โRELIEVE
- โPUSH
- ABILITY
- โACCOMPANIED
- โSOVEREIGN
- โSINGULAR
- โECHO
- โCOMPOSED
- โSHAKING
- ATORY
- โASSISTANCE
- โTEACHER
- โHORRIBLE
- โSTRICT
- โVERSE
- โPUNISHMENT
- โGOWN
- โMISTAKEN
- โVARI
- โSWEPT
- โGESTURE
- โBUSH
- โSTEEL
- โAFFECTED
- โDIRECTED
- โSURROUNDED
- โABSURD
- โSUGAR
- โSCRAP
- โIMMEDIATE
- โSADDLE
- โTY
- โARISE
- โSIGHED
- โEXCHANGE
- โIMPATIENT
- โSNAP
- โEMBRACE
- โDISEASE
- โPROFIT
- โRIDING
- โRECOVERED
- โGOVERN
- โSTRETCH
- โCONVINCED
- โLEANING
- โDOMESTIC
- โCOMPLEX
- โMANIFEST
- โINDULGE
- โGENIUS
- โAGENT
- โVEIL
- โDESCRIPTION
- โINCLINED
- โDECEIVE
- โDARLING
- โREIGN
- HU
- โENORMOUS
- โRESTRAIN
- โDUTIES
- BURY
- TTERED
- โPOLE
- โENABLE
- โEXCEPTION
- โINTIMATE
- โCOUNTESS
- โTRIBE
- โHANDKERCHIEF
- โMIDNIGHT
- โPROBLEM
- โTRAMP
- โOIL
- CAST
- โCRUSH
- โDISCUSS
- โRAM
- โTROT
- โUNRE
- โWHIRL
- โLOCKED
- โHORIZON
- โOFFICIAL
- โSCHEME
- โDROWN
- โPIERRE
- โPERMITTED
- โCONNECTED
- โASSURE
- โCOCK
- โUTMOST
- โDEVOTED
- โRELI
- โSUFFICIENTLY
- โINTELLECTUAL
- โCARPET
- โOBJECTION
- โAFTERWARD
- โREALITY
- โNEGRO
- โRETAIN
- โASCEND
- โCEASE
- โKATE
- โMARVEL
- KO
- โBOND
- MOST
- โCOAL
- GATE
- โIGNORANT
- โBREAKING
- โTWIN
- โASTONISHMENT
- โCOFFEE
- โJAR
- โCITIES
- โORIGIN
- โEXECUT
- โFINAL
- โINHABITANTS
- โSTABLE
- โCHIN
- โPARTIES
- โPLUNGE
- โGENEROUS
- โDESCRIBE
- โANNOUNCED
- โMERIT
- โREVERE
- โERE
- ACIOUS
- ZI
- โDISAPPOINT
- โSUGGESTION
- โDOUBTLESS
- โTRUNK
- โSTAMP
- โJOB
- โAPPOINTED
- โDIVIDED
- โACQUAINTED
- CHI
- โABSOLUTE
- โFEARFUL
- โPRIVILEGE
- โCRAFT
- โSTEEP
- โHUNTER
- โFORBID
- โMODEST
- โENDEAVOUR
- โSWEEP
- โBEHELD
- โABSORB
- โCONSTRUCT
- โEMPIRE
- โEXPEDITION
- โERECT
- โOFFEND
- โINTEND
- โPERMIT
- โDESTROYED
- โCONTRACT
- โTHIRST
- โWAGON
- โEVA
- โGLOOM
- โATMOSPHERE
- โRESERVE
- โVOTE
- โGER
- โNONSENSE
- โPREVAIL
- โQUALITY
- โCLASP
- โCONCLUDED
- โRAP
- โKATY
- โETERNAL
- โMUTTERED
- โNEGLECT
- โSQUIRE
- โCREEP
- LOCK
- โELECTRIC
- โHAY
- โEXPENSE
- โSCORN
- โRETIRED
- โSTOUT
- โMURMUR
- โSHARPLY
- โDISTRICT
- โLEAF
- โFAILURE
- WICK
- โJEAN
- โNUMEROUS
- โINFANT
- โREALIZED
- โTRAVELLER
- โHUNGER
- โJUNE
- โMUN
- โRECOMMEND
- โCREP
- ZZLE
- โRICHARD
- WORK
- โMONTE
- โPREACH
- โPALM
- AVI
- โANYWHERE
- โDISPOSITION
- โMIRROR
- โVENTURE
- โPOUND
- โCIGAR
- โINVITED
- โBENCH
- โPROTECTION
- โBENEFIT
- โTHOMAS
- โCLERK
- โREPROACH
- โUNIFORM
- โGENERATION
- โSEAL
- โCOMPASS
- โWARNING
- โEXTENDED
- โDIFFICULTIES
- โMAYBE
- โGROAN
- โAFFECT
- โCOMB
- โEARN
- โWESTERN
- โIDLE
- โSCORE
- โTAP
- โASTONISHED
- โINTRODUCED
- โLEISURE
- โLIEUTENANT
- โVIOLENCE
- โFIRMLY
- โMONSTER
- โUR
- โPROPERLY
- โTWIST
- โPIRATE
- โROBBER
- โBATTER
- โWEPT
- โLEANED
- โFOG
- โORNAMENT
- โANDREW
- โBUSHES
- โREPUBLIC
- โCONFIDENT
- โLEAN
- โDART
- โSTOOP
- โCURL
- โCOUNTER
- โNORTHERN
- โPEARL
- โNEAREST
- โFRANCIS
- โWANDERING
- โFREQUENT
- โSTARTLED
- โSTATEMENT
- โOCCUR
- โBLOOM
- โNERVE
- โINSPECT
- โINDUCE
- โFLATTER
- โDATE
- โAMBITION
- โSLOPE
- โMALE
- โMADAM
- โMONK
- โRENT
- โCONFIRM
- โINVESTIGAT
- โRABBIT
- โREGIMENT
- โSUBMIT
- โSPELL
- โFURIOUS
- โRAIL
- โBESTOW
- โRALPH
- โSCATTERED
- โCOMPELLED
- โTHREAD
- โCHILL
- โDENY
- โPRONOUNC
- โMANKIND
- โCATTLE
- โEXECUTION
- โREBEL
- โSUPREME
- โVALUABLE
- โLIKEWISE
- โCONVEY
- โTIDE
- โGLOOMY
- โCOIN
- โACTUAL
- โTAX
- โPROVINCE
- โGRATEFUL
- โSPIRITUAL
- โVANISHED
- โDIANA
- โHAUNT
- โDRAGON
- โCRAWL
- โCHINA
- โGRATITUDE
- โNEAT
- โFINISH
- โINTENT
- โFRIGHT
- โEMBARRASS
- โTHIRTEEN
- โRUTH
- โSLIGHTEST
- โDEVELOPMENT
- โINTERVIEW
- โSPECTACLE
- โBROOK
- VIE
- โWEAKNESS
- โAUDIENCE
- โCONSEQUENTLY
- โABROAD
- โASPECT
- โPAINTED
- โRELEASE
- โINSULT
- โSOOTH
- โDISAPPOINTMENT
- โEMERG
- โBRIG
- โESTEEM
- โINVITATION
- โPASSENGER
- โPUBLISH
- โPIANO
- โIRISH
- โDESK
- โBEATEN
- โFIFTH
- โIMPULSE
- โSWEAR
- โEATEN
- โPURPLE
- โCOMMITTED
- โCOUNTRIES
- โPERCEIVE
- ISON
- โCELEBRAT
- โGRANDMOTHER
- โSHUDDER
- โSUNSHINE
- โSPANISH
- โHITHERTO
- โMARILLA
- โSNAKE
- โMOCK
- โINTERFERE
- โWALTER
- โAMID
- โMARBLE
- โMISSION
- TERIOR
- โDRIVING
- โFURNITURE
- โSTEADY
- โCIRCUMSTANCE
- โINTERPRET
- โENCHANT
- โERROR
- โCONVICTION
- โHELPLESS
- โMEDICINE
- โQUALITIES
- โITALIAN
- โHASTENED
- โOCCASIONALLY
- โPURSUED
- โHESITATED
- โINDEPENDENT
- โOLIVER
- โLINGER
- UX
- โEXAMINED
- โREPENT
- โPHYSICIAN
- โCHASE
- โBELOVED
- โATTACHED
- โFLORENCE
- โHONEY
- โMOUSE
- โCRIES
- โBAKE
- โPOEM
- โDESTRUCTION
- โFULFIL
- โMESSENGER
- โTRISTRAM
- โFANCIED
- โEXCESS
- โCURSE
- โCHU
- โQUANTITY
- โTHORNTON
- โCREATED
- โCONTINUALLY
- โLIGHTNING
- โBORNE
- โTOTAL
- โDISPOSED
- โRIFLE
- โPOLLY
- โGOAT
- โBACKWARD
- โVIRGINIA
- โKICK
- โPERIL
- โQUO
- โGLORIOUS
- โMULTITUDE
- โLEATHER
- โABSENT
- โDEMON
- โDEBT
- โTORTURE
- โACCORD
- โMATE
- โCATHOLIC
- โPILL
- โLIBRARY
- โPURSUIT
- โSHIRT
- โDEAREST
- โCOLLAR
- โBEACH
- โROBE
- โDECLARE
- โBRANCH
- โTEMPT
- โSTEADILY
- โDISGUST
- โSILLY
- โARRIVE
- โDRANK
- โLEVI
- โCOMMUNICAT
- โRACHEL
- โWASHINGTON
- โRESIGN
- โMEANTIME
- โLACE
- โENGAGEMENT
- โQUIVER
- โSEPARATED
- โDISCUSSION
- โVENTURED
- โSURROUNDING
- โPOLISH
- โNAIL
- โSWELL
- โJOKE
- โLINCOLN
- โSTUDENT
- โGLITTER
- โRUSSIAN
- โREADILY
- โCHRIS
- โPOVERTY
- โDISGRACE
- โCHEESE
- โHEAVILY
- โSCALE
- โSTAFF
- โENTREAT
- โFAREWELL
- โLUNCH
- โPEEP
- โMULE
- โSOMEONE
- โDISAPPEAR
- โDECISION
- โPISTOL
- โPUN
- โSPUR
- โASSUMED
- โEXTEND
- โENTHUSIASM
- โDEFINITE
- โUNDERTAKE
- โCOMMITTEE
- โSIMON
- โFENCE
- โAPPLIED
- โRELATED
- โVICE
- โUNPLEASANT
- โPROBABLE
- โPROCURE
- โFROWN
- โCLOAK
- โHUMANITY
- โFAMILIES
- โPHILOSOPHER
- โDWARF
- โOVERCOME
- โDEFEAT
- โFASTENED
- โMARSH
- โCLASSES
- โTOMB
- โGRACIOUS
- โREMOTE
- โCELL
- โSHRIEK
- โRESCUE
- โPOOL
- โORGANIZ
- โCHOSE
- โCUTTING
- โCOWARD
- โBORDER
- โDIRTY
- โMONKEY
- โHOOK
- โCHUCK
- โEMILY
- โJEST
- โPLAC
- โWEIGH
- โASSOCIATE
- โGLIMPSE
- โSTUCK
- โBOLT
- โMURDERER
- โPONY
- โDISTINGUISH
- โINSTITUTION
- โCUNNING
- โCOMPLIMENT
- โAPPETITE
- โREPUTATION
- โFEEBLE
- โKIN
- โSERIES
- โGRACEFUL
- โPLATFORM
- โBREEZE
- โPHRASE
- โCLAY
- MONT
- โRATTL
- โOPPOSITION
- โLANE
- โBOAST
- โGROWTH
- โINCLINATION
- โBEHAVE
- โSUSAN
- โDISTINCTION
- โDISLIKE
- โNICHOLAS
- โSATISFY
- โDRAMA
- โELBOW
- โGAZING
- โCONSUM
- โSPIN
- โOATH
- โCHANNEL
- โCHARACTERISTIC
- โSPEAR
- โSLAIN
- โSAUCE
- โFROG
- โCONCEPTION
- โTIMID
- โZEAL
- โAPPARENT
- SHIRE
- โCENTER
- โVARIETY
- โDUSK
- โAPT
- โCOLUMN
- โREVENGE
- โRIVAL
- โIMITAT
- โPASSIONATE
- โSELFISH
- โNORMAN
- โREPAIR
- โTHRILL
- โTREATMENT
- โROSA
- โMARTIN
- โINDIFFERENT
- โTHITHER
- โGALLANT
- โPEPPER
- โRECOLLECT
- โVINE
- โSCARCE
- โSHIELD
- โMINGLED
- CLOSE
- โHARSH
- โBRICK
- โHUMOR
- โMISCHIEF
- โTREMENDOUS
- โFUNCTION
- โSMART
- โSULTAN
- โDISMISS
- โTHREATENED
- โCHEAP
- โFLOCK
- โENDEAVOR
- โWHISK
- โITALY
- โWAIST
- โFLUTTER
- โSMOKING
- โMONARCH
- โAFRICA
- โACCUSE
- โHERBERT
- โREFRESH
- โREJOICE
- โPILLOW
- โEXPECTATION
- โPOETRY
- โHOPELESS
- โPERISH
- โPHILOSOPHY
- โWHISTLE
- โBERNARD
- โLAMENT
- โIMPROVE
- โSUP
- โPERPLEX
- โFOUNTAIN
- โLEAGUE
- โDESPISE
- โIGNORANCE
- โREFERENCE
- โDUCK
- โGROVE
- โPURSE
- โPARTNER
- โPROPHET
- โSHIVER
- โNEIGHBOURHOOD
- โREPRESENTATIVE
- SAIL
- โWIP
- โACQUIRED
- โCHIMNEY
- โDOCTRINE
- โMAXIM
- โANGLE
- โMAJORITY
- โAUTUMN
- โCONFUSED
- โCRISTO
- โACHIEVE
- โDISGUISE
- โREDUCED
- โEARLIER
- โTHEATRE
- โDECIDE
- MINATED
- OLOGICAL
- โOCCUPATION
- โVIGOROUS
- โCONTINENT
- โDECLINE
- โCOMMUNITY
- โMOTIONLESS
- โHATRED
- โCOMMUNICATION
- โBOWL
- โCOMMENT
- โAPPROVE
- โCEREMONY
- โCRIMINAL
- โSCIENTIFIC
- โDUCHESS
- โVIVID
- โSHIFT
- โAVAIL
- โDAMP
- โJOHNSON
- โSLENDER
- โCONTRAST
- โAMUSEMENT
- โPLOT
- โLYN
- โASSOCIATION
- โSNATCH
- โUNCERTAIN
- โPRESSURE
- โPERCH
- โAPPLY
- โPLANET
- โNOTWITHSTANDING
- โSWUNG
- โSTIRRED
- โATTENDANT
- โENJOYMENT
- โWORRY
- โALBERT
- โNAKED
- โTALENT
- โMARIAN
- โREFORM
- โDELIBERATE
- โINTELLIGENT
- โSENSITIVE
- โYONDER
- โPUPIL
- โFRIGHTFUL
- โDOUBTFUL
- โSTANDARD
- โMAGISTRATE
- โSHEPHERD
- โSTOMACH
- โDEPOSIT
- โRENEW
- โHEDGE
- โFRANCS
- โPOSSIBILITY
- โRESEMBLE
- โFATIGUE
- โPORTRAIT
- โFAVORITE
- โCREAM
- โBURG
- โSECRETARY
- โDIVERS
- โACTIVITY
- โSPECULAT
- โHUMOUR
- โFITTED
- โEXTERNAL
- โCETERA
- โWRAPPED
- โWHIT
- โFRED
- โEXAMINATION
- โLODGING
- โOWING
- โJAW
- โCROW
- โBALANCE
- โPUFF
- โTENDERNESS
- โPORTHOS
- โANCHOR
- โINTERRUPT
- โNECESSARILY
- โPERPETUAL
- โAGONY
- โPOPE
- โSCHOLAR
- โSCOTLAND
- โSUPPRESS
- โWRATH
- โWRECK
- โEXCEED
- โPERFECTION
- โINDIA
- โTRADITION
- โSECTION
- โEASTERN
- โDOORWAY
- โWIVES
- โCONVENTION
- โANNOUNC
- โEGYPT
- โCONTRADICT
- โSCRATCH
- โCENTRAL
- โGLOVE
- โWAX
- โPREPARE
- โACCOMPANY
- โINCREASING
- โLIBERAL
- โRAISING
- โORANGE
- โSHOE
- โATTRIBUTE
- โLITERATURE
- โPUZZLED
- โWITHDRAW
- โWHITHER
- โHAWK
- โMOONLIGHT
- โEXAMINE
- โHAPPILY
- โPRECEDE
- โDETECTIVE
- โINCHES
- โSOLITARY
- โDUTCH
- โNAPOLEON
- โUNEASY
- โCARDINAL
- โBLEW
- โFOWL
- โDECORAT
- โCHILDHOOD
- โTORMENT
- โLOSING
- โPERMISSION
- โBLANK
- โUPSTAIRS
- โCAPACITY
- โTRIFLE
- โFOLLY
- โRECOGNIZE
- โREMOVE
- โVENGEANCE
- โENTERPRISE
- โBEDROOM
- โANYHOW
- โINQUIRY
- โASHES
- โDRAG
- โHUSH
- โAWKWARD
- โSATURDAY
- โGENUINE
- โSURVIV
- โSKIRT
- โAFFECTIONATE
- โTANG
- โMUTUAL
- โDISPUTE
- โEAGLE
- โINCOME
- โBIND
- โFAME
- โIMPROVEMENT
- ROVING
- โDIFFER
- โAWOKE
- โSLEEVE
- โSOLITUDE
- โFAVOURITE
- JI
- โDETECT
- โCOMPREHEND
- โPREPARING
- โSERPENT
- โSUMMIT
- โKNOT
- โKNIT
- โCOPY
- โSTOPPING
- โFADED
- โHIDEOUS
- โJULIE
- STEAD
- โSHINE
- โCONFLICT
- โPROPOSITION
- โREFUGE
- โGALLERY
- โBUNDLE
- โAXE
- โSLAVERY
- โMASK
- โALYOSHA
- โLADDER
- โDEPARTMENT
- โDISCHARGE
- โDEPRESS
- โGALLOP
- โSCARLET
- โKITTY
- โRECEIVING
- โSURRENDER
- โSUSTAIN
- โTWILIGHT
- โCONGRESS
- โIRELAND
- โFUNNY
- โLEND
- โCONSTITUTE
- โFUNERAL
- โCRYSTAL
- โSPAIN
- โEXCEEDINGLY
- โDAMN
- โCOMMUN
- โCIVILIZATION
- โPREJUDICE
- โPORCH
- โASSISTANT
- โINDUSTRY
- โTUMBLE
- โDEFENCE
- โHITHER
- โSMOT
- โCOLONI
- โAMAZEMENT
- โMARGUERITE
- โMIRACLE
- โINHERIT
- โBEGGAR
- โENVELOPE
- โINDIGNATION
- โNATASHA
- โPROPOSAL
- โFRAGMENT
- โROUSED
- โROAST
- ENCIES
- โCOMMENCED
- โRESOURCE
- โPOPULATION
- โQUOTH
- โPURSUE
- โEDUCAT
- โAFFLICT
- โCONTACT
- โCRIMSON
- โDIVISION
- โDISORDER
- โCOPPER
- โSOLICIT
- โMODERATE
- โDRUM
- โSWIM
- โSALUTE
- โASSUME
- โMUSCLE
- โOVERWHELM
- โSHAKESPEARE
- โSTRUGGLING
- โTRANQUIL
- โCHICKEN
- โTREAD
- โCLAW
- โBIBLE
- โRIDGE
- โTHREAT
- โVELVET
- โEXPOSED
- โIDIOT
- โBARREL
- โPENNY
- โTEMPTATION
- โDANGLARS
- โCENTURIES
- โDISTRIBUT
- โREJECT
- โRETORTED
- โCONCENTRAT
- โCORDIAL
- โMOTOR
- โCANNON
- KEEP
- โWRETCH
- โASSURANCE
- โTHIEF
- โSURVEY
- โVITAL
- โRAILWAY
- โJACKSON
- โCRASH
- โGROWL
- โCOMBAT
- โRECOLLECTION
- โSECURITY
- โJACOB
- โCLUTCH
- โBLANKET
- โNANCY
- โCELLAR
- โCONVENIENT
- โINDIGNANT
- โCOARSE
- โWORM
- โSCREEN
- โTRANSPORT
- โBULLET
- โAPPRECIATE
- โDEVOTION
- โINVISIBLE
- โDRIED
- โMIXTURE
- โCANDID
- โPERFORMANCE
- โRIPE
- โEXQUISITE
- โBARGAIN
- โTOBACCO
- โLOYAL
- โMOULD
- โATTENTIVE
- โDOROTHY
- โBRUTE
- โESTABLISHMENT
- โABILITY
- โINHABIT
- โOBSCURE
- โBORROW
- โESSENCE
- โDISMAY
- โFLEE
- โBLADE
- โPLUCK
- โCOFFIN
- โSUNSET
- โSTEPHEN
- โECONOMIC
- โHOLIDAY
- โMECHANICAL
- โCOTTON
- โAWAKENED
- โSEIZE
- โRIDICULOUS
- โSANCHO
- โHESITATION
- โCORPSE
- โSAVING
- HOLD
- FOOT
- โELDEST
- โDESPITE
- โEDITH
- โCHERISH
- โRESISTANCE
- โWILSON
- โARGUE
- โINQUIRE
- โAPPREHENSION
- โAVENUE
- โDRAKE
- โPROPOSE
- HURST
- โINFERIOR
- โSTAIRCASE
- โWHEREFORE
- โCARLYLE
- โCOUCH
- โROUTE
- โPOLITICS
- โTOMORROW
- โTHRONG
- โNAUGHT
- โSUNLIGHT
- โINDIFFERENCE
- โOBEDIENCE
- โRECEPTION
- โVEGETABLE
- โIMPERFECT
- โRESIDENCE
- โTURKEY
- โVIOLET
- โSARAH
- โALTAR
- โGRIEVE
- โJERK
- โENSU
- โMAGICIAN
- โBLOSSOM
- โLANTERN
- โRESOLUTE
- โTHOUGHTFULLY
- โFORTNIGHT
- โTRUMPET
- โVALJEAN
- โUNWILLING
- โLECTURE
- โWHEREUPON
- โHOLLAND
- โCHANGING
- โCREEK
- โSLICE
- โNORMAL
- โANNIE
- โACCENT
- โFREDERICK
- โDISAGREEABLE
- โRUBBED
- โDUMB
- โESTABLISH
- โIMPORT
- โAFFIRM
- โMATTHEW
- โBRISK
- โCONVERT
- โBENDING
- โIVAN
- โMADEMOISELLE
- โMICHAEL
- โEASIER
- โJONES
- โFACING
- โEXCELLENCY
- โLITERARY
- โGOSSIP
- โDEVOUR
- โSTAGGER
- โPENCIL
- โAVERAGE
- โHAMMER
- โTRIUMPHANT
- โPREFERRED
- โAPPLICATION
- โOCCUPY
- โAUTHORITIES
- BURN
- โASCERTAIN
- โCORRIDOR
- โDELICIOUS
- โPRACTISE
- โUNIVERSE
- โSHILLING
- โCONTEST
- โASHORE
- โCOMMIT
- โADMINISTRATION
- โSTUDIED
- โRIGID
- โADORN
- โELSEWHERE
- โINNOCENCE
- โJOURNAL
- โLANDSCAPE
- โTELEGRAPH
- โANGRILY
- โCAMPAIGN
- โUNJUST
- โCHALLENGE
- โTORRENT
- โRELATE
- โASSEMBLED
- โIMPRESSED
- โCANOE
- โCONCLUD
- โQUIXOTE
- โSATISFACTORY
- โNIECE
- โDEAF
- โRAFT
- โJIMMY
- โGLID
- โREGULAT
- โCHATTER
- โGLACIER
- โENVY
- โSTATUE
- โBOSTON
- โRICHMOND
- โDENIED
- โFANNY
- โSOLOMON
- โVULGAR
- โSTALK
- โREPLACE
- โSPOON
- โBASIN
- โFEATURE
- โCONVICT
- โARCHITECT
- โADMIRAL
- โRIBBON
- โPERMANENT
- โAPRIL
- โJOLLY
- โNEIGHBORHOOD
- โIMPART
- BOROUGH
- CAMP
- โHORRID
- โIMMORTAL
- โPRUDENCE
- โSPANIARD
- โSUPPOSING
- โTELEPHONE
- โTEMPERATURE
- โPENETRATE
- โOYSTER
- โAPPOINTMENT
- โEGYPTIAN
- โDWELT
- โNEPHEW
- โRAILROAD
- โSEPTEMBER
- โDEVICE
- โWHEAT
- โGILBERT
- โELEGANT
- โADVERTISE
- โRATIONAL
- โTURTLE
- โBROOD
- โASSEMBLY
- โCULTIVATE
- โEDITOR
- โSPECIMEN
- โUNDOUBTEDLY
- โWHALE
- โDROPPING
- โBALLOON
- โMEDICAL
- COMB
- โCOMPOSITION
- โFOOTSTEPS
- โLAUNCELOT
- โDISCOURSE
- โERRAND
- โCONVERSE
- โADVANCING
- โDOWNSTAIRS
- โTUMULT
- โCORRUPT
- โSUFFICE
- โANGUISH
- โSHAGGY
- โRETIRE
- โTIMBER
- โBLAZE
- โABSTRACT
- โEMBROIDER
- โPHOTOGRAPH
- โPROSPERITY
- โTERRIBLY
- โTERRITORY
- โTHRESHOLD
- โPAVEMENT
- โINJURED
- โLIMP
- โAGITATION
- โRASCAL
- โPRESUME
- โOBSERVING
- โOBSTACLE
- โSIMPLICITY
- โSLUMBER
- โSUPPLIED
- โCOMBINATION
- โDRAIN
- โWILDERNESS
- โBELIEVING
- โVILLAIN
- โRECKLESS
- โINJURY
- โCLAPP
- โFRIDAY
- โHERCULES
- โKENNEDY
- โSYMPTOM
- โSLEDGE
- โCEILING
- โLEMON
- โPLAGUE
- โMONDAY
- โCANVAS
- โIMPATIENCE
- โUNCOMFORTABLE
- โACCESS
- โFROZEN
- โSENATOR
- โFRANZ
- โSWIMMING
- โBARRIER
- โADJUST
- โCOMPARISON
- โPROCLAIM
- โWRINKL
- โOVERLOOK
- โMITYA
- โGUILT
- โPERCEPTION
- โPRECAUTION
- โSPECTATOR
- โSURPRISING
- โDISTRACT
- โDISDAIN
- โBONNET
- โMAGNET
- โPROFESS
- โCONFOUND
- โNARRATIVE
- โSTRUCTURE
- โSKETCH
- โULTIMATE
- โGLOBE
- โINSECT
- FICIENCY
- โORCHARD
- โAMIABLE
- โDESCENT
- โINDEPENDENCE
- โMANUFACTURE
- โSPRINKLE
- โNIGHTINGALE
- โCUSHION
- โEMINENT
- โSCOTT
- โARRAY
- โCOSETTE
- โWAVING
- โEXTRACT
- โIRREGULAR
- โPERSECUT
- โDERIVED
- โWITHDREW
- โCAUTION
- โSUSPICIOUS
- โMEMORIES
- โNOWHERE
- โSUBTLE
- โTHOROUGH
- Q
- โAPPROPRIATE
- โSLAUGHTER
- โYOURSELVES
- โTHUMB
- โTWAS
- โABODE
- โBIDDING
- โCONSPICUOUS
- โREBECCA
- โSERGEANT
- โAPRON
- โANTICIPATE
- โDISCIPLINE
- โGLANCING
- โPILGRIM
- โSULLEN
- โCONTRIBUTE
- โPRAIRIE
- โCARVED
- โCOMMERCE
- โEXCLAMATION
- โMUSCULAR
- โNOVEMBER
- โPHENOMENA
- โSYMBOL
- โUMBRELLA
- โDIMINISH
- โPARLOUR
- โTHREATENING
- โSTUMP
- โEXTENSIVE
- โPLEASING
- โREMEMBRANCE
- โCOMBINED
- โSHERIFF
- โSHAFT
- โLAURA
- โINTERCOURSE
- โSTRICKEN
- โSUPPLIES
- โLANDLORD
- โSHRINK
- โPRICK
- โCAESAR
- โDRUG
- โBEWILDERED
- โNAUTILUS
- โBRUTAL
- โCOMMERCIAL
- โMAGGIE
- โSPHERE
- โVIRGIN
- โBRETHREN
- โDESTINY
- โPOLICY
- โTERRIFIED
- โHOUSEKEEPER
- โCRAZY
- โARDENT
- โDISCERN
- โWRAP
- โMARQUIS
- โRUSSIA
- MOUTH
- โBRITAIN
- โHARBOUR
- โCONCERT
- โDONKEY
- โDAMAGE
- โSLIM
- ABOUT
- โLUXURY
- โMONSTROUS
- โTENDENCY
- โPARADISE
- โCULTURE
- โJULIUS
- โRAOUL
- โREMEDY
- โDECAY
- โSCOLD
- โSPLIT
- โASSAULT
- โDECEMBER
- โMOSCOW
- โEXPLORE
- โTROUSERS
- โWRIST
- PIECE
- โMUSKET
- โVALENTINE
- โTYRANT
- โABRAHAM
- โMEDIUM
- โARTIFICIAL
- โFACULTY
- โOBLIGATION
- โRESEMBLANCE
- โINQUIRIES
- โDETAIN
- โSWARM
- โPLEDGE
- โADMIRABLE
- โDEFECT
- โSUPERINTEND
- โPATRIOT
- โCLUNG
- โDISMAL
- โRECIT
- โIGNOR
- โAMELIA
- โJUSTIFY
- โELEPHANT
- โESTIMATE
- โKNELT
- โSERVING
- โWHIM
- โSHRILL
- โSTUDIO
- โTEXT
- โALEXANDER
- โWROUGHT
- โABUNDANT
- โSITUATED
- โREGAIN
- โFIERY
- โSNEER
- โSWEAT
- โGLARE
- โNIGH
- โESCORT
- โINEVITABLE
- โPSMITH
- โRELUCTANT
- โPRECEDING
- โRESORT
- โOUTRAGE
- โAMBASSADOR
- โCONSOLATION
- โRECOGNITION
- โREMORSE
- โBEHALF
- โFORMIDABLE
- โGRAVITY
- โDIVIDE
- โCONFRONT
- โGIGANTIC
- โOCTOBER
- โFLANK
- โSLEW
- โCLARA
- โFILM
- โBULK
- โPOMP
- โELEANOR
- โEMPHASIS
- โJAPANESE
- โCAVALRY
- โEXCLUSIVE
- โPERFUME
- โBRONZE
- โFEDERAL
- โLIQUID
- โRUBBING
- โOVEN
- DOLPH
- โCONVULS
- โDEPRIVED
- โRESPONSIBILITY
- โSIGNIFICANT
- โWAISTCOAT
- โCLUSTER
- โMARTHA
- โREVERSE
- โATTORNEY
- โDROOP
- โSKILFUL
- โHABITUAL
- โPUMP
- โINTERVEN
- โOWL
- โCONJECTURE
- โFANTASTIC
- โRESPONSIBLE
- โDESTINED
- โDOCUMENT
- โTHEREUPON
- โGODDESS
- โPACIFIC
- โWARRANT
- โCOSTUME
- โBRIDLE
- โCALIFORNIA
- โDEMOCRATIC
- โEUSTACE
- โSQUIRREL
- โUNCOMMON
- โMARVELLOUS
- โPLOUGH
- โTRAGEDY
- โVAULT
- โHESITATE
- โREFRAIN
- โADMIRING
- โCORPORAL
- โENTITLED
- โSHREWD
- โSQUEEZ
- โACCURATE
- โTEMPEST
- โMONUMENT
- โSIEGE
- โCHINESE
- โRAVEN
- โLOUNG
- โASSASSIN
- โINFLICT
- โAGITATED
- โDESIRABLE
- โEARLIEST
- โLAUNCH
- โPILOT
- โPULSE
- โMUTE
- LEIGH
- โLIQUOR
- โSCARECROW
- โSKULL
- โDESOLATE
- โSUBLIME
- โSERENE
- โRECESS
- โWAKING
- โCHARLOTTE
- โCIRCULAR
- โINJUSTICE
- โPINOCCHIO
- โPRISCILLA
- โTHYSELF
- โOCCURRENCE
- โCASUAL
- โFRANTIC
- โLEGEND
- โFERTIL
- โBACKGROUND
- โDELICACY
- โESTRALLA
- โMANUSCRIPT
- โRESPONSE
- โUNIVERSITY
- โWOLVES
- โSCANDAL
- โSTUMBLE
- โHOARSE
- โBODILY
- โCONVENT
- โEXAMINING
- โINCAPABLE
- โPERCEIVING
- โPHILADELPHIA
- โSUBSEQUENT
- โTHIEVES
- โACCUMULAT
- โDAMSEL
- โSCOTCH
- โUNDERNEATH
- โNOBILITY
- โSMASH
- โREVOLT
- โENGAGE
- โCATHEDRAL
- โCHAMPION
- โDESPATCH
- โETERNITY
- โJANUARY
- โPLEADED
- โPROBABILITY
- โJIMMIE
- โPARALLEL
- โFISHERMAN
- โJERRY
- โSWORE
- โDRAUGHT
- โOPPONENT
- โPRIMITIVE
- โSIGNIFICANCE
- โSUBSTANTIAL
- โAMAZED
- โDUNBAR
- โCOMMEND
- โCONTEMPLATE
- โTESTIMONY
- โIMPERIAL
- โADAPT
- โJUICE
- โCALAMIT
- CULAR
- โCHATEAU
- โPHOENIX
- โPRUDENT
- โSOLUTION
- โVILLEFORT
- โREACTION
- โRELAX
- โYU
- โPROHIBIT
- โDISTRUST
- โPLUNDER
- โWELFARE
- โNAVIGAT
- โPARLOR
- โLAZY
- โDETACH
- OMETER
- โPRIV
- โDISCOURAGE
- โOBSTINATE
- โREJOICING
- โSERMON
- โVEHICLE
- โFANCIES
- โENLIGHTEN
- โACUTE
- โILLUSION
- โANTHEA
- โMARTIAN
- โEXCITE
- โGENEROSITY
- OLOGIST
- โAMAZING
- โUNWORTHY
- โINTERNAL
- โINCENSE
- โVIBRAT
- โADHERE
- ROACH
- โFEBRUARY
- โMEXICAN
- โPOTATOES
- โINCESSANT
- โINTERPOSED
- โPARCEL
- โVEXED
- โPROMOTE
- MIDST
- โARISTOCRAT
- โCYRIL
- โEMBARK
- โABUNDANCE
- โLITERALLY
- โSURGEON
- โTERRACE
- โATLANTIC
- โMARTYR
- โSPECK
- โSENATE
- โLOAF
- โADMINISTER
- โAPPREHEND
- โSUBDUED
- โTEMPORARY
- โDOMINION
- โELABORATE
- โDIGNIFIED
- โELIZA
- โSPLASH
- โCONSEIL
- โDEXTER
- โUNSEEN
- โTRAGIC
- VOCATION
- โGRATIFY
- โBACHELOR
- โDEFENSE
- โEXCURSION
- โFACULTIES
- โPROPRIETOR
- โSYMPATHETIC
- โUNNECESSARY
- โRADIANT
- โVACANT
- โOUNCE
- โSCREW
- โPHENOMENON
- โPROMINENT
- โWORRIED
- โSTUDIES
- โCLIMATE
- โKEITH
- โARAMIS
- โBLISS
- โCONTINUAL
- โSURPASS
- โHEBREW
- โIDENTITY
- โPROVOKE
- โTEMPERAMENT
- โCHARIOT
- โHARBOR
- โNINTH
- โPRIOR
- โDESIROUS
- โJERUSALEM
- โUNDERTAKING
- โEDISON
- โMIRTH
- โSCOUT
- โAPPARATUS
- โILLUSTRATION
- โINTELLIGIBLE
- โINVARIABLY
- โPIERCED
- โREVIEW
- โFLICKER
- โHAZARD
- โREVELATION
- โDIXON
- โEXCITING
- โGOSPEL
- โCONSTANCE
- โOVERTAKE
- โGUINEA
- โALADDIN
- โCHICAGO
- โTULLIVER
- โHAMILTON
- โGARRISON
- โDISCIPLE
- โINTENSITY
- โTRAITOR
- โCHANCELLOR
- โPROVERB
- โDAGGER
- โFORESEE
- โCONFIDE
- โGLIMMER
- โCHAUVELIN
- โILLUSTRATE
- โVOLUNTEER
- โJUNGLE
- โSTREAK
- โSUNRISE
- โDISSOLV
- โQUEST
- โAWHILE
- โFELICITY
- โLEGISLATURE
- โLEONORA
- โMAGAZINE
- โPITIFUL
- โCOLONY
- โSHAWL
- โARRIVING
- โFUNDAMENTAL
- โCARPENTER
- โOVERFLOW
- โEXPAND
- โHARVEST
- โFEMININE
- โINNUMERABLE
- โSCRAMBLE
- โTWENTIETH
- โTRIFLING
- โGHASTL
- โCONQUEST
- โDANIEL
- โFACILIT
- โFORSAKE
- โBEHAVIOUR
- โGORGEOUS
- โPRODUCING
- โHAPPIER
- โPROMISING
- โRAINBOW
- โINSTINCTIVELY
- โDECREE
- โEYEBROWS
- โIRRESISTIBLE
- โPHARAOH
- โSCROOGE
- โUNNATURAL
- โCRUMBS
- โREFINED
- โDREARY
- โTRENCH
- โCONVINCE
- โFRINGE
- โEXTREMITY
- โINTIMACY
- โSCOUNDREL
- โSUFFRAGE
- โUNEASINESS
- โBARRICADE
- โCIRCULAT
- โSAMUEL
- โBRUCE
- โDARCY
- <sos/eos>
init: null
input_size: null
ctc_conf:
dropout_rate: 0.0
ctc_type: builtin
reduce: true
ignore_nan_grad: true
joint_net_conf: null
model_conf:
ctc_weight: 0.3
lsm_weight: 0.1
length_normalized_loss: false
use_preprocessor: true
token_type: bpe
bpemodel: data/en_token_list/bpe_unigram5000/bpe.model
non_linguistic_symbols: null
cleaner: null
g2p: null
speech_volume_normalize: null
rir_scp: null
rir_apply_prob: 1.0
noise_scp: null
noise_apply_prob: 1.0
noise_db_range: '13_15'
frontend: default
frontend_conf:
n_fft: 512
hop_length: 160
fs: 16k
specaug: specaug
specaug_conf:
apply_time_warp: true
time_warp_window: 5
time_warp_mode: bicubic
apply_freq_mask: true
freq_mask_width_range:
- 0
- 27
num_freq_mask: 2
apply_time_mask: true
time_mask_width_ratio_range:
- 0.0
- 0.05
num_time_mask: 10
normalize: global_mvn
normalize_conf:
stats_file: exp/asr_stats_raw_en_bpe5000_sp/train/feats_stats.npz
preencoder: null
preencoder_conf: {}
encoder: conformer
encoder_conf:
output_size: 512
attention_heads: 8
linear_units: 2048
num_blocks: 12
dropout_rate: 0.1
positional_dropout_rate: 0.1
attention_dropout_rate: 0.1
input_layer: conv2d
normalize_before: true
macaron_style: true
rel_pos_type: latest
pos_enc_layer_type: rel_pos
selfattention_layer_type: rel_selfattn
activation_type: swish
use_cnn_module: true
cnn_module_kernel: 31
postencoder: null
postencoder_conf: {}
decoder: transformer
decoder_conf:
attention_heads: 8
linear_units: 2048
num_blocks: 6
dropout_rate: 0.1
positional_dropout_rate: 0.1
self_attention_dropout_rate: 0.1
src_attention_dropout_rate: 0.1
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: true
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
|
navteca/ms-marco-MiniLM-L-12-v2
|
navteca
| 2022-03-14T15:56:35Z | 4 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"jax",
"bert",
"text-classification",
"en",
"license:mit",
"region:us"
] |
text-classification
| 2022-03-14T14:52:30Z |
---
language: en
license: mit
pipeline_tag: text-classification
tags:
- sentence-transformers
---
# Cross-Encoder for MS Marco
The model can be used for Information Retrieval: Given a query, encode the query with all possible passages (e.g. retrieved with ElasticSearch). Then sort the passages in decreasing order of score. See [SBERT.net Retrieve & Re-rank](https://www.sbert.net/examples/applications/retrieve_rerank/README.html) for more details. The training code is available here: [SBERT.net Training MS Marco](https://github.com/UKPLab/sentence-transformers/tree/master/examples/training/ms_marco)
## Training Data
This model was trained on the [MS Marco Passage Ranking](https://github.com/microsoft/MSMARCO-Passage-Ranking) task.
## Usage
Usage is easiest when you have [SentenceTransformers](https://www.sbert.net/) installed. Then you can use the pre-trained model like this:
```python
from sentence_transformers import CrossEncoder
model = CrossEncoder('model_name', max_length=512)
scores = model.predict([('Query', 'Paragraph1'), ('Query', 'Paragraph2')])
```
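The re-ranking step described above (sorting passages by score) can be sketched in plain Python; the scores below are illustrative stand-ins for the values returned by `model.predict(...)` on (query, passage) pairs:

```python
# Illustrative cross-encoder scores for three candidate passages
# (stand-ins for the output of model.predict on (query, passage) pairs)
passages = ["passage A", "passage B", "passage C"]
scores = [8.2, -1.5, 3.7]

# Sort passages in decreasing order of score
ranked = [p for _, p in sorted(zip(scores, passages), key=lambda t: t[0], reverse=True)]
print(ranked)  # -> ['passage A', 'passage C', 'passage B']
```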
## Performance
In the following table, we provide various pre-trained Cross-Encoders together with their performance on the [TREC Deep Learning 2019](https://microsoft.github.io/TREC-2019-Deep-Learning/) and the [MS Marco Passage Reranking](https://github.com/microsoft/MSMARCO-Passage-Ranking/) dataset.
| Model-Name | NDCG@10 (TREC DL 19) | MRR@10 (MS Marco Dev) | Docs / Sec |
| ------------- |:-------------| -----| --- |
| **Version 2 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2-v2 | 69.84 | 32.56 | 9000
| cross-encoder/ms-marco-MiniLM-L-2-v2 | 71.01 | 34.85 | 4100
| cross-encoder/ms-marco-MiniLM-L-4-v2 | 73.04 | 37.70 | 2500
| cross-encoder/ms-marco-MiniLM-L-6-v2 | 74.30 | 39.01 | 1800
| cross-encoder/ms-marco-MiniLM-L-12-v2 | 74.31 | 39.02 | 960
| **Version 1 models** | | |
| cross-encoder/ms-marco-TinyBERT-L-2 | 67.43 | 30.15 | 9000
| cross-encoder/ms-marco-TinyBERT-L-4 | 68.09 | 34.50 | 2900
| cross-encoder/ms-marco-TinyBERT-L-6 | 69.57 | 36.13 | 680
| cross-encoder/ms-marco-electra-base | 71.99 | 36.41 | 340
| **Other models** | | |
| nboost/pt-tinybert-msmarco | 63.63 | 28.80 | 2900
| nboost/pt-bert-base-uncased-msmarco | 70.94 | 34.75 | 340
| nboost/pt-bert-large-msmarco | 73.36 | 36.48 | 100
| Capreolus/electra-base-msmarco | 71.23 | 36.89 | 340
| amberoad/bert-multilingual-passage-reranking-msmarco | 68.40 | 35.54 | 330
| sebastian-hofstaetter/distilbert-cat-margin_mse-T2-msmarco | 72.82 | 37.88 | 720
Note: Runtime was computed on a V100 GPU.
|
GPL/webis-touche2020-distilbert-tas-b-gpl-self_miner
|
GPL
| 2022-03-14T14:25:36Z | 119 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-14T14:25:34Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
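For semantic search, the embeddings returned by `model.encode(...)` are typically compared with cosine similarity; a minimal stdlib sketch, with illustrative 3-dimensional vectors standing in for the 768-dimensional embeddings:

```python
import math

def cosine(a, b):
    # Cosine similarity between two equal-length vectors
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Illustrative 3-d vectors standing in for 768-d sentence embeddings
query_emb = [1.0, 0.0, 1.0]
doc_embs = {"doc_a": [0.9, 0.1, 0.8], "doc_b": [-1.0, 0.2, 0.0]}

# Pick the document most similar to the query
best = max(doc_embs, key=lambda name: cosine(query_emb, doc_embs[name]))
print(best)  # -> doc_a
```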
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
GPL/signal1m-distilbert-tas-b-gpl-self_miner
|
GPL
| 2022-03-14T14:25:02Z | 127 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-14T14:25:00Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
GPL/nq-distilbert-tas-b-gpl-self_miner
|
GPL
| 2022-03-14T14:24:29Z | 125 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-14T14:24:27Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
GPL/arguana-distilbert-tas-b-gpl-self_miner
|
GPL
| 2022-03-14T14:22:47Z | 121 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-14T14:22:45Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
GPL/bioasq-distilbert-tas-b-gpl-self_miner
|
GPL
| 2022-03-14T14:22:31Z | 124 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-14T14:22:29Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
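Once you have sentence embeddings (from either usage route above), semantic similarity reduces to cosine similarity between vectors. A minimal sketch with stand-in tensors (real embeddings would come from `model.encode` or `cls_pooling`):

```python
import torch
import torch.nn.functional as F

# Stand-in for two sentence embeddings produced by the model above
emb_a = torch.tensor([1.0, 0.0, 1.0])
emb_b = torch.tensor([1.0, 0.5, 1.0])

score = F.cosine_similarity(emb_a.unsqueeze(0), emb_b.unsqueeze(0)).item()
print(f"cosine similarity: {score:.4f}")  # → cosine similarity: 0.9428
```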
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
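`MarginDistillationLoss` (also known as MarginMSE, the objective used in GPL) regresses the bi-encoder student's score margin between a positive and a negative passage onto a cross-encoder teacher's margin. A minimal sketch of that objective with dot-product scores and random stand-in embeddings (not this model's actual training code):

```python
import torch
import torch.nn.functional as F

def margin_mse(q, pos, neg, teacher_margin):
    """MSE between the student's margin and the teacher's margin.

    Student margin = score(q, pos) - score(q, neg), with dot-product scores.
    """
    student_margin = (q * pos).sum(-1) - (q * neg).sum(-1)
    return F.mse_loss(student_margin, teacher_margin)

torch.manual_seed(0)
q, pos, neg = (torch.randn(4, 768) for _ in range(3))
teacher_margin = torch.randn(4)  # stand-in cross-encoder margins
loss = margin_mse(q, pos, neg, teacher_margin)
print(loss)
```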
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
GPL/trec-covid-distilbert-tas-b-gpl-self_miner
|
GPL
| 2022-03-14T14:22:13Z | 125 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-14T14:22:10Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
GPL/cqadupstack-distilbert-tas-b-gpl-self_miner
|
GPL
| 2022-03-14T14:18:20Z | 114 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-14T14:18:17Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
GPL/trec-covid-v2-distilbert-tas-b-gpl-self_miner
|
GPL
| 2022-03-14T14:18:03Z | 127 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-14T14:18:01Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
GPL/scifact-distilbert-tas-b-gpl-self_miner
|
GPL
| 2022-03-14T14:17:30Z | 120 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"transformers",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-14T14:16:53Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
def cls_pooling(model_output, attention_mask):
return model_output[0][:,0]
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, cls pooling.
sentence_embeddings = cls_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 140000 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`gpl.toolkit.loss.MarginDistillationLoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": 140000,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 350, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': True, 'pooling_mode_mean_tokens': False, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information -->
|
cptanalatriste/request-for-help
|
cptanalatriste
| 2022-03-14T11:54:48Z | 4 | 0 |
transformers
|
[
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-12T17:19:43Z |
---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: cptanalatriste/request-for-help
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# cptanalatriste/request-for-help
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1342
- Train Sparse Categorical Accuracy: 1.0
- Validation Loss: 0.1514
- Validation Sparse Categorical Accuracy: 0.9796
- Epoch: 19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:|
| 0.8291 | 0.375 | 0.7483 | 0.3673 | 0 |
| 0.7470 | 0.375 | 0.6302 | 0.8163 | 1 |
| 0.6504 | 0.625 | 0.6079 | 0.9184 | 2 |
| 0.6128 | 0.7812 | 0.5882 | 0.8980 | 3 |
| 0.5939 | 0.8125 | 0.5639 | 0.9184 | 4 |
| 0.5300 | 0.9688 | 0.5378 | 0.9184 | 5 |
| 0.5306 | 0.9688 | 0.5098 | 0.9388 | 6 |
| 0.4963 | 1.0 | 0.4806 | 0.9388 | 7 |
| 0.4683 | 0.9688 | 0.4434 | 0.9592 | 8 |
| 0.3959 | 1.0 | 0.4070 | 0.9796 | 9 |
| 0.3807 | 1.0 | 0.3762 | 0.9796 | 10 |
| 0.3509 | 1.0 | 0.3439 | 0.9796 | 11 |
| 0.3013 | 1.0 | 0.3064 | 0.9796 | 12 |
| 0.2848 | 1.0 | 0.2931 | 0.9796 | 13 |
| 0.2587 | 1.0 | 0.2681 | 0.9796 | 14 |
| 0.2510 | 1.0 | 0.2295 | 0.9796 | 15 |
| 0.1867 | 1.0 | 0.2000 | 0.9796 | 16 |
| 0.1652 | 1.0 | 0.1793 | 0.9796 | 17 |
| 0.1297 | 1.0 | 0.1637 | 0.9796 | 18 |
| 0.1342 | 1.0 | 0.1514 | 0.9796 | 19 |
### Framework versions
- Transformers 4.17.0
- TensorFlow 2.6.2
- Datasets 1.18.4
- Tokenizers 0.11.6
|
fenixobia/distilbert-base-uncased-finetuned-cola
|
fenixobia
| 2022-03-14T11:52:00Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-07T17:07:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5595884617444483
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7808
- Matthews Correlation: 0.5596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.522 | 1.0 | 535 | 0.5361 | 0.4215 |
| 0.3472 | 2.0 | 1070 | 0.5309 | 0.5046 |
| 0.2342 | 3.0 | 1605 | 0.6451 | 0.5351 |
| 0.1673 | 4.0 | 2140 | 0.7808 | 0.5596 |
| 0.1249 | 5.0 | 2675 | 0.8750 | 0.5565 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.8.1
- Datasets 1.18.4
- Tokenizers 0.11.6
|
STSP/CT_Test
|
STSP
| 2022-03-14T11:31:46Z | 15 | 0 |
tf-keras
|
[
"tf-keras",
"keras",
"image-classification",
"region:us"
] |
image-classification
| 2022-03-13T17:04:36Z |
---
tags:
- keras
- image-classification
---
|
Kalaoke/embeddings_dense_model
|
Kalaoke
| 2022-03-14T09:54:04Z | 119 | 0 |
sentence-transformers
|
[
"sentence-transformers",
"pytorch",
"distilbert",
"feature-extraction",
"sentence-similarity",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] |
sentence-similarity
| 2022-03-14T09:53:55Z |
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---
# Kalaoke/embeddings_dense_model
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 50-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Kalaoke/embeddings_dense_model')
embeddings = model.encode(sentences)
print(embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Kalaoke/embeddings_dense_model)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 1050 with parameters:
```
{'batch_size': 32, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method:
```
{
"epochs": 3,
"evaluation_steps": 1000,
"evaluator": "sentence_transformers.evaluation.BinaryClassificationEvaluator.BinaryClassificationEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'transformers.optimization.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 315,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
(2): Asym(
(topic-0): Dense({'in_features': 768, 'out_features': 50, 'bias': False, 'activation_function': 'torch.nn.modules.activation.Tanh'})
(title-0): Dense({'in_features': 768, 'out_features': 50, 'bias': False, 'activation_function': 'torch.nn.modules.activation.Tanh'})
)
)
```
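The `Asym` block in the architecture above routes inputs through separate 50-dimensional projection heads (topic vs. title) on top of the shared 768-dimensional pooled embedding. A minimal torch sketch of that routing idea (hypothetical module and names, not this model's weights):

```python
import torch
import torch.nn as nn

class AsymHeads(nn.Module):
    """Route a shared 768-d pooled embedding through per-input-type 50-d heads."""
    def __init__(self):
        super().__init__()
        self.heads = nn.ModuleDict({
            "topic": nn.Sequential(nn.Linear(768, 50, bias=False), nn.Tanh()),
            "title": nn.Sequential(nn.Linear(768, 50, bias=False), nn.Tanh()),
        })

    def forward(self, pooled, kind):
        return self.heads[kind](pooled)

heads = AsymHeads()
pooled = torch.randn(2, 768)        # stand-in mean-pooled sentence embeddings
topic_vec = heads(pooled, "topic")  # shape (2, 50)
title_vec = heads(pooled, "title")  # shape (2, 50)
print(topic_vec.shape, title_vec.shape)
```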
## Citing & Authors
<!--- Describe where people can find more information -->
|
robertou2/roberta-base-bne-finetuned-amazon_reviews_multi
|
robertou2
| 2022-03-14T09:17:59Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-14T08:34:50Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
metrics:
- accuracy
model-index:
- name: roberta-base-bne-finetuned-amazon_reviews_multi
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: amazon_reviews_multi
type: amazon_reviews_multi
args: es
metrics:
- name: Accuracy
type: accuracy
value: 0.9325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-amazon_reviews_multi
This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2368
- Accuracy: 0.9325
## Model description
A test model for session 4 of the "NLP de 0 a 100" course.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1919 | 1.0 | 1250 | 0.1690 | 0.933 |
| 0.0972 | 2.0 | 2500 | 0.2368 | 0.9325 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
lijingxin/distilbert-base-uncased-finetuned-clinc
|
lijingxin
| 2022-03-14T09:09:37Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-14T09:05:40Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7755
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2992 | 1.0 | 318 | 3.2969 | 0.7339 |
| 2.6329 | 2.0 | 636 | 1.8817 | 0.8235 |
| 1.5442 | 3.0 | 954 | 1.1561 | 0.8939 |
| 1.0132 | 4.0 | 1272 | 0.8595 | 0.9103 |
| 0.7953 | 5.0 | 1590 | 0.7755 | 0.9161 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.10.2
- Datasets 1.16.1
- Tokenizers 0.10.3
|
z-uo/led-base-qasper
|
z-uo
| 2022-03-14T09:04:40Z | 4 | 0 |
transformers
|
[
"transformers",
"tensorboard",
"question_answering",
"en",
"dataset:qasper",
"endpoints_compatible",
"region:us"
] | null | 2022-03-11T18:27:48Z |
---
language: en
tags:
- question_answering
datasets:
- qasper
---
# led-base for QA with qasper
A 10-epoch training run of [Longformer Encoder Decoder Baselines for Qasper](https://github.com/allenai/qasper-led-baseline).
## How to use
```
git clone https://github.com/allenai/qasper-led-baseline.git
cd qasper-led-baseline
git clone https://huggingface.co/z-uo/led-base-qasper
pip install -r requirements.txt
# TODO test
python scripts/sample_qasper_answers.py --model led-base-qasper --data qasper-dev-v0.2.json --samples 10 --out test_only.log
```
|
holtin/distilbert-base-uncased-holtin-finetuned-squad
|
holtin
| 2022-03-14T08:09:33Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad_v2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-14T07:57:59Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad_v2
model-index:
- name: distilbert-base-uncased-holtin-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-holtin-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.8541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 84 | 4.4978 |
| No log | 2.0 | 168 | 3.9588 |
| No log | 3.0 | 252 | 3.8541 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ComCom/skt_kogpt2-base-v2
|
ComCom
| 2022-03-14T07:37:27Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"jax",
"gpt2",
"text-generation",
"ko",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-14T06:28:29Z |
---
language: ko
tags:
- gpt2
license: cc-by-nc-sa-4.0
---
- This model was forked from [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2).
- You can use this model in [Teachable-NLP](https://ainize.ai/teachable-nlp).
For more details: https://github.com/SKT-AI/KoGPT2
|
armytun/GoodFoodPicker
|
armytun
| 2022-03-14T07:20:09Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-14T07:20:09Z |
---
license: apache-2.0
---
|
BAHIJA/bert-base-uncased-finetuned-sst2
|
BAHIJA
| 2022-03-14T05:48:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-14T04:52:41Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: bert-base-uncased-finetuned-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.9346330275229358
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-sst2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2745
- Accuracy: 0.9346
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1778 | 1.0 | 4210 | 0.3553 | 0.9060 |
| 0.1257 | 2.0 | 8420 | 0.2745 | 0.9346 |
| 0.0779 | 3.0 | 12630 | 0.3272 | 0.9300 |
| 0.0655 | 4.0 | 16840 | 0.3412 | 0.9323 |
| 0.0338 | 5.0 | 21050 | 0.3994 | 0.9300 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
katanaml/layoutlmv2-finetuned-cord
|
katanaml
| 2022-03-13T22:01:58Z | 1,073 | 3 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"layoutlmv2",
"token-classification",
"generated_from_trainer",
"dataset:katanaml/cord",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-06T20:44:36Z |
---
license: cc-by-nc-sa-4.0
datasets:
- katanaml/cord
tags:
- generated_from_trainer
model-index:
- name: layoutlmv2-finetuned-cord
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# layoutlmv2-finetuned-cord
This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the CORD dataset.
## Model description
Model implementation code [Sparrow](https://github.com/katanaml/sparrow)
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 3000
- mixed_precision_training: Native AMP
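The linear schedule with a 0.1 warmup ratio over 3000 training steps can be sketched in plain Python. This is an illustration of the schedule's shape, not the Trainer's internal implementation:

```python
def linear_schedule_with_warmup(step, total_steps=3000, warmup_ratio=0.1, peak_lr=5e-05):
    """Ramp the LR up linearly during warmup, then decay it linearly to zero."""
    warmup_steps = int(total_steps * warmup_ratio)  # 300 steps here
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    # linear decay from peak_lr at the end of warmup down to 0 at total_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

print(linear_schedule_with_warmup(0))     # 0.0 at the first step
print(linear_schedule_with_warmup(300))   # peak LR right after warmup
print(linear_schedule_with_warmup(3000))  # 0.0 at the final step
```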
### Training results
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Taekyoon/komrc_train
|
Taekyoon
| 2022-03-13T15:11:14Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"dataset:korquad",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-13T12:22:58Z |
---
tags:
- generated_from_trainer
datasets:
- korquad
model-index:
- name: komrc_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# komrc_train
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the korquad dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6544
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1234
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.8187 | 0.31 | 2000 | 0.7377 |
| 0.6947 | 0.63 | 4000 | 0.6934 |
| 0.6352 | 0.94 | 6000 | 0.6544 |
| 0.3869 | 1.25 | 8000 | 0.7633 |
| 0.3812 | 1.56 | 10000 | 0.7047 |
| 0.3579 | 1.88 | 12000 | 0.7097 |
| 0.2053 | 2.19 | 14000 | 0.8511 |
| 0.2173 | 2.5 | 16000 | 0.8457 |
| 0.2094 | 2.82 | 18000 | 0.8433 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.10.3
|
Ramu/distilbert-base-uncased-finetuned-emotion
|
Ramu
| 2022-03-13T14:27:54Z | 7 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-13T01:55:37Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9262005126757141
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2167
- Accuracy: 0.926
- F1: 0.9262
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8112 | 1.0 | 250 | 0.3147 | 0.903 | 0.8992 |
| 0.2454 | 2.0 | 500 | 0.2167 | 0.926 | 0.9262 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.8.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
|
avorozhko/ruDialoGpt3-medium-finetuned-context
|
avorozhko
| 2022-03-13T11:41:17Z | 8 | 1 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-02T23:29:05Z |
## Model description
This chatbot is the graduation project of Andrey Vorozhko, a student at the University of Artificial Intelligence (UAI).
Training was completed in March 2022.
The chatbot is built on top of the [Kirili4ik/ruDialoGpt3-medium-finetuned-telegram](https://huggingface.co/Kirili4ik/ruDialoGpt3-medium-finetuned-telegram) model.
The model has since been fine-tuned on 27,000 jokes (14 epochs, at a Colab training speed of 2-6 hours per epoch) and can follow the context of a conversation. The context has to be limited to the last few messages, however, because the more context there is, the slower the model runs, and the context snowballs as the conversation goes on.
Inference is hosted in [spaces](https://huggingface.co/spaces/avorozhko/funbot):
There you can talk to the bot. The context is limited to the last 10 messages.
The bot does tell jokes, but for now more by accident than by design. It can nonetheless keep a conversation going and even be mildly entertaining.
Since this is text generation, the bot will always produce different answers to the same phrase.
A custom metric was also used to assess the quality of this model: the angular distance between the embeddings of y_train and the predictions.
That is, we took the model's first embedding layer and ran the predictions and labels through it to obtain word vectors. The word vectors were then summed to produce aggregate (summed) vectors for the labels and the predictions. The smaller the angle between them, the better. In the calculations we used the cosine of this angle: since cos 0 = 1, this is very convenient - the closer the value is to 1, the better.
Here is the distribution of these values across epochs on the VALIDATION set (1406 jokes):
```
{1: tensor(0.9357, device='cuda:0', grad_fn=<DivBackward0>),
2: tensor(0.9390, device='cuda:0', grad_fn=<DivBackward0>),
3: tensor(0.9417, device='cuda:0', grad_fn=<DivBackward0>),
4: tensor(0.9439, device='cuda:0', grad_fn=<DivBackward0>),
5: tensor(0.9470, device='cuda:0', grad_fn=<DivBackward0>),
6: tensor(0.9537, device='cuda:0', grad_fn=<DivBackward0>),
7: tensor(0.9568, device='cuda:0', grad_fn=<DivBackward0>),
8: tensor(0.9592, device='cuda:0', grad_fn=<DivBackward0>),
9: tensor(0.9610, device='cuda:0', grad_fn=<DivBackward0>),
10: tensor(0.9622, device='cuda:0', grad_fn=<DivBackward0>),
11: tensor(0.9628, device='cuda:0', grad_fn=<DivBackward0>),
12: tensor(0.9632, device='cuda:0', grad_fn=<DivBackward0>),
13: tensor(0.9630, device='cuda:0', grad_fn=<DivBackward0>),
14: tensor(0.9634, device='cuda:0', grad_fn=<DivBackward0>),
15: tensor(0.9634, device='cuda:0', grad_fn=<DivBackward0>)}
```
Epoch 14, with a score of 0.9634, was chosen for inference. Beyond that point, the model apparently starts to overfit.
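The metric described above can be sketched as follows. This is a simplified illustration with toy 2-d vectors; the real computation runs word vectors from the model's first embedding layer through the same steps:

```python
import math

def cosine_of_summed_vectors(label_vectors, pred_vectors):
    """Sum each side's word vectors, then take the cosine of the angle between the sums."""
    label_sum = [sum(dim) for dim in zip(*label_vectors)]
    pred_sum = [sum(dim) for dim in zip(*pred_vectors)]
    dot = sum(a * b for a, b in zip(label_sum, pred_sum))
    norm = math.hypot(*label_sum) * math.hypot(*pred_sum)
    return dot / norm  # 1.0 means the summed vectors point the same way

# toy 2-d "word embeddings" for a label sequence and a predicted sequence
labels = [[1.0, 0.0], [0.0, 1.0]]
preds = [[1.0, 0.1], [0.0, 1.0]]
print(round(cosine_of_summed_vectors(labels, preds), 4))
```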
|
cammy/bart-large-cnn-100-lit-evalMA-NOpad2
|
cammy
| 2022-03-13T11:11:08Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-13T10:56:35Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-100-lit-evalMA-NOpad2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-100-lit-evalMA-NOpad2
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2126
- Rouge1: 25.6196
- Rouge2: 7.2753
- Rougel: 18.0987
- Rougelsum: 20.8416
- Gen Len: 67.3
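The Rouge1 figures above are unigram-overlap F1 scores. A toy sketch of the computation (a simplified illustration, not the exact `rouge_score` implementation, which also applies stemming and other normalization):

```python
from collections import Counter

def rouge1_f1(prediction, reference):
    """Unigram-overlap F1 between a predicted and a reference summary."""
    pred_counts = Counter(prediction.lower().split())
    ref_counts = Counter(reference.lower().split())
    overlap = sum((pred_counts & ref_counts).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(pred_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the cat sat on the mat", "the cat lay on the mat"), 4))  # 0.8333
```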
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 1.0890 | 23.5493 | 8.9875 | 17.1471 | 20.1643 | 67.8 |
| No log | 2.0 | 200 | 1.2126 | 25.6196 | 7.2753 | 18.0987 | 20.8416 | 67.3 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cammy/bart-large-cnn-1000-lit-evalMA-NOpad
|
cammy
| 2022-03-13T10:50:26Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-13T10:08:09Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-1000-lit-evalMA-NOpad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-1000-lit-evalMA-NOpad
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9804
- Rouge1: 27.2698
- Rouge2: 11.8561
- Rougel: 20.5948
- Rougelsum: 23.5497
- Gen Len: 67.67
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.5372 | 1.0 | 1000 | 1.7499 | 27.7275 | 12.7894 | 21.1334 | 24.4929 | 66.31 |
| 0.7344 | 2.0 | 2000 | 1.9804 | 27.2698 | 11.8561 | 20.5948 | 23.5497 | 67.67 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
anasaqsme/distilbert-base-uncased-finetuned-squad
|
anasaqsme
| 2022-03-13T08:15:26Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
cammy/bart-large-cnn-weaksup-1000-NOpad-early
|
cammy
| 2022-03-13T05:51:27Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-13T05:36:31Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-weaksup-1000-NOpad-early
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-weaksup-1000-NOpad-early
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.9082
- Rouge1: 26.9663
- Rouge2: 11.3027
- Rougel: 20.7327
- Rougelsum: 23.5965
- Gen Len: 67.19
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.4775 | 1.0 | 1000 | 1.6796 | 27.208 | 12.01 | 20.8401 | 24.1333 | 66.06 |
| 0.6972 | 2.0 | 2000 | 1.9082 | 26.9663 | 11.3027 | 20.7327 | 23.5965 | 67.19 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
cammy/bart-large-cnn-weaksup-100-NOpad-early
|
cammy
| 2022-03-13T05:24:09Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-13T05:23:53Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-weaksup-100-NOpad-early
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-weaksup-100-NOpad-early
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0768
- Rouge1: 28.7908
- Rouge2: 10.6989
- Rougel: 20.534
- Rougelsum: 24.1294
- Gen Len: 68.5
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 100 | 1.8905 | 31.1534 | 13.7074 | 21.6489 | 27.0709 | 64.2 |
| No log | 2.0 | 200 | 2.0768 | 28.7908 | 10.6989 | 20.534 | 24.1294 | 68.5 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
khavitidala/xlmroberta-large-fine-tuned-indo-hoax-classification
|
khavitidala
| 2022-03-13T02:01:19Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"exbert",
"multilingual",
"arxiv:1911.02116",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text-classification
| 2022-03-12T12:40:20Z |
---
tags:
- exbert
language: multilingual
inference: true
license: mit
---
# Fine-tuned version of XLM-RoBERTa (large-sized model)
Fine-tuned by Ryan Abdurohman
# XLM-RoBERTa (large-sized model)
XLM-RoBERTa model pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages. It was introduced in the paper [Unsupervised Cross-lingual Representation Learning at Scale](https://arxiv.org/abs/1911.02116) by Conneau et al. and first released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/xlmr).
Disclaimer: The team releasing XLM-RoBERTa did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
XLM-RoBERTa is a multilingual version of RoBERTa. It is pre-trained on 2.5TB of filtered CommonCrawl data containing 100 languages.
RoBERTa is a transformers model pretrained on a large corpus in a self-supervised fashion. This means it was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of publicly available data) with an automatic process to generate inputs and labels from those texts.
More precisely, it was pretrained with the masked language modeling (MLM) objective. Taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence.
This way, the model learns an inner representation of 100 languages that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard classifier using the features produced by the XLM-RoBERTa model as inputs.
## Intended uses & limitations
You can use the raw model for masked language modeling, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?search=xlm-roberta) to look for fine-tuned versions on a task that interests you.
Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation, you should look at models like GPT2.
## Usage
You can use this model directly with a pipeline for masked language modeling:
```python
>>> from transformers import pipeline
>>> unmasker = pipeline('fill-mask', model='xlm-roberta-large')
>>> unmasker("Hello I'm a <mask> model.")
[{'score': 0.10563907772302628,
'sequence': "Hello I'm a fashion model.",
'token': 54543,
'token_str': 'fashion'},
{'score': 0.08015287667512894,
'sequence': "Hello I'm a new model.",
'token': 3525,
'token_str': 'new'},
{'score': 0.033413201570510864,
'sequence': "Hello I'm a model model.",
'token': 3299,
'token_str': 'model'},
{'score': 0.030217764899134636,
'sequence': "Hello I'm a French model.",
'token': 92265,
'token_str': 'French'},
{'score': 0.026436051353812218,
'sequence': "Hello I'm a sexy model.",
'token': 17473,
'token_str': 'sexy'}]
```
Here is how to use this model to get the features of a given text in PyTorch:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-large")
# prepare input
text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
# forward pass
output = model(**encoded_input)
```
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-1911-02116,
author = {Alexis Conneau and
Kartikay Khandelwal and
Naman Goyal and
Vishrav Chaudhary and
Guillaume Wenzek and
Francisco Guzm{\'{a}}n and
Edouard Grave and
Myle Ott and
Luke Zettlemoyer and
Veselin Stoyanov},
title = {Unsupervised Cross-lingual Representation Learning at Scale},
journal = {CoRR},
volume = {abs/1911.02116},
year = {2019},
url = {http://arxiv.org/abs/1911.02116},
eprinttype = {arXiv},
eprint = {1911.02116},
timestamp = {Mon, 11 Nov 2019 18:38:09 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-1911-02116.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
<a href="https://huggingface.co/exbert/?model=xlm-roberta-base">
<img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png">
</a>
|
willcai/wav2vec2_common_voice_accents
|
willcai
| 2022-03-13T01:55:11Z | 4 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-10T21:28:18Z |
---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- common_voice
model-index:
- name: wav2vec2_common_voice_accents
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2_common_voice_accents
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9095
- Wer: 0.4269
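The Wer figure above is the word error rate: the word-level edit distance divided by the reference length. A minimal sketch:

```python
def wer(reference, hypothesis):
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + cost)   # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the quick brown fox", "the quick brown dog"))  # 0.25
```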
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0003
- train_batch_size: 16
- eval_batch_size: 4
- seed: 42
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 4.0135 | 5.33 | 400 | 1.3259 | 0.8067 |
| 0.5608 | 10.67 | 800 | 0.7832 | 0.5024 |
| 0.1441 | 16.0 | 1200 | 0.9309 | 0.4698 |
| 0.0724 | 21.33 | 1600 | 0.9750 | 0.4461 |
| 0.0444 | 26.67 | 2000 | 0.9095 | 0.4269 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.4
- Tokenizers 0.11.6
|
cammy/bart-large-cnn-weaksup-original-100k
|
cammy
| 2022-03-13T00:10:30Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-12T12:19:39Z |
---
license: mit
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-cnn-weaksup-original-100k
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-weaksup-original-100k
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5931
- Rouge1: 30.4429
- Rouge2: 15.6691
- Rougel: 24.1975
- Rougelsum: 27.4761
- Gen Len: 68.4568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:------:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 1.261 | 1.0 | 100000 | 1.5931 | 30.4429 | 15.6691 | 24.1975 | 27.4761 | 68.4568 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
richielo/small-e-czech-finetuned-ner-wikiann
|
richielo
| 2022-03-12T20:18:42Z | 12,031 | 2 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"electra",
"token-classification",
"generated_from_trainer",
"dataset:wikiann",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-12T17:57:32Z |
---
license: cc-by-4.0
tags:
- generated_from_trainer
datasets:
- wikiann
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: small-e-czech-finetuned-ner-wikiann
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: wikiann
type: wikiann
args: cs
metrics:
- name: Precision
type: precision
value: 0.8713322894683097
- name: Recall
type: recall
value: 0.8970423324922905
- name: F1
type: f1
value: 0.8840004144075699
- name: Accuracy
type: accuracy
value: 0.9557089381093997
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# small-e-czech-finetuned-ner-wikiann
This model is a fine-tuned version of [Seznam/small-e-czech](https://huggingface.co/Seznam/small-e-czech) on the wikiann dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2547
- Precision: 0.8713
- Recall: 0.8970
- F1: 0.8840
- Accuracy: 0.9557
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.2924 | 1.0 | 2500 | 0.2449 | 0.7686 | 0.8088 | 0.7882 | 0.9320 |
| 0.2042 | 2.0 | 5000 | 0.2137 | 0.8050 | 0.8398 | 0.8220 | 0.9400 |
| 0.1699 | 3.0 | 7500 | 0.1912 | 0.8236 | 0.8593 | 0.8411 | 0.9466 |
| 0.1419 | 4.0 | 10000 | 0.1931 | 0.8349 | 0.8671 | 0.8507 | 0.9488 |
| 0.1316 | 5.0 | 12500 | 0.1892 | 0.8470 | 0.8776 | 0.8620 | 0.9519 |
| 0.1042 | 6.0 | 15000 | 0.2058 | 0.8433 | 0.8811 | 0.8618 | 0.9508 |
| 0.0884 | 7.0 | 17500 | 0.2020 | 0.8602 | 0.8849 | 0.8724 | 0.9531 |
| 0.0902 | 8.0 | 20000 | 0.2118 | 0.8551 | 0.8837 | 0.8692 | 0.9528 |
| 0.0669 | 9.0 | 22500 | 0.2171 | 0.8634 | 0.8906 | 0.8768 | 0.9550 |
| 0.0529 | 10.0 | 25000 | 0.2228 | 0.8638 | 0.8912 | 0.8773 | 0.9545 |
| 0.0613 | 11.0 | 27500 | 0.2293 | 0.8626 | 0.8898 | 0.8760 | 0.9544 |
| 0.0549 | 12.0 | 30000 | 0.2276 | 0.8694 | 0.8958 | 0.8824 | 0.9554 |
| 0.0516 | 13.0 | 32500 | 0.2384 | 0.8717 | 0.8940 | 0.8827 | 0.9552 |
| 0.0412 | 14.0 | 35000 | 0.2443 | 0.8701 | 0.8931 | 0.8815 | 0.9554 |
| 0.0345 | 15.0 | 37500 | 0.2464 | 0.8723 | 0.8958 | 0.8839 | 0.9557 |
| 0.0412 | 16.0 | 40000 | 0.2477 | 0.8705 | 0.8948 | 0.8825 | 0.9552 |
| 0.0363 | 17.0 | 42500 | 0.2525 | 0.8742 | 0.8973 | 0.8856 | 0.9559 |
| 0.0341 | 18.0 | 45000 | 0.2529 | 0.8727 | 0.8962 | 0.8843 | 0.9561 |
| 0.0194 | 19.0 | 47500 | 0.2533 | 0.8699 | 0.8966 | 0.8830 | 0.9557 |
| 0.0247 | 20.0 | 50000 | 0.2547 | 0.8713 | 0.8970 | 0.8840 | 0.9557 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
ABIINNOVATIONS/Filmstack
|
ABIINNOVATIONS
| 2022-03-12T18:53:50Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-12T18:53:50Z |
---
license: apache-2.0
---
|
rocca/informative-drawings-line-art-onnx
|
rocca
| 2022-03-12T17:59:37Z | 0 | 0 | null |
[
"onnx",
"region:us"
] | null | 2022-03-12T17:52:02Z |
All credit to this repo: https://huggingface.co/spaces/carolineec/informativedrawings
JavaScript/browser demo here: https://github.com/josephrocca/image-to-line-art-js
|
Babygirl/Daddy
|
Babygirl
| 2022-03-12T17:48:58Z | 0 | 1 | null |
[
"license:artistic-2.0",
"region:us"
] | null | 2022-03-12T17:48:58Z |
---
license: artistic-2.0
---
|
Sakil/Humanoid_robot
|
Sakil
| 2022-03-12T17:47:41Z | 0 | 0 | null |
[
"license:apache-2.0",
"region:us"
] | null | 2022-03-12T17:47:41Z |
---
license: apache-2.0
---
|
Taekyoon/neg_komrc_train
|
Taekyoon
| 2022-03-12T16:36:37Z | 13 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"question-answering",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] |
question-answering
| 2022-03-02T23:29:05Z |
---
tags:
- generated_from_trainer
model-index:
- name: neg_komrc_train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# neg_komrc_train
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4016
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 1234
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.277 | 0.51 | 10000 | 0.4016 |
| 0.1671 | 1.03 | 20000 | 0.4116 |
| 0.1725 | 1.54 | 30000 | 0.4390 |
| 0.0868 | 2.06 | 40000 | 0.5147 |
| 0.0868 | 2.57 | 50000 | 0.5064 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.10.3
|
StivenLancheros/Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en
|
StivenLancheros
| 2022-03-12T11:40:00Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-11T20:09:49Z |
---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Biobert-base-cased-v1.2-finetuned-ner-CRAFT_es_en
This model is a fine-tuned version of [dmis-lab/biobert-base-cased-v1.2](https://huggingface.co/dmis-lab/biobert-base-cased-v1.2) on the CRAFT dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1811
- Precision: 0.8555
- Recall: 0.8539
- F1: 0.8547
- Accuracy: 0.9706
## Model description
This model performs Named Entity Recognition for 6 entity tags: Sequence, Cell, Protein, Gene, Taxon, and Chemical, from the [CRAFT](https://github.com/UCDenver-ccp/CRAFT/releases) (Colorado Richly Annotated Full Text) corpus in Spanish and English.
Entity tags have been normalized and replaced, from the original three-letter codes to full names, e.g. B-Protein, I-Chemical.
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.052 | 1.0 | 1360 | 0.1413 | 0.8300 | 0.8442 | 0.8370 | 0.9677 |
| 0.0199 | 2.0 | 2720 | 0.1673 | 0.8461 | 0.8458 | 0.8459 | 0.9689 |
| 0.011 | 3.0 | 4080 | 0.1647 | 0.8588 | 0.8528 | 0.8558 | 0.9704 |
| 0.0031 | 4.0 | 5440 | 0.1811 | 0.8555 | 0.8539 | 0.8547 | 0.9706 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.11.6
|
Splend1dchan/deberta-large-slue-goldtrascription-e50
|
Splend1dchan
| 2022-03-12T10:30:29Z | 2 | 0 |
transformers
|
[
"transformers",
"pytorch",
"deberta",
"endpoints_compatible",
"region:us"
] | null | 2022-03-12T03:52:10Z |
DeBERTa large trained on SLUE gold transcriptions for 50 epochs, lr = 5e-6
|
sanchit-gandhi/wav2vec2-2-rnd-2-layer-bart
|
sanchit-gandhi
| 2022-03-12T03:02:56Z | 15 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"speech-encoder-decoder",
"automatic-speech-recognition",
"generated_from_trainer",
"dataset:librispeech_asr",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-10T20:56:10Z |
---
tags:
- generated_from_trainer
datasets:
- librispeech_asr
model-index:
- name: ''
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
#
This model was trained from scratch on the librispeech_asr dataset.
It achieves the following results on the evaluation set:
- Loss: 4.6263
- Wer: 0.8568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 5.9849 | 1.68 | 1500 | 5.9623 | 1.1028 |
| 5.1696 | 3.36 | 3000 | 5.5504 | 1.6345 |
| 4.1412 | 5.04 | 4500 | 5.3853 | 1.3565 |
| 2.7226 | 6.73 | 6000 | 5.3072 | 0.9908 |
| 3.2607 | 8.41 | 7500 | 5.4121 | 1.2854 |
| 2.4017 | 10.09 | 9000 | 5.1094 | 1.0303 |
| 1.7361 | 11.77 | 10500 | 4.8928 | 0.9506 |
| 2.0638 | 13.45 | 12000 | 4.8352 | 0.9127 |
| 1.2832 | 15.13 | 13500 | 4.7271 | 0.9103 |
| 1.0439 | 16.82 | 15000 | 4.5980 | 0.8720 |
| 0.4112 | 18.5 | 16500 | 4.6263 | 0.8568 |
### Framework versions
- Transformers 4.17.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.3
- Tokenizers 0.11.0
|
huggingtweets/thed3linquent_
|
huggingtweets
| 2022-03-11T22:57:28Z | 3 | 0 |
transformers
|
[
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text-generation
| 2022-03-11T22:57:19Z |
---
language: en
thumbnail: https://github.com/borisdayma/huggingtweets/blob/master/img/logo.png?raw=true
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1502166273064517632/RdLwNuR6_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">rogue || BIRFDAY BOY</div>
<div style="text-align: center; font-size: 14px;">@thed3linquent_</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from rogue || BIRFDAY BOY.
| Data | rogue || BIRFDAY BOY |
| --- | --- |
| Tweets downloaded | 3246 |
| Retweets | 334 |
| Short tweets | 710 |
| Tweets kept | 2202 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/1tal3g38/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @thed3linquent_'s tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/1aw76tml) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/1aw76tml/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/thed3linquent_')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
|
Ayham/albert_ernie_50beam_summarization_cnn_dailymail
|
Ayham
| 2022-03-11T21:58:56Z | 6 | 0 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-11T14:33:54Z |
---
tags:
- generated_from_trainer
datasets:
- cnn_dailymail
model-index:
- name: albert_ernie_summarization_cnn_dailymail
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# albert_ernie_summarization_cnn_dailymail
This model is a fine-tuned version of [](https://huggingface.co/) on the cnn_dailymail dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 3.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.0.dev0
- Pytorch 1.10.0+cu111
- Datasets 1.18.4
- Tokenizers 0.10.3
|
GroNLP/wav2vec2-dutch-base
|
GroNLP
| 2022-03-11T16:04:18Z | 58 | 0 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"nl",
"endpoints_compatible",
"region:us"
] | null | 2022-03-11T15:43:01Z |
---
language: nl
tags:
- speech
---
# Wav2Vec2-Dutch-Base
A Dutch Wav2Vec2 model. This model was created by further pre-training the original English [`facebook/wav2vec2-base`](https://huggingface.co/facebook/wav2vec2-base) model on Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
This model is one of two Dutch Wav2Vec2 models:
- [`GroNLP/wav2vec2-dutch-base`](https://huggingface.co/GroNLP/wav2vec2-dutch-base) (this model)
- [`GroNLP/wav2vec2-dutch-large`](https://huggingface.co/GroNLP/wav2vec2-dutch-large)
|
GroNLP/wav2vec2-dutch-large
|
GroNLP
| 2022-03-11T16:04:07Z | 14 | 2 |
transformers
|
[
"transformers",
"pytorch",
"wav2vec2",
"pretraining",
"speech",
"nl",
"endpoints_compatible",
"region:us"
] | null | 2022-03-11T15:41:51Z |
---
language: nl
tags:
- speech
---
# Wav2Vec2-Dutch-Large
A Dutch Wav2Vec2 model. This model was created by further pre-training the original English [`facebook/wav2vec2-large`](https://huggingface.co/facebook/wav2vec2-large) model on Dutch speech from [Het Corpus Gesproken Nederlands](https://taalmaterialen.ivdnt.org/download/tstc-corpus-gesproken-nederlands/).
This model is one of two Dutch Wav2Vec2 models:
- [`GroNLP/wav2vec2-dutch-base`](https://huggingface.co/GroNLP/wav2vec2-dutch-base)
- [`GroNLP/wav2vec2-dutch-large`](https://huggingface.co/GroNLP/wav2vec2-dutch-large) (this model)
|
anton-l/xtreme_s_xlsr_minds14_fr
|
anton-l
| 2022-03-11T13:39:16Z | 5 | 1 |
transformers
|
[
"transformers",
"pytorch",
"tensorboard",
"wav2vec2",
"audio-classification",
"automatic-speech-recognition",
"google/xtreme_s",
"generated_from_trainer",
"dataset:xtreme_s",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] |
automatic-speech-recognition
| 2022-03-08T20:17:30Z |
---
license: apache-2.0
tags:
- automatic-speech-recognition
- google/xtreme_s
- generated_from_trainer
datasets:
- xtreme_s
metrics:
- accuracy
model-index:
- name: xtreme_s_xlsr_minds14_fr
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xtreme_s_xlsr_minds14_fr
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the GOOGLE/XTREME_S - MINDS14.FR-FR dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3922
- Accuracy: 0.9135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- distributed_type: multi-GPU
- num_devices: 2
- total_train_batch_size: 64
- total_eval_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 50.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.9751 | 10.0 | 50 | 2.0203 | 0.3462 |
| 0.4275 | 20.0 | 100 | 0.7434 | 0.7981 |
| 0.2484 | 30.0 | 150 | 0.7686 | 0.8462 |
| 0.0263 | 40.0 | 200 | 0.3922 | 0.9135 |
| 0.0118 | 50.0 | 250 | 0.4859 | 0.9038 |
### Framework versions
- Transformers 4.18.0.dev0
- Pytorch 1.10.2+cu113
- Datasets 1.18.4.dev0
- Tokenizers 0.11.6
|
leftthomas/resnet50
|
leftthomas
| 2022-03-11T12:53:14Z | 83 | 0 |
transformers
|
[
"transformers",
"pytorch",
"resnet",
"image-classification",
"custom_code",
"dataset:imagenet",
"arxiv:1512.03385",
"license:afl-3.0",
"autotrain_compatible",
"region:us"
] |
image-classification
| 2022-03-02T23:29:05Z |
---
tags:
- image-classification
- resnet
license: afl-3.0
datasets:
- imagenet
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
---
# ResNet-50
Pretrained model on [ImageNet](http://www.image-net.org/). The ResNet architecture was introduced in
[this paper](https://arxiv.org/abs/1512.03385).
## Intended uses
You can use the raw model to classify images into the 1,000 ImageNet classes, but you can also change its head
to fine-tune it on a downstream task (another classification task with different labels, image segmentation, or
object detection, to name a few).
## Evaluation results
This model has a top-1 accuracy of 76.13% and a top-5 accuracy of 92.86% on the ImageNet evaluation set.
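Top-1/top-5 accuracy can be reproduced from per-image class scores; a minimal, framework-independent sketch (with illustrative toy data, not ImageNet):

```python
def top_k_accuracy(scores, labels, k=1):
    """Fraction of samples whose true label is among the k highest-scoring classes."""
    hits = 0
    for sample_scores, label in zip(scores, labels):
        # Indices of the k largest scores, highest first.
        top_k = sorted(range(len(sample_scores)),
                       key=lambda i: sample_scores[i], reverse=True)[:k]
        hits += label in top_k
    return hits / len(labels)

scores = [[0.1, 0.7, 0.2],   # highest score on class 1
          [0.5, 0.3, 0.2]]   # highest score on class 0
labels = [1, 1]
print(top_k_accuracy(scores, labels, k=1))  # only the first sample is a top-1 hit
print(top_k_accuracy(scores, labels, k=2))  # both labels fall within the top 2
```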
|
ratishsp/SeqPlan-RotoWire
|
ratishsp
| 2022-03-11T12:26:18Z | 0 | 0 | null |
[
"arxiv:2202.13756",
"region:us"
] | null | 2022-03-11T12:20:13Z |
This repo contains the model for [Data-to-text Generation with Variational Sequential Planning](https://arxiv.org/abs/2202.13756) (Ratish Puduppully, Yao Fu and Mirella Lapata; Transactions of the Association for Computational Linguistics (TACL)). This model is trained on the [RotoWire dataset](https://github.com/harvardnlp/boxscore-data). The code is available in the GitHub [repo](https://github.com/ratishsp/data2text-seq-plan-py).
## Citation
```
@article{puduppully-2021-seq-plan,
author = {Ratish Puduppully and Yao Fu and Mirella Lapata},
title = {Data-to-text Generation with Variational Sequential Planning},
journal = {Transactions of the Association for Computational Linguistics (to appear)},
url = {https://arxiv.org/abs/2202.13756},
year = {2022}
}
```
## License
The model is available under the MIT License.
|
ratishsp/SeqPlan-MLB
|
ratishsp
| 2022-03-11T12:08:06Z | 0 | 0 | null |
[
"arxiv:2202.13756",
"region:us"
] | null | 2022-03-11T11:54:01Z |
This repo contains the model for [Data-to-text Generation with Variational Sequential Planning](https://arxiv.org/abs/2202.13756) (Ratish Puduppully, Yao Fu and Mirella Lapata; Transactions of the Association for Computational Linguistics (TACL)). This model is trained on the [MLB dataset](https://huggingface.co/datasets/GEM/mlb_data_to_text). The code is available in the GitHub [repo](https://github.com/ratishsp/data2text-seq-plan-py).
## Citation
```
@article{puduppully-2021-seq-plan,
author = {Ratish Puduppully and Yao Fu and Mirella Lapata},
title = {Data-to-text Generation with Variational Sequential Planning},
journal = {Transactions of the Association for Computational Linguistics (to appear)},
url = {https://arxiv.org/abs/2202.13756},
year = {2022}
}
```
## License
The model is available under the MIT License.
|
cammy/bart-large-cnn-100k-lit-evalMA
|
cammy
| 2022-03-11T10:34:13Z | 5 | 0 |
transformers
|
[
"transformers",
"pytorch",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-10T04:44:55Z |
---
license: mit
tags:
- generated_from_trainer
model-index:
- name: bart-large-cnn-100k-lit-evalMA
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-100k-lit-evalMA
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.7715
- eval_rouge1: 29.7037
- eval_rouge2: 15.0234
- eval_rougeL: 23.5169
- eval_rougeLsum: 26.8682
- eval_gen_len: 68.1209
- eval_runtime: 28898.0987
- eval_samples_per_second: 0.346
- eval_steps_per_second: 0.346
- epoch: 1.0
- step: 100000
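The eval_rouge* numbers above come from the standard ROUGE overlap family; as a rough illustration of what ROUGE-1 measures (a simplified sketch, not the stemmed and tokenized `rouge_score` implementation):

```python
from collections import Counter

def rouge1_f(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a candidate summary and a reference."""
    cand, ref = Counter(candidate.split()), Counter(reference.split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(rouge1_f("the cat sat", "the cat sat on the mat"))
```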
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
|
waboucay/french-camembert-postag-model-finetuned-perceo
|
waboucay
| 2022-03-11T09:37:32Z | 4 | 2 |
transformers
|
[
"transformers",
"pytorch",
"camembert",
"token-classification",
"pos-tagging",
"fr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] |
token-classification
| 2022-03-02T23:29:05Z |
---
language:
- fr
tags:
- pos-tagging
---
## Eval results
We obtain the following results on the ```validation``` and ```test``` sets:
| Set | F1<sub>micro</sub> | F1<sub>macro</sub> |
|------------|--------------------|--------------------|
| validation | 98.2 | 93.2 |
| test | 97.7 | 87.4 |
|
everdoubling/byt5-Korean-large
|
everdoubling
| 2022-03-11T09:16:25Z | 10 | 2 |
transformers
|
[
"transformers",
"pytorch",
"t5",
"text2text-generation",
"dataset:mc4",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] |
text2text-generation
| 2022-03-04T09:03:25Z |
---
datasets:
- mc4
license: apache-2.0
---
# ByT5-Korean - large
ByT5-Korean is a Korean-specific extension of Google's [ByT5](https://github.com/google-research/byt5).
A Korean syllable has three components (called Jamo): a beginning consonant, a middle vowel, and an optional final consonant; they function like the individual letters of an alphabet.
While ByT5's utf-8 encoding allows generic encoding for multiple languages, it is unnatural for Korean because it splits the bit representation of each Jamo in the middle.
ByT5-Korean extends ByT5's utf-8 encoding with special care for Korean syllables; each Jamo is represented with an extra token.
ByT5-Korean was pre-trained on [mC4](https://www.tensorflow.org/datasets/catalog/c4#c4multilingual) with 70% Korean and 30% English.
## Encoding Scheme
```text
id: token
0: <pad>
1: <eos>
2: <unk>
3~258: utf-8 encoding
259~277: beginning consonants(초성), 19개(ㄱㄲㄴㄷㄸㄹㅁㅂㅃㅅㅆㅇㅈㅉㅊㅋㅌㅍㅎ)
278~298: middle vowels(중성), 21개(ㅏㅐㅑㅒㅓㅔㅕㅖㅗㅘㅙㅚㅛㅜㅝㅞㅟㅠㅡㅢㅣ)
299~326: final consonants(종성), 무종성+27개(ㄱㄲㄳㄴㄵㄶㄷㄹㄺㄻㄼㄽㄾㄿㅀㅁㅂㅄㅅㅆㅇㅈㅊㅋㅌㅍㅎ)
327~384: from <extra_id_0> to <extra_id_57>
```
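The arithmetic behind this scheme follows the standard Unicode Hangul decomposition. As a rough illustration (a hypothetical sketch, not the actual `ByT5KoreanTokenizer`), a precomposed syllable can be mapped to the ids above like this:

```python
# Hypothetical sketch: map one character to Jamo-level token ids per the
# table above. The real ByT5KoreanTokenizer may differ in details.
INITIAL_BASE, MEDIAL_BASE, FINAL_BASE = 259, 278, 299
HANGUL_FIRST, HANGUL_LAST = 0xAC00, 0xD7A3

def syllable_to_ids(ch: str) -> list[int]:
    """Decompose a precomposed Hangul syllable into (initial, medial, final) ids."""
    code = ord(ch)
    if not HANGUL_FIRST <= code <= HANGUL_LAST:
        # Non-Hangul characters fall back to plain ByT5 utf-8 bytes (offset 3).
        return [b + 3 for b in ch.encode("utf-8")]
    index = code - HANGUL_FIRST
    # Unicode packs syllables as ((initial * 21) + medial) * 28 + final.
    initial, rest = divmod(index, 21 * 28)
    medial, final = divmod(rest, 28)
    # FINAL_BASE itself encodes "no final consonant" (무종성, final == 0).
    return [INITIAL_BASE + initial, MEDIAL_BASE + medial, FINAL_BASE + final]

print(syllable_to_ids("한"))  # 한 = ㅎ + ㅏ + ㄴ
```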
## Example Inference
```python
import torch
from tokenizer import ByT5KoreanTokenizer # https://huggingface.co/everdoubling/byt5-Korean-large/blob/main/tokenizer.py
from transformers import T5ForConditionalGeneration
tokenizer_jamo = ByT5KoreanTokenizer()
model = T5ForConditionalGeneration.from_pretrained('everdoubling/byt5-Korean-large')
input_sentence = '한국어 위키백과(영어: Korean Wikipedia)는 한국어로 운영되는 위키백과의 다언어판 가운데 하나로서, 2002년 10월 11일에 <extra_id_0>. 또한 현재 한국어 위키백과에는 넘겨주기, 토론, 그림 등 페이지로 불리는 모든 문서를 포함하면 총 2,629,860개가 <extra_id_1>되어 있으며, 넘겨주기를 포함한 일반 문서 수는 1,278,560개,[1] 그중 넘겨주기, 막다른 문서를 제외한 일반 문서 수는 573,149개이다.'
input_ids_jamo = tokenizer_jamo(input_sentence).input_ids
outputs_jamo = model.generate(torch.tensor([input_ids_jamo]))
print(tokenizer_jamo.decode(outputs_jamo[0]))
# <pad><extra_id_0>설립되었다<extra_id_1>
```
Additional information coming soon...
|